
Teach Yourself CORBA In 14 Days


Day 1
Getting Familiar with CORBA

The Purpose of This Book

Certainly, this isn't the first book to be written on the subject of the Common Object Request Broker Architecture (CORBA)--not by a long shot. However, among CORBA books currently on shelves, it might be unique in its approach. At the time this book was written, few, if any, texts were available that covered CORBA at an introductory level. This book attempts to fill that gap.

CORBA is not a subject for the fainthearted, to be sure. Although development tools that hide some of the complexity of CORBA exist, if you embark on a project to develop a reasonably sophisticated CORBA application, chances are that you will experience some of CORBA's complexities firsthand. However, though there might be a steep learning curve associated with CORBA, a working knowledge of CORBA fundamentals is well within the grasp of any competent programmer.

For the purposes of this book, it is assumed that you already have a good deal of programming experience. CORBA is a language-independent architecture, but because C++ and Java are the principal languages used to develop CORBA applications, it would be preferable if you had experience with one of these languages. (Most of the examples are written in C++, with a healthy dose of Java thrown in for good measure.) It wouldn't hurt if you were familiar with object-oriented analysis and design concepts either, but just in case you need a refresher, this book will help you review these concepts.

Operating under the assumption that learning CORBA is a surmountable (if daunting) goal for most programmers, this book begins teaching the fundamentals of CORBA, starting with an overview of the architecture. You'll then move on to a primer on the Interface Definition Language (IDL), a cornerstone on which most CORBA applications are based. After that, you'll start building CORBA applications, and before you know it, you'll be exposed to advanced concepts and design issues, along with other useful things such as CORBAservices, CORBAfacilities, and the Dynamic Invocation Interface, or DII (don't worry--you'll learn what all this means, in due time). All this--and more--in a mere 14 days.

What this book does not do--indeed, cannot do--is make you a CORBA expert overnight (or even in 14 days, for that matter). It does put you well on your way to mastering CORBA. Keep in mind that CORBA is a complex architecture, full of design issues and tradeoffs as well as implementation nuances. As such, it can only be mastered through experience--something you will gain only by designing and developing CORBA applications. Perhaps this book does not make you an expert in all things CORBA, but it does put you on the right track toward achieving that goal.

Background: History of Distributed Systems

If you're interested enough in CORBA to be reading this book, you probably know a thing or two already about distributed systems. Distributed systems have been around, in one form or another, for some time, although they haven't always been called that and they certainly haven't always had the flexibility that they do now. To discover where CORBA fits in, let's briefly review the history of distributed systems, starting with the venerable mainframe.

The Beginning: Monolithic Systems and Mainframes

In the beginning (or close to it), there was the mainframe. Along with it came hierarchical database systems and dumb terminals, also known as green screens. Mainframes usually cost a great deal to maintain but were capable of serving large numbers of users and had the advantage (or disadvantage, depending on one's point of view) of being centrally managed.

Software systems written for mainframes were often monolithic--that is, the user interface, business logic, and data access functionality were all contained in one large application. Because the dumb terminals used to access mainframes didn't do any of their own processing, the entire application ran in the mainframe itself, thus making the monolithic architecture reasonable. A typical monolithic application architecture is illustrated in Figure 1.1.

Figure 1.1. Typical monolithic application architecture.

The Revolution: Client/Server Architecture

The advent of the PC made possible a dramatic paradigm shift from the monolithic architecture of mainframe-based applications. Whereas these applications required the mainframe itself to perform all the processing, applications based on the client/server architecture allowed some of that processing to be offloaded to PCs on the users' desktops.

Along with the client/server revolution came the proliferation of UNIX-based servers. Many applications simply did not require the massive power of mainframes, and because the client/server architecture could move much of the processing load to the desktop PC, these smaller UNIX-based server machines were often more cost-effective than mainframes and far more affordable to small businesses, for whom mainframes were often simply out of reach. Still another benefit was that individual departments within an organization could deploy and manage their own servers, letting them respond to their own needs when developing applications rather than jumping through proverbial hoops to get the department controlling the mainframe to develop applications, as was often the case. Finally, whereas terminals were typically restricted to running only applications on the mainframe, a PC was capable of performing many other tasks independently of the mainframe, further enhancing its usefulness as a desktop machine.

Client/server applications typically distributed the components of the application so that the database would reside on the server (whether a UNIX box or mainframe), the user interface would reside on the client, and the business logic would reside in either, or both, components. When changes were made to parts of the client component, new copies of the client component (usually executables or a set of executables) had to be distributed to each user.

With the advent of multitier client/server architecture (discussed in the next section), the "original" client/server architecture is now referred to as "two-tier" client/server. The two-tier client/server architecture is illustrated in Figure 1.2.

Figure 1.2. Two-tier client/server architecture.

The Evolution: Multitier Client/Server

The client/server architecture was in many ways a revolution from the old way of doing things. Despite solving the problems with mainframe-based applications, however, client/server was not without faults of its own. For example, because database access functionality (such as embedded database queries) and business logic were often contained in the client component, any changes to the business logic, database access, or even the database itself, often required the deployment of a new client component to all the users of the application. Usually, such changes would break earlier versions of the client component, resulting in a fragile application.

The problems with the traditional client/server (now often called "two-tier" client/server) were addressed by the multitier client/server architecture. Conceptually, an application can have any number of tiers, but the most popular multitier architecture is three-tier, which partitions the system into three logical tiers: the user interface layer, the business rules layer, and the database access layer. A three-tier client/server architecture is illustrated in Figure 1.3.

Figure 1.3. Three-tier client/server architecture.

Multitier client/server architecture enhances the two-tier client/server architecture in two ways: First, and perhaps most importantly, it makes the application less fragile by further insulating the client from changes in the rest of the application. Also, because the executable components are more fine-grained, it allows more flexibility in the deployment of an application.

Multitier client/server reduces application fragility by providing more insulation and separation between layers. The user interface layer communicates only with the business rules layer, never directly with the database access layer. The business rules layer, in turn, communicates with the user interface layer on one side and the database access layer on the other. Thus, changes in the database access layer will not affect the user interface layer because they are insulated from each other. This architecture enables changes to be made in the application with less likelihood of affecting the client component (which, remember, has to be redistributed when there are any changes to it).

Because the multitier client/server architecture partitions the application into more components than traditional two-tier client/server, it also allows more flexibility in deployment of the application. For example, Figure 1.3 depicts a system in which the business rules layer and database access layer, although they are separate logical entities, are on the same server machine. It is also possible to put each server component on a separate machine. Indeed, multiple business logic components (and multiple database access components, if multiple databases are being used) can be created for a single application, distributing the processing load and thus resulting in a more robust, scalable application.

Note: It is interesting to note that the multitier client/server architecture might actually have had its roots in mainframe applications. COBOL applications on IBM mainframes could define the user interface by using a tool called Message Format Service (MFS). MFS abstracted the terminal type (terminals could, for instance, have varying numbers of rows and columns) from the rest of the application. Similarly, applications could specify the database interfaces as well. Although the application would still run in one monolithic chunk, the available tools enabled the design of applications using a logical three-tier architecture.

The Next Generation: Distributed Systems

The next logical step in the evolution of application architectures is the distributed system model. This architecture takes the concept of multitier client/server to its natural conclusion. Rather than differentiate between business logic and data access, the distributed system model simply exposes all functionality of the application as objects, each of which can use any of the services provided by other objects in the system, or even objects in other systems. The architecture can also blur the distinction between "client" and "server" because the client components can also create objects that behave in server-like roles. The distributed system architecture provides the ultimate in flexibility.

The distributed system architecture achieves its flexibility by encouraging (or enforcing) the definition of specific component interfaces. The interface of a component specifies to other components what services are offered by that component and how they are used. As long as the interface of a component remains constant, that component's implementation can change dramatically without affecting other components. For example, a component that provides customer information for a company can store that information in a relational database. Later, the application designers might decide that an object-oriented database would be more appropriate. The designers can make any number of changes to the component's implementation--even sweeping changes such as using a different type of database--provided that they leave the component's interface intact. Again, as long as the interface of that component remains the same, the underlying implementation is free to change.

New Term: An interface defines the protocol of communication between two separate components of a system. (These components can be separate processes, separate objects, a user and an application--any separate entities that need to communicate with each other.) The interface describes what services are provided by a component and the protocol for using those services. In the case of an object, the interface can be thought of as the set of methods defined by that object, including the input and output parameters. An interface can be thought of as a contract; in a sense, the component providing an interface promises to honor requests for services as outlined in the interface.

Distributed systems are really multitier client/server systems in which the number of distinct clients and servers is potentially large. One important difference is that distributed systems generally provide additional services, such as directory services, which allow various components of the application to be located by others. Other services might include a transaction monitor service, which allows components to engage in transactions with each other.

New Term: Directory services refers to a set of services that enable objects--which can be servers, businesses, or even people--to be located by other objects. Not only can the objects being looked up differ in type, but the directory information itself can vary as well. For example, a telephone book would be used to locate telephone numbers and postal addresses; an email directory would be used to locate email addresses. Directory services encompass all such information, usually grouping together related information (for example, there are separate volumes of the yellow pages for different cities; contents of each volume are further divided into types of businesses).

New Term: A transaction monitor service oversees transactions on behalf of other objects. A transaction, in turn, is an operation or set of operations that must be performed atomically; that is, either all objects involved in the transaction must commit the transaction (update their own records) or all objects involved must abort the transaction (return to their original state before the transaction was initiated). The result is that whether a transaction commits or aborts, all involved objects will be in a consistent state. It is the job of a transaction monitor to provide transaction-related services to other objects.

To sum up, business applications have evolved over time from a relatively rigid monolithic architecture to an extremely flexible, distributed one. Along the way, application architectures have grown more robust, thanks to well-defined interfaces between components, and more scalable, furnished in part by the capability to replicate server components on different machines. Additionally, services have been introduced that enable the end user of an application to wade through the myriad available services. Those who have been designing and developing business applications since the days of mainframes have certainly had an interesting ride.


So far, in this evolution of business applications from the monolithic mainframe architecture to the highly decentralized distributed architecture, no mention has been made of CORBA. Therefore, you might be asking yourself at this point where CORBA fits in to all this. The answer, as you will see, is emphasized throughout the rest of this book. Recall that distributed systems rely on the definition of interfaces between components and on the existence of various services (such as directory registration and lookup) available to an application. CORBA provides a standard mechanism for defining the interfaces between components as well as some tools to facilitate the implementation of those interfaces using the developer's choice of languages. In addition, the Object Management Group (the organization responsible for standardizing and promoting CORBA) specifies a wealth of standard services, such as directory and naming services, persistent object services, and transaction services. Each of these services is defined in a CORBA-compliant manner, so they are available to all CORBA applications. Finally, CORBA provides all the "plumbing" that allows various components of an application--or of separate applications--to communicate with each other.

New Term: The capabilities of CORBA don't stop there. Two features that CORBA provides--features that are a rarity in the computer software realm--are platform independence and language independence. Platform independence means that CORBA objects can be used on any platform for which there is a CORBA ORB implementation (this includes virtually all modern operating systems as well as some not-so-modern ones). Language independence means that CORBA objects and clients can be implemented in just about any programming language. Furthermore, CORBA objects need not know which language was used to implement other CORBA objects that they talk to. Soon you will see the components of the CORBA architecture that make platform independence and language independence possible.

Exploring CORBA Alternatives

When designing and implementing distributed applications, CORBA certainly isn't a developer's only choice. Other mechanisms exist by which such applications can be built. Depending on the nature of the application--ranging from its complexity to the platform(s) it runs on to the language(s) used to implement it--there are a number of alternatives for a developer to consider. In this section you'll briefly explore some of the alternatives and see how they compare to CORBA.

Socket Programming

New Term: In most modern systems, communication between machines, and sometimes between processes in the same machine, is done through the use of sockets. Simply put, a socket is a channel through which applications can connect with each other and communicate. The most straightforward way to communicate between application components, then, is to use sockets directly (this is known as socket programming), meaning that the developer writes data to and/or reads data from a socket.

The Application Programming Interface (API) for socket programming is rather low-level. As a result, the overhead associated with an application that communicates in this fashion is very low. However, because the API is low-level, socket programming is not well suited to handling complex data types, especially when application components reside on different types of machines or are implemented in different programming languages. Although direct socket programming can yield very efficient applications, the approach is usually unsuitable for developing complex applications.

Remote Procedure Call (RPC)

New Term: One rung on the ladder above socket programming is Remote Procedure Call (RPC). RPC provides a function-oriented interface to socket-level communications. Using RPC, rather than directly manipulating the data that flows to and from a socket, the developer defines a function--much like those in a procedural language such as C--and generates code that makes that function look like a normal function to the caller. Under the hood, the function actually uses sockets to communicate with a remote server, which executes the function and returns the result, again using sockets.

Because RPC provides a function-oriented interface, it is often much easier to use than raw socket programming. RPC is also powerful enough to serve as the basis for many client/server applications. Although there are varying, incompatible implementations of the RPC protocol, a standard RPC protocol exists that is readily available for most platforms.

OSF Distributed Computing Environment (DCE)

The Distributed Computing Environment (DCE), a set of standards pioneered by the Open Software Foundation (OSF), includes a standard for RPC. Although the DCE standard has been around for some time, and was probably a good idea, it has never gained wide acceptance and exists today as little more than an historical curiosity.

Microsoft Distributed Component Object Model (DCOM)

The Distributed Component Object Model (DCOM), Microsoft's entry into distributed computing, offers capabilities similar to CORBA's. DCOM is a relatively robust object model that enjoys particularly good support on Microsoft operating systems because it is integrated with Windows 95 and Windows NT. However, because DCOM is a Microsoft technology, its availability is sparse outside the realm of Windows operating systems. Microsoft is working to correct this disparity, however, by partnering with Software AG to provide DCOM on platforms other than Windows. At the time this was written, DCOM was available for the Sun Solaris operating system, with support promised for Digital UNIX, IBM MVS, and other operating systems by the end of the year. By the time you read this, some or all of these ports will be available. (More information on the ports of DCOM to other platforms is available at http://www.softwareag.com/corporat/dcom/default.htm.)

Microsoft has, on numerous occasions, made it clear that DCOM is best supported on Windows operating systems, so developers with cross-platform interests in mind would be well-advised to evaluate the capabilities of DCOM on their platform(s) of interest before committing to the use of this technology. However, for the development of Windows-only applications, it is difficult to imagine a distributed computing framework that better integrates with the Windows operating systems.

One interesting development concerning CORBA and DCOM is the availability of CORBA-DCOM bridges, which enable CORBA objects to communicate with DCOM objects and vice versa. Because of the "impedance mismatch" between CORBA and DCOM objects (meaning that there are inherent incompatibilities between the two that are difficult to reconcile), the CORBA-DCOM bridge is not a perfect solution, but it can prove useful in situations where both DCOM and CORBA objects might be used.

Java Remote Method Invocation (RMI)

This tour of CORBA alternatives ends with Java Remote Method Invocation (RMI), a very CORBA-like architecture with a few twists. One advantage of RMI is that it supports passing objects by value, a feature not (currently) supported by CORBA. A disadvantage, however, is that RMI is a Java-only solution; that is, RMI clients and servers must be written in Java. For all-Java applications--particularly those that benefit from the capability to pass objects by value--RMI might be a good choice, but if there is a chance that the application will later need to interoperate with applications written in other languages, CORBA is a better choice. Fortunately, full CORBA implementations already exist for Java, ensuring that Java applications can interoperate with the rest of the CORBA world.

CORBA History

Now that you know a little bit of CORBA's background and its reason for existence, it seems appropriate to briefly explore some of the history of CORBA to understand how it came into being.

Introducing the Object Management Group (OMG)

The Object Management Group (OMG), established in 1989 with eight original members, is a 760-plus-member organization whose charter is to "provide a common architectural framework for object-oriented applications based on widely available interface specifications." That's a rather tall order, but the OMG achieves its goals with the establishment of the Object Management Architecture (OMA), of which CORBA is a part. This set of standards delivers the common architectural framework on which applications are built. Very briefly, the OMA consists of the Object Request Broker (ORB) function, object services (known as CORBAservices), common facilities (known as CORBAfacilities), domain interfaces, and application objects. CORBA's role in the OMA is to implement the Object Request Broker function. For the majority of this book, you will be concentrating on CORBA itself, occasionally dabbling in CORBAservices and CORBAfacilities.


Following the OMG's formation in 1989, CORBA 1.0 was introduced and adopted in December 1990. It was followed in early 1991 by CORBA 1.1, which defined the Interface Definition Language (IDL) as well as the API for applications to communicate with an Object Request Broker (ORB). (These are concepts that you'll explore in much greater detail on Day 2.) A 1.2 revision appeared shortly before CORBA 2.0, which with its added features quickly eclipsed the 1.x revisions. The CORBA 1.x versions made an important first step toward object interoperability, allowing objects on different machines, on different architectures, and written in different languages to communicate with each other.

CORBA 2.0 and IIOP

CORBA 1.x was an important first step in providing distributed object interoperability, but it wasn't a complete specification. Although it provided standards for IDL and for accessing an ORB through an application, its chief limitation was that it did not specify a standard protocol through which ORBs could communicate with each other. As a result, a CORBA ORB from one vendor could not communicate with an ORB from another vendor, a restriction that severely limited interoperability among distributed objects.

Enter CORBA 2.0. Adopted in December 1994, CORBA 2.0's primary accomplishment was to define a standard protocol by which ORBs from various CORBA vendors could communicate. This protocol, known as the Internet Inter-ORB Protocol (IIOP, pronounced "eye-op"), is required to be implemented by all vendors who want to call their products CORBA 2.0 compliant. Essentially, IIOP ensures true interoperability among products from numerous vendors, thus enabling CORBA applications to be more vendor-independent. IIOP, being the Internet Inter-ORB Protocol, applies only to networks based on TCP/IP, which includes the Internet and most intranets.

The CORBA standard continues to evolve beyond 2.0; in September 1997, the 2.1 version became available, followed shortly by 2.2; 2.3 is expected in early 1998. (The OMG certainly is keeping itself busy!) These revisions introduce evolutionary (not revolutionary) advancements in the CORBA architecture.

CORBA Architecture Overview

Finally, having learned the history and reasons for the existence of CORBA, you're ready to examine the CORBA architecture. You'll cover the architecture in greater detail on Day 2, but Day 1 provides you with a very general overview--an executive summary, if you will--of what composes the CORBA architecture.

First of all, CORBA is an object-oriented architecture. CORBA objects exhibit many features and traits of other object-oriented systems, including interface inheritance and polymorphism. What makes CORBA even more interesting is that it provides this capability even when used with nonobject-oriented languages such as C and COBOL, although CORBA maps particularly well to object-oriented languages like C++ and Java.

New Term: Interface inheritance is a concept that should be familiar to Objective-C and Java developers. Contrast it with implementation inheritance, in which an implementation unit (usually a class) is derived from another. Interface inheritance, by comparison, allows an interface to be derived from another. Even though interfaces can be related through inheritance, the implementations of those interfaces need not be.

The Object Request Broker (ORB)

Fundamental to the Common Object Request Broker Architecture is the Object Request Broker, or ORB. (That the ORB acronym appears within the CORBA acronym was just too much to be coincidental.) An ORB is a software component whose purpose is to facilitate communication between objects. It does so by providing a number of capabilities, one of which is to locate a remote object, given an object reference. Another service provided by the ORB is the marshaling of parameters and return values to and from remote method invocations. (Don't worry if this explanation doesn't make sense; the ORB is explained in much greater detail on Day 2.) Recall that the Object Management Architecture (OMA) includes a provision for ORB functionality; CORBA is the standard that implements this ORB capability. You will soon see that the use of ORBs provides platform independence to distributed CORBA objects.

Interface Definition Language (IDL)

Another fundamental piece of the CORBA architecture is the use of the Interface Definition Language (IDL). IDL, which specifies interfaces between CORBA objects, is instrumental in ensuring CORBA's language independence. Because interfaces described in IDL can be mapped to any programming language, CORBA applications and components are thus independent of the language(s) used to implement them. In other words, a client written in C++ can communicate with a server written in Java, which in turn can communicate with another server written in COBOL, and so forth.

One important thing to remember about IDL is that it is not an implementation language. That is, you can't write applications in IDL. The sole purpose of IDL is to define interfaces; providing implementations for these interfaces is performed using some other language. When you study IDL more closely on Day 3, you'll learn more about this and other assorted facts about IDL.
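As a small taste of what's to come on Day 3, here is a hypothetical IDL interface. Notice that it declares only attributes and operations; no implementation code appears, because the implementation is supplied separately in a language such as C++ or Java after an IDL compiler maps the interface into that language.

```idl
// A hypothetical account interface, declared in IDL.
interface Account {
    readonly attribute float balance;  // state exposed to clients
    void deposit(in float amount);     // operations clients may invoke
    void withdraw(in float amount);
};
```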

The CORBA Communications Model

New Term: CORBA uses the notion of object references (which in CORBA/IIOP lingo are referred to as Interoperable Object References, or IORs) to facilitate the communication between objects. When a component of an application wants to access a CORBA object, it first obtains an IOR for that object. Using the IOR, the component (called a client of that object) can then invoke methods on the object (called the server in this instance).

In CORBA, a client is simply any application that uses the services of a CORBA object; that is, an application that invokes a method or methods on other objects. Likewise, a server is an application that creates CORBA objects and makes the services provided by those objects available to other applications. A much more detailed discussion of CORBA clients and servers is presented on Day 2.

As mentioned previously, CORBA ORBs usually communicate using the Internet Inter-ORB Protocol (IIOP). Other protocols for inter-ORB communication exist, but IIOP is fast becoming the most popular, first because it is the standard and second because of the popularity of TCP/IP (the networking protocols used by the Internet), the layer that IIOP sits on top of. CORBA is independent of networking protocols, however, and could (at least theoretically) run over any type of network protocol. For example, there are implementations of CORBA that run over DCE rather than over TCP/IP, and there is also interest in running CORBA over ATM and SS7.

The CORBA Object Model

In CORBA, all communication between objects is done through object references (again, these are known as Interoperable Object References, or IORs, if you're using IIOP). Furthermore, visibility to objects is provided only through passing references to those objects; objects cannot be passed by value (at least in the current specification of CORBA). In other words, remote objects in CORBA remain remote; there is currently no way for an object to move or copy itself to another location. (You'll explore this and other CORBA limitations and design issues on Day 10.)

Another aspect of the CORBA object model is the Basic Object Adapter (BOA), a concept that you'll also explore on Day 2. A BOA basically provides the common services available to all CORBA objects.

CORBA Clients and Servers

Like the client/server architectures before it, CORBA maintains the notions of clients and servers. In CORBA, a component can act as both a client and a server. Essentially, a component is considered a server if it contains CORBA objects whose services are accessible to other objects. Likewise, a component is considered a client if it accesses services from some other CORBA object. Of course, a component can simultaneously provide and use various services, so a component can be considered a client or a server, depending on the scenario in question.

Stubs and Skeletons

When implementing CORBA application components, you will encounter what are known as client stubs and server skeletons. A client stub is a small piece of code that allows a client component to access a server component; it is compiled along with the client portion of the application. Similarly, a server skeleton is a piece of framework code that you "fill in" with implementation code when you build a server. You don't write the client stubs and server skeletons themselves; they are generated when you compile IDL interface definitions. Again, you'll soon see all this firsthand.
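As a preview (you'll learn IDL itself over the next two days), here is a minimal, hypothetical IDL interface; feeding a file like this to an IDL compiler is what produces the client stub and server skeleton in your implementation language:

```idl
// Account.idl -- a hypothetical interface; all names are illustrative.
// From this file, an IDL compiler generates:
//   - a client stub, linked into the client to invoke these operations
//   - a server skeleton, which dispatches requests to your implementation
interface Account {
    readonly attribute float balance;
    void deposit(in float amount);
    void withdraw(in float amount);
};
```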

Beyond the Basics: CORBAservices and CORBAfacilities

In addition to the CORBA basics of allowing objects to communicate with one another, recall that the OMA--of which CORBA is a part--also provides additional capabilities in the form of CORBAservices and CORBAfacilities. As you'll find out, these provide both horizontal services and facilities (generally useful to all industries) and vertical ones (designed for specific industries). You'll look at these capabilities in greater detail on Day 12, after which you'll get the opportunity to use some of this functionality in a CORBA application.

Summary
Today you had a very brief overview of the CORBA architecture, along with a history of business application development and where CORBA fits in. You now know what you can expect to get out of this book--you won't become a CORBA expert overnight, but you will gain valuable exposure to the process of designing and developing CORBA-based applications.

In the next few days, you'll explore the CORBA architecture in much greater detail, learn more than you ever wanted to know about IDL, and be well on your way to developing CORBA applications.

Q&A
Q I'm still not very clear on why I would want to use CORBA as opposed to some other method of interprocess communication.

A There are a few areas where CORBA really shines. For applications whose components are written in different languages or need to run on different platforms, CORBA can make a lot of sense. CORBA takes care of some potentially messy details for you, such as automatically converting number formats between different machines (through the marshaling process). In addition, CORBA provides an easily understood abstraction of distributed applications, consisting of an object-oriented design, an exception model, and other useful concepts. But where CORBA is truly valuable is in applications used throughout an enterprise. CORBA's many robust features--as well as those provided by the OMA's CORBAservices and CORBAfacilities--and especially its scalability make it well suited to enterprise applications.

Q What is IDL and why is it useful?

A IDL, or Interface Definition Language, will be covered in greater detail over the next two days. For now, it is enough to understand that IDL's value comes from its abstraction of various language, hardware, and operating system architectures. For example, the IDL long type is automatically translated to the appropriate 32-bit numeric type for whatever architecture the application runs on. In addition, because IDL is language-independent, it can be used to define interfaces for objects that are implemented in any language.
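As a small, purely illustrative example (the interface and member names are hypothetical), the comments below show how some basic IDL types translate under the standard C++ and Java language mappings:

```idl
// Stats.idl -- hypothetical interface for illustration only.
// IDL type         C++ mapping       Java mapping
// --------         -----------       ------------
// long (32-bit)    CORBA::Long       int
// double           CORBA::Double     double
// boolean          CORBA::Boolean    boolean
interface Stats {
    attribute long counter;    // always 32 bits, on every platform
    attribute double ratio;    // IEEE 754 double precision
    attribute boolean active;
};
```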

Workshop
The following section will help you test your comprehension of the material presented today and put what you've learned into practice. You'll find the answers to the quiz in Appendix A. On most days, a few exercises will accompany the quiz; today, because no real "working knowledge" material was presented, there are no exercises.

Quiz
1. What does IIOP stand for and what is its significance?
2. What is the relationship between CORBA, OMA, and OMG?
3. What is a client stub?
4. What is an object reference? An IOR?


© Copyright, Macmillan Computer Publishing. All rights reserved.