
Chapter 16 SBDED0105..



DRAFT

CHAPTER 16
SIMULATION-BASED DESIGN
H. Bruce Bongiorni

12.1 NOMENCLATURE

-- to be developed --

12.2 WHAT IS SIMULATION-BASED DESIGN?

Simulation-based Design (SBD) is the program name given to a DARPA project to develop an integrated design environment (see Reference 1). A primary goal of this project is to make it practical to create "virtual prototypes" and test them in "synthetic environments". By doing so, the designer could make design tradeoffs while getting instantaneous feedback on the consequences of those changes.

12.2.1 Virtual Prototypes: the Smart Product Model

The first notion of a virtual prototype was the development of digital mockups in lieu of physical mockups. These were, and are, visualizations of the product geometry based on features, dimensions, and spatial relations taken from the CAD representation. These representations are typically sparse, in that they represent only a subset of the information contained in or generated by a CAD system. This is a result of limitations in the speed of rendering the images and constraints in handling the large amounts of data required.

Another type of virtual prototype is one in which the "behaviors" of the product are represented. An example of a behavior is the structural response of a physical object to a given load. As there are many behaviors considered by a designer, there are as many different models in use. For a ship, for example, the structural response of the ship's structure is represented by the finite element model. But there can also be a computational fluid dynamics model, a radar signature model, a seakeeping model, a model of the cargo handling systems, a fluid system model, an electrical load model, and so on.

The virtual prototype is then defined as: the logical representation of the digital models and data which describe the behaviors of the product in response to environmental inputs.

There is a lot of discussion surrounding the product model, the 3D product model, the product information model, and the "smart" product model. The 3D product model is essentially the three-dimensional geometry of the product, while the product information model usually refers to the set of non-geometric data related to the product. People use "product model" to refer to the 3D product model, the product information model, or both. For the purpose of our discussion, the product model refers to the superset of data composed by the union of the 3D product model and the product information model.

During the DARPA SBD project the notion of the "smart" product model emerged. This was the product model expanded to include product behaviors. So for the purpose of this discussion, the "smart" product model is a virtual prototype.

12.2.2 Synthetic Environments: Simulations

Models are the sets of instructions, constraints, relationships, and data that describe the way that a product will respond to inputs from the environment. But a model is a static entity, in that there is no time element in the model.

A simulation, on the other hand, is the instantiation of the model in a time domain. That is, when one set of inputs is given to the model, the model responds to those inputs. That set of inputs is a single instance in time. Multiple inputs are discrete steps in the time domain.

Each version of the product model has a corresponding set of inputs. A synthetic environment is then the superset of all of the input sets for the virtual prototype (or smart product model). The synthetic environment can be defined as: the logical representation of the input sets and data which elicit the behaviors of the product over time. (see Reference 2)

HBB/UMTRI-MSD Page 1 DRAFT
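The model/simulation distinction above can be sketched in a few lines of code. Here a toy "model" (a first-order lag response, chosen purely for illustration; the chapter does not specify any particular behavior) is driven by a sequence of environmental inputs, one per discrete time step:

```python
# A "model": static rules relating an input to a response. There is no
# time element here. The behavior (a first-order lag toward the input)
# is a stand-in chosen only for illustration.
def model(state, environmental_input, gain=0.5):
    """Return the new state produced by one set of inputs."""
    return state + gain * (environmental_input - state)

# A "simulation": the model instantiated in a time domain, where each
# input set is one discrete step in time.
def simulate(initial_state, input_sequence):
    state = initial_state
    history = [state]
    for u in input_sequence:      # each u is a single instance in time
        state = model(state, u)
        history.append(state)
    return history

# The input sequence plays the role of the synthetic environment for
# this one (trivial) virtual prototype.
print(simulate(0.0, [1.0, 1.0, 1.0]))   # [0.0, 0.5, 0.75, 0.875]
```

The same structure scales up: the model holds the physics, and the simulation is nothing more than the model evaluated against an input set at each step.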
Summary [Like to have this at the end of the section or chapter. Or use a different word]

These are key concepts used in the following discussions, and they seem deceptively simple. As I have said previously, the virtual prototype is the set of all models that describe the behaviors of the product, and the synthetic environment is the set of all inputs for simulation. So for simulation-based design, all we have to do is build models for all of the behaviors we want to use as a basis for the design, and then test those models simultaneously by running all of the simulations at the same time. Right?

12.3 WHY THE INTEREST IN SIMULATION-BASED DESIGN (SBD)?

12.3.1 Introduction

The race is to the swift

The new economic realities are shifting the focus from low-cost, low-quality commodity products to best-value custom products. This makes the ability to conceive, design, and produce a new, quality product quickly an important business survival strategy.

We are starting this class by looking at the business drivers for SBD-related technologies. The interest in integrated design technologies is basically driven by the need to create new products. There are two fundamental business strategy scenarios: being late to a market with a new product, or being first to the market with a new product.

Scenario 1: We need one of those too!

This is the situation where a business's competitor is investing in the development of a new product and may be the first to have it in the market. The first one there gains market share, economies of scale, etc. So typically management will have some competitor intelligence and start up its own product development program.

Being late to develop a product is not always a bad thing. In some cases, the business that blazes the trail spends a large part of its time and resources going down dead ends. There are also many examples of cases where the innovator ultimately did not capitalize on its ability to create new products, such as Xerox or Apple and the PC (Apple?).

So imagine an example where Company A is investing $5,000,000 of current-year money over 5 years to create a new product. Assume that the market for that product is expected to be about $20,000,000 per year. Other assumptions are that the cost of capital is 10% and that you can expect to have a 50% share of the market when your product launches.

Now imagine Company B has a 1-year lag behind Company A and will spend the same effort to develop a competing product, but must do so in the same time deadline as its competitor. That is, Company B must develop its version of the product in 4 years.

But it is not good enough just to have your product in the market. Company B wants to have a better product in order to gain market share from its competitor. Being late to start development actually helps this situation and can be a powerful strategy. By being late, or delaying the decision to develop a product, Company B can avoid some of the costs from dead ends, and/or take advantage of new technologies or changes in the market forecast. So in this example, we'll assume that Company B actually gets 51% of the market because its product better meets the needs of the market.
The cash flows are shown below, and the cost comparison is shown in the table below:

Scenario 1: late start, 1% better market share for Company B

  Company A
    Market share       49%
    Cost of Capital    10%
    Expense            $ (5,000,000)
    Revenue            $ 39,522,634
    Profit             $ 34,522,634

  Company B
    Market share       51%
    Cost of Capital    10%
    Expense            $ (5,000,000)
    Revenue            $ 41,135,803
    Profit             $ 36,135,803

  Difference between B and A:  $ 1,613,169

Scenario 2: We need to be first!

The second generalized case is where a company wants to be the first in a marketplace with a new product. In this situation, the sooner the business can develop the product, the sooner the benefits of the revenue will accrue. Looking at the same conditions as in the first scenario, the second scenario cash flow looks like that below:
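The discounted-cash-flow arithmetic behind both scenario tables can be reproduced with a short calculation. One caveat: the chapter does not state the revenue horizon, so the sketch below assumes end-of-year revenues in years 6 through 16 (an 11-year sales life after the 5-year development, shifted one year earlier for an early finish), which matches the tabulated figures; the $5,000,000 expense is taken at face value as current-year money, undiscounted:

```python
def npv_revenue(market, share, rate, first_year, last_year):
    """Present value of an annual revenue stream received at the end
    of each year from first_year through last_year inclusive."""
    annual = market * share
    return sum(annual / (1 + rate) ** t
               for t in range(first_year, last_year + 1))

MARKET, RATE, EXPENSE = 20_000_000, 0.10, 5_000_000

# Scenario 1: both launch after year 5; B holds a 51% share.
rev_a  = npv_revenue(MARKET, 0.49, RATE, 6, 16)   # ~ $39,522,634
rev_b  = npv_revenue(MARKET, 0.51, RATE, 6, 16)   # ~ $41,135,803

# Scenario 2: equal 50% shares, but B launches one year earlier.
rev_b2 = npv_revenue(MARKET, 0.50, RATE, 5, 15)   # ~ $44,362,141

print(round(rev_a - EXPENSE))    # profit A, scenario 1: ~ 34,522,634
print(round(rev_b2 - EXPENSE))   # profit B, scenario 2: ~ 39,362,141
```

Note how the scenario 2 advantage is exactly one extra year of discounting: rev_b2 = 1.1 x rev for a 50% share launched a year later.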
We assume a couple of things in scenario 2. First, we assume that the product development process for Company B spends the same amount of resources over a shorter period of time than its competitor. Another assumption is that there is no difference in the quality of the products and that the market is equally shared. Consequently, the benefit to Company B is the advancing of the cash flow. The results are shown in the table below:

Scenario 2: early finish, no change in market share for Company B

  Company A
    Market share       50%
    Cost of Capital    10%
    Expense            $ (5,000,000)
    Revenue            $ 40,329,219
    Profit             $ 35,329,219

  Company B
    Market share       50%
    Cost of Capital    10%
    Expense            $ (5,000,000)
    Revenue            $ 44,362,141
    Profit             $ 39,362,141

  Difference between B and A:  $ 4,032,922

Conclusions from the examples

If you compare the examples above, what should be clear is that in neither case does Company B reduce engineering cost as compared to Company A. The benefit accrues to Company B through its ability to gain market share and revenue, not by reducing the product development expenditure. That's not to say that a business should not be
concerned with product development costs. What it does say is that, all things being equal, a business gains the most benefit from its ability to develop products that allow it to gain market share and, consequently, revenue as quickly as possible.

Businesses that think this way are excited about technologies like SBD. SBD technologies can be expensive, and SBD processes can be more expensive than traditional design processes. But because so many more alternatives can be considered and tested in a given period of time, SBD changes the product development process so that higher-quality products can be developed in a shorter time. The case studies below are examples of the application of SBD processes and technologies.

Case Study: Chrysler

In the auto industry, the key to survival, let alone dominance, is the effectiveness of an auto maker's design process. In the 1970s and 80s, production processes and quality were improved. In the 90s, auto makers have been rethinking their design processes. One of the best examples of this is Chrysler.

In 1988, Chrysler recognized that it needed to replace its K-car line with a new model. Chrysler management looked at its competition, in particular Honda and Toyota. At that time, one study conducted by the Harvard Business School estimated that the average Japanese auto company spent 1.7 million engineering hours over 4 years to launch a new model. In contrast, American and European auto makers spent 3 million engineering hours and 5 years to accomplish the same project.

Chrysler has had more than one near-death experience, and in 1988 it was facing another possible threat to its viability. In response, Chrysler management committed $1.6 billion to developing a new product line. They also committed to revamping the way they did design. By doing so, Chrysler launched its new line in 3.5 years. That line of cars is now selling very well, and Chrysler is not only surviving but thriving. What did Chrysler do?
Among the things that they changed were:

Platform Teams - Chrysler organized the design process around the product, forming cross-functional groups of engineers who functioned as an autonomous business unit. The teams included not only the designers, but also people from materials and manufacturing.

Digital mockups - Chrysler used the digital geometry from its CAD systems to review and evaluate styling decisions and manufacturing processes.

Centralized CAD database - This shared information allowed everyone working on the design, including the major suppliers, to have the same reference information.

Variation simulation - Chrysler engineers simulated the stack-up of tolerances in order to determine the fit of body panels. This simulation also allowed tolerances to be established that account for spring-back during manufacturing.

Structure modeling and simulation - Cab-forward design made structural analysis critical to the development of Chrysler's new car designs. In addition, simulation of performance reduced the need to build prototype vehicles and conduct physical testing.
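Variation simulation of the kind described in the list above can be sketched as a Monte Carlo tolerance stack-up. The panel dimensions, tolerances, and opening size below are invented for illustration; a real study would use measured process-capability data for the actual parts:

```python
import random

# Hypothetical example: three body panels mounted in a row must fit a
# 1500.0 mm opening. Each panel's width varies around its nominal value
# with a normal distribution (std dev = tolerance / 3, a common
# +/-3-sigma rule of thumb). All values here are invented.
NOMINALS   = [500.0, 500.0, 499.0]   # mm
TOLERANCES = [0.5, 0.5, 0.5]         # mm, +/-
OPENING    = 1500.0                  # mm

def stack_up(trials=100_000, seed=1):
    """Return the fraction of simulated assemblies that do not fit."""
    rng = random.Random(seed)
    interference = 0
    for _ in range(trials):
        total = sum(rng.gauss(nom, tol / 3)
                    for nom, tol in zip(NOMINALS, TOLERANCES))
        if total > OPENING:          # panels too wide: no gap left
            interference += 1
    return interference / trials

# Typically a very small fraction: the 1.0 mm nominal gap is about
# 3.5 standard deviations of the stack-up.
print(f"interference rate: {stack_up():.5f}")
```

Running the same loop with candidate tolerance values is exactly how simulation lets tolerances be established before any panel is stamped.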
Case Study: Boeing

The Boeing 777 is considered a watershed in the use of simulation to reduce construction costs and concept-to-delivery time. This ability allowed Boeing to delay committing to the design of its new aircraft until its competitors had already done so. By shortening the design time, Boeing was able to deliver a product that better met its customers' needs in the marketplace, at about the same time as Airbus Industrie.

In 1986, Airbus and McDonnell Douglas were beginning to develop new planes to meet the market for medium-range, wide-body airliners. Boeing was caught with nothing in its product line to match the planes its competitors were developing. The McDonnell Douglas MD-11 and its variation, the MD-12, were scheduled for delivery in 1990. The Airbus A330 and its A340 variation were expected to be delivered in 1993. Boeing's product development cycle was at least 6 years; McDonnell Douglas's was about 4 years for the MD-11, and Airbus was on a 7-year cycle. At the time that Boeing finally committed to developing the 777, it was 5 years behind its competition.

Boeing did a number of things to ensure that the 777 would gain market share over its competitors. Among them were:

Early involvement of the customers - Before Boeing committed to design features, the first thing it did was talk to its customers. Boeing spent over a year meeting with the 8 major airlines, discussing the things they needed. What Boeing learned became the features to be incorporated into the 777 and the constraints for the design.

Digital mockups - Boeing typically built 3 sets of full-scale mockups of a new design. The first checked the basic geometry and arrangements; the second incorporated the changes from the first mockup, plus electrical wiring and piping systems; the third incorporated the discoveries of the second mockup.
Instead of the physical mockups, Boeing decided to use 3-dimensional digital models to coordinate the design of the aircraft systems.

Collaborative design - Boeing integrated the designers and the builders into a design-build team that forced communication and negotiation between disciplines and organizations that, prior to the 777, had never had direct contact.

Case Study: DD-21, 21st Century Destroyer

The marine industry has started to adopt some of the technologies and practices that are becoming commonplace in commercial industries (Session Reading 6). Much of the interest by the Department of Defense has been due to cutbacks in appropriations, the increasing cost of new acquisitions, and the complexity of new systems. The DD-21 program is an effort to design and build the Navy's next-generation surface combatant vessel. The contract for initial design has recently been let to two consortia: the first is Bath Iron Works and Lockheed Martin; the second is Ingalls Shipbuilding and Raytheon. Integral to the design process, the Navy has required the extensive use of modeling and simulation in the course of design and evaluation of alternatives. The requirements include (Session Reading 9):
Product Model - NAVSEA currently uses the Integrated Ship Design Program (ISDP) software for product model definition, provided by the NAVSEA CAD2 contract. The long-term goal for SC 21 simulation-based acquisition (SBA) will be the inclusion of physics-based behavioral objects being developed by DARPA. The incorporation of behaviors into the product model results in a "smart" product model.

Physics-based Analysis Programs - Some of the SC 21 Office's analysis needs (15-20%) can be met by the adaptation of commercial-off-the-shelf (COTS) analysis programs developed for general engineering use (e.g., structural finite element, pipe network, and power distribution systems analysis). The majority of needs are unique to ship design or warship design (e.g., seakeeping, survivability). For these areas the SC 21 Office will depend upon software developed by NAVSEA, the Navy, or other defense activities.

Behavioral Models - Behavior models capture extensive analytical calculations as parametric equations, as in ship maneuvering coefficients and missile flight characteristics.

Visualization - This capability allows "virtual mockups" to be toured and spatial relationships to be visualized, supporting design review and evaluation by managers, production staff, and fleet operators.

Simulations - Simulations are the combination of visualization with realistic behaviors. The SC 21 Office will rely heavily on other Navy and defense activities, industry, and academia for identification and integration of required simulation models.

Product Models

We talk about the 3d product model as if it were something new and improved over two-dimensional drawings. Truth be known, 2d drawings of a 3d object were themselves considered a major technological advance. In fact, the British Admiralty required that the ship builder submit a scale model of the proposed ship, showing the arrangements and structural details of the ship.
This resulted in some beautifully crafted models that are now on display in the Royal Maritime Museum in London. Ultimately, this practice was displaced by the use of orthographic drawings. So it is a bit ironic that we are now in an age where we can deliver 3d representations of a ship design that can replace drawings.

The Logical Product Model

We hear about the product model and immediately associate the term with a digital three-dimensional graphic representation of a product's geometry. We also think of the data for that model as if it were in one place somewhere in the computer, maybe on a hard drive or a disk. This is the "logical" product model. Put another way, the logical product model is the way we think about the model, as if it were one integrated set of data.

The Physical Product Model

The "physical" product model may be something entirely different from the logical model. Product model data may not be in one computer or storage device, but may be spread across more than one computer, in various physical locations. This is made possible by the
changes in networking and computer technologies. Distributed computing methods allow for the distribution of information around a network, and for that information to be accessed by a user as if it were a single integrated database.

Introduction (Reference 1)

CAD has changed significantly over time as computer hardware and software have become more powerful and less expensive. The diagram below shows roughly the evolution from CAD as a drafting tool to CAD as a product modeling tool. The first applications were in computer-aided engineering (CAE), such as finite element analysis (FEA) or numerical simulations of processes. A later application was the guidance of numerically controlled (NC) machines such as burners. The difficulty in developing models and checking results led to preprocessor applications to aid in entering information into CAE applications. Similarly, the same approach used for programming NC machines was used for generating 2d drawings. Finally, there has been a merging of these applications into integrated 3d geometry modeling applications. 3d CAD systems are what we think of when we talk about the product model, but this has been a long evolution, tied to other uses of the information entered in digital form when design is done. Where the original application of CAD was in replacing the use of pencil on paper with better drawings, it has evolved into a tool for creating mathematical representations of three-dimensional geometry. (Reference 2)

3d Geometric Models (References 2 and 3)

The first model we think of as representing the product is the 3d geometric model. There are three basic approaches to representing the geometry of an object. These are described below.
Wire Modeling

The graphic below shows how a wire frame geometric model works. Wire frame modeling is the extension into three dimensions of the same line-and-point definitions that are done in two dimensions. That is, points are defined according to a 3d coordinate system, and lines are then defined by their end points. Wire frame representations can lead to confusion because, visually, it is difficult to distinguish the front of the object from the back.

Surface Modeling

A surface model is more complete than a wire frame model and can be used to represent an object accurately and realistically. It also allows for the use of hidden-line algorithms to make visualization easier.
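The difference between the wire frame and surface representations just described can be sketched with a unit cube. This is an illustrative data structure only, not any particular CAD system's format:

```python
# Wire frame: points defined in a 3d coordinate system, plus edges
# defined by their end points (indices into the vertex list).
vertices = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]

# Two cube vertices share an edge when they differ in exactly one
# coordinate.
edges = [(i, j) for i in range(8) for j in range(i + 1, 8)
         if sum(a != b for a, b in zip(vertices[i], vertices[j])) == 1]

# A surface model adds faces: the four vertices lying on each planar
# facet (ordering them into a boundary loop is omitted for brevity).
# With faces present, hidden-line algorithms can decide what is in
# front and what is behind -- a wire frame alone cannot.
faces = []
for axis in range(3):              # the x, y, z directions
    for value in (0, 1):           # the two sides of the cube
        faces.append([i for i in range(8) if vertices[i][axis] == value])

print(len(vertices), len(edges), len(faces))   # 8 12 6
```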
There are three basic methods for surface modeling:

Extruded lines and curves - This method takes a line or a curve and sweeps it along a given path.

Polygon faces - This method builds complex surfaces out of simple triangular or quadrilateral surfaces. Elemental surfaces are defined by a series of points listed in a clockwise or counterclockwise sequence, and the elemental polygons are joined by specifying common points with other elemental polygons.

Non-uniform rational B-splines (NURBS) - NURBS is a numerical method for representing a surface using blended equations to interpolate values between the points that bound the surface (Session Reading 1).

Solid Modeling

Solid modeling provides the most complete representation of a physical object and often includes information about the material properties in addition to the geometry. Essentially, a solid model is a 3d representation such as a wireframe or surface model, but with the notion of an inside and an outside. This is usually defined by specifying a vector normal to the surfaces defining the volume of the object.
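The notion of inside and outside that distinguishes a solid model can be sketched as a point-membership test, and combining such tests with Boolean logic anticipates the constructive solid geometry (CSG) method described next. The primitives and the sphere-with-a-hole example below are illustrative only:

```python
# A solid can be represented implicitly by a membership test:
# given a point, is it inside the volume?
def sphere(center, radius):
    cx, cy, cz = center
    return lambda x, y, z: (x - cx)**2 + (y - cy)**2 + (z - cz)**2 <= radius**2

def cylinder_z(cx, cy, radius):
    """An infinite cylinder along the z axis."""
    return lambda x, y, z: (x - cx)**2 + (y - cy)**2 <= radius**2

# Boolean combinations of membership tests, as CSG uses:
def union(a, b):     return lambda x, y, z: a(x, y, z) or b(x, y, z)
def intersect(a, b): return lambda x, y, z: a(x, y, z) and b(x, y, z)
def subtract(a, b):  return lambda x, y, z: a(x, y, z) and not b(x, y, z)

# A sphere with a cylindrical hole drilled along the z axis:
holed = subtract(sphere((0, 0, 0), 2.0), cylinder_z(0, 0, 0.5))

print(holed(1.5, 0, 0))   # True: inside the sphere, outside the hole
print(holed(0, 0, 0))     # False: on the axis, removed by the hole
print(holed(3, 0, 0))     # False: outside the sphere entirely
```

The inside/outside test is what lets a solid modeler compute volumes, detect interferences, and render the object unambiguously.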
Solid modeling requires a lot of computation to handle manipulation and visualization. A number of different methods are used, the most common being:

Primitive instancing - This method is used to describe specialized situations where the objects being defined have only a few topological configurations. For example, an "I" beam can be represented as a combination of cube variations.

Cell decomposition - This procedure starts with a complete solid object and decomposes the space it occupies into small cells. The accuracy of this method depends on the cell sizes: the smaller the cells, the more accurate the representation.

Sweep representations - This method constructs a solid by sweeping an area through a path. A straight-line path generates an extruded solid object, and a circular path generates a solid of revolution.

Constructive solid geometry (CSG) - In the CSG method, objects are built up starting with simple primitives (such as a box, wedge, cylinder, cone, or sphere) and then combining them using Boolean operations (union, subtraction, intersection). For example, if a sphere
primitive is combined with a cylinder using a subtraction operation, the result is a sphere with a cylindrical hole. An object is described by its tree of operations and primitives.

Boundary representation (B-rep) - A solid can be described by defining the boundaries of the object. For example, an object can be drawn as a wire frame where the edge segments represent the joining of surfaces, and the corners represent edges joined at vertices.

The methods described above are not mutually exclusive, and most major commercial modeling applications use combinations of methods. For example, the AutoCAD Advanced Modeling Extension (AME) uses CSG, B-rep, and sweep representations. Solid modeling is becoming more common in ship design applications of CAD (Session Reading 2). This is because solid modeling is a more complete representation of the physical features and properties of physical objects.

Engineering Models

So far we have only looked at the basic geometric models for a product. But the definition of the product model includes not just the geometry but other data and information about the product. During the design and engineering process, a number of models are built, based in some way on the underlying geometry of the product. These engineering models represent the behaviors or responses of the product to its functional environment. Most digital engineering models are keyed to, depend on, or are derived from the geometry model of the product. During the design cycle, a number of different models may be developed (e.g., finite element, computational fluid dynamics, radar cross section, etc.) in support of the particular behavior being analyzed and the software application being used. One of the problems in developing the product model is the integration of these different physical models into a single logical product model.
DT_NURBS (Reference 4)

DT_NURBS is an approach to integrating the different engineering models and results into a common form that makes it possible to share engineering information. DT_NURBS is a library of FORTRAN and C++ routines that map the model and results surfaces onto the CAD geometry model. Underlying the DT_NURBS code are algorithms that essentially extend NURBS surfaces from three dimensions into n dimensions, where the additional dimensions include physical attributes and behaviors. The basic theory underlying DT_NURBS is described in general in Session Reading 3, and in detail in the DT_NURBS Theory Manual.

More than geometry

About 20 years ago, people in the aerospace, automobile, and shipbuilding industries (Session Reading 4) began thinking about Computer Integrated Manufacturing (CIM). Their view of CIM was that there was a central place where all of the "islands of automation" in the factory were storing and using the same data about the products being made. The role of graphics or CAD data was seen as a subset of the overall information about the product.
In fact, the graphic representation of the product geometry is a subset of the overall information, and it is actually the mechanism by which information is queried and related. For example, if you look at a drawing, there are a number of "call-outs" that reference different tables of information, such as the materials list or a specification document. In this way, the geometric data is the key to accessing all of the information about the product (Session Reading 5). Further, as the technologies become more sophisticated, knowledge about the product can actually be generated from rules of association. From this emerges the "smart" product model (Session Reading 6).

Product Model Data Exchange (Reference 4)

Up to this point, I've talked about the methods for developing product geometry and a little bit about other product information such as engineering behavior, analysis results, and manufacturing information (work instructions, process plans, NC instructions, etc.). At this point we need to think about the sequence in which this information is actually accumulated into a digital form. During conceptual design, information may be in the form of 2d sketches, spreadsheet calculations, and tabular data. As the project moves to the contractual design phase, engineering models may be added, along with analysis results and possibly 3d models of the product or components of the product. As the project moves to detailed design, more information is added: detailed engineering models and analysis, manufacturing instructions, and so on. The level of detail of the information describing the product changes from being very general and ambiguous to being quite specific and extensive. More importantly, much of this information may be captured or generated in digital form but transferred to a downstream user via a paper document. Each of these transactions has a cost associated with the time and quality of the transaction.
Think about the process of developing a drawing: much of that process is actually transcribing or interpreting information from other sources into a form appropriate for the current user (Session Reading 7).

The promise of information technologies is that this entire set of information describing the product can be captured as it is developed and transferred to the next user of that information. This is a transactional view of the way that information is used and transferred. The issues surrounding this view are those of converting product information between different applications and operating systems. One answer to this problem is the standard for the exchange of product model data, commonly abbreviated as STEP (Session Reading 8).

Another view of the opportunities that emerging information technologies can provide is the idea that the information is not transferred but shared. The product information is used in situ, where it was created, with the attendant responsibilities residing there as well. We will address this view later in the course.

The Network

The basic infrastructure for communication between computers is the network. This has become a pervasive technology, one intended to be transparent to those who use it.
Speeds and capacity continue to increase, from 10 Mbps Ethernet to 100 Mbps Ethernet to Gigabit Ethernet and ATM, for both LAN and WAN applications.

The Extended Computing Environment

An individual working alone can accomplish some things, but a group of people working together can accomplish exponentially more. The network can extend the distribution of work, as well as information, beyond an individual's computer to those of a department, a company, or outside of the enterprise.

Networks

Sun Microsystems has adopted the phrase "The network is the computer" as its vision for the future. If we think about it, this is a very natural extension of how we work, or at least a natural evolution of our working society. Much of design is the process of transcribing information from various sources into a format that is usable by others downstream. The information is acquired via a network of relationships and through the transmission of messages (phone, fax, face-to-face meetings, memos, letters, transmittals, etc.). So if you think of the computer as a communication device, then the network is the computer.

Introduction

For the most part, naval architects and mechanical engineers learn very little about networks and network technologies. So the first part of this session will be definitions and explanations. I have lifted most of this material from Reference 1, putting it into a reasonable sequence. Toward the end of this lecture, I introduce the reason "the network is the computer": the idea of mixed platforms, operating systems, and network operating systems as a larger business environment. The vision is to have transparent access to information and resources both inside the enterprise and across enterprise boundaries. (Session Reading 1)

Inside the enterprise: the Local Area Network (LAN)

To start with, a LAN is a communications network that serves users within a confined geographical area.
It is made up of servers, workstations, a network operating system, and a communications link.

Servers are computers that hold programs and data shared by users connected to the network. The clients are the users' personal computers or workstations. These perform stand-alone processing and access the network servers as required. Servers can also provide access to other devices such as a printer or storage (such as a redundant array of inexpensive disks, or RAID). Increasingly, "thin client" network computers (NCs; think of WebTV) and stripped-down PCs (PCs with limited hard disk space or RAM) are also being used, where all applications and storage are on a server somewhere on the network.

Small LANs can allow certain workstations to function as a server, allowing users access to data on another user's machine. These peer-to-peer networks are often simpler to install
and manage, but dedicated servers provide better performance and can handle higher transaction volume. Multiple servers are used in large networks. The controlling software in a LAN is the network operating system (NetWare, UNIX, Windows NT, etc.) that resides in the server. A component part of the software resides in each client and allows the application to read and write data from the server as if it were on the local machine. The message transfer is managed by a transport protocol such as TCP/IP or IPX. The physical transmission of data is performed by the access method (Ethernet, Token Ring, etc.), which is implemented in the network adapters that are plugged into the machines. The actual communications path is the cable (twisted pair, coax, optical fiber) that interconnects each network adapter.

Clients and Servers in a LAN

This illustration shows one server for each type of service on a LAN. In practice, several functions can be combined in one machine and, for large volumes, multiple machines can be used to balance the traffic for the same service. For example, a large Internet Web site often comprises several computer systems (servers).

Illustration of a local area network (LAN) (from the Technology Encyclopedia)

The Software in a Network Client

This illustration shows the various software components that reside in a user's client workstation in a network. Note that there are different layers of software: the network operating system, which handles the communication with the network; the operating system, which handles the processor's resources and platform services; and application programs, which provide the user interface and do the work.

Illustration of the software in a network client platform (from the Technology Encyclopedia)

The Software in a Network Server

The graphic below shows the typical software components that reside on a server machine on the network.
Not shown are applications for managing the network or shared applications resident on the server.
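The client/server division of labor described above can be sketched in a few lines of Java: a server thread holds a small shared service, and a client connects to it over a socket. The class name and the upper-casing "service" are my own illustrative inventions, standing in for whatever resource a real server would offer.

```java
import java.io.*;
import java.net.*;

public class LanEcho {
    // Start a tiny one-request "server" and have a "client" use it.
    static String roundTrip(String request) throws Exception {
        ServerSocket server = new ServerSocket(0); // pick any free port

        Thread serverThread = new Thread(() -> {
            try (Socket client = server.accept();
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()));
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                out.println(in.readLine().toUpperCase()); // answer the one request
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        });
        serverThread.start();

        // The client workstation side: connect, send a request, read the reply
        try (Socket socket = new Socket("localhost", server.getLocalPort());
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            out.println(request);
            return in.readLine();
        } finally {
            serverThread.join();
            server.close();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("hello server")); // HELLO SERVER
    }
}
```

Everything here runs in one process for convenience, but the client half would work unchanged against a server elsewhere on the LAN.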
Illustration of software on a network server platform (from the Technology Encyclopedia)

Outside the enterprise: the Wide Area Network (WAN)

A wide area network (WAN) is a communications network that covers a wide geographic area, such as a state or country. A LAN (local area network) is contained within a building or complex, and a MAN (metropolitan area network) generally covers a city or suburb.

Network Protocols

Network protocols are the communications protocols used by the network. These protocols are defined as layers to allow for modular changes as technology changes. The International Organization for Standardization (ISO) has defined a standard (referred to as the OSI model) for worldwide communications that defines a framework for implementing protocols in seven layers. Control is passed from one layer to the next, starting at the application layer in one station, proceeding to the bottom layer, over the channel to the next station, and back up the hierarchy. At one time, most vendors agreed to support OSI in one form or another, but OSI was too loosely defined and proprietary standards were too entrenched. Except for the OSI-compliant X.400 and X.500 e-mail and directory standards, which are widely used, what was once thought to become the universal communications standard now serves as the teaching model for all other protocols. Most of the functionality in the OSI model exists in all communications systems, although two or three OSI layers may be incorporated into one. Below is an illustration of the OSI layers:

OSI Communication Layers (from the Technology Encyclopedia)

Application - Layer 7

This top layer defines the language and syntax that programs use to communicate with other programs. The application layer represents the purpose of communicating in the first place. For example, a program in a client workstation uses commands to request data from a program in the server.
Common functions at this layer are opening, closing, reading and writing files, transferring files and e-mail messages, executing remote jobs, and obtaining directory information about network resources.

Presentation - Layer 6

When data is transmitted between different types of computer systems, the presentation layer negotiates and manages the way data is represented and encoded. For example, it
provides a common denominator between ASCII and EBCDIC machines, as well as between different floating point and binary formats. Sun's XDR and OSI's ASN.1 are two protocols used for this purpose. This layer is also used for encryption and decryption.

Session - Layer 5

This layer provides coordination of the communications in an orderly manner. It determines one-way or two-way communications and manages the dialogue between both parties; for example, making sure that the previous request has been fulfilled before the next one is sent. It also marks significant parts of the transmitted data with checkpoints to allow for fast recovery in the event of a connection failure. In practice, this layer is often not used, or services within this layer are sometimes incorporated into the transport layer.

Transport - Layer 4

The transport layer is responsible for overall end-to-end validity and integrity of the transmission. The lower data link layer (layer 2) is only responsible for delivering packets from one node to another. Thus, if a packet gets lost in a router somewhere in the enterprise intranet, the transport layer will detect that. It ensures that if a 12 MB file is sent, the full 12 MB is received. "OSI transport services" include layers 1 through 4, collectively responsible for delivering a complete message or file from sending to receiving station without error.

Network - Layer 3

The network layer establishes the route between the sending and receiving stations. The node-to-node function of the data link layer (layer 2) is extended across the entire internetwork (network of networks), because a routable protocol contains a network address in addition to a station address. This layer is the switching function of the dial-up telephone system, as well as the functions performed by routable protocols such as IP, IPX, SNA, and AppleTalk. If all stations are contained within a single network segment, then the routing capability in this layer is not required.
Data Link - Layer 2

The data link layer is responsible for node-to-node validity and integrity of the transmission. The transmitted bits are divided into frames; for example, an Ethernet, Token Ring, or FDDI frame in local area networks (LANs). Layers 1 and 2 are required for every type of communications.

Physical - Layer 1

The physical layer is responsible for passing bits onto and receiving them from the connecting medium. This layer has no understanding of the meaning of the bits, but deals with the electrical and mechanical characteristics of the signals and signaling methods.
For example, it comprises the RTS and CTS signals in an RS-232 environment, as well as TDM and FDM techniques for multiplexing data on a line.

Ethernet (Session Reading 2 and 3) (Reference 2)

Ethernet is a LAN technology developed by Xerox, Digital, and Intel (IEEE 802.3). It is the most widely used LAN access method; Token Ring is next. Ethernet is normally a shared media LAN. All stations on the segment share the total bandwidth, which is either 10 Mbps (Ethernet), 100 Mbps (Fast Ethernet), or 1000 Mbps (Gigabit Ethernet). With switched Ethernet, each sender and receiver pair has the full bandwidth. Ethernet breaks up all data being transmitted into variable-length frames from 72 to 1526 bytes in length, each containing the addresses of the source and destination stations as well as error-checking data. It uses the CSMA/CD technology to broadcast each frame onto the physical medium (wire, fiber, etc.). All stations attached to the Ethernet are "listening," and the station with the matching destination address accepts the frame and checks for errors. Ethernet is a data link protocol (MAC layer protocol) and functions at layers 1 and 2 of the OSI model.

Asynchronous Transfer Mode (ATM) (Session Reading 4) (Reference 2)

ATM is a network technology for both LANs and WANs that supports realtime voice and video as well as data. The topology uses switches that establish a circuit from input to output port and maintain that connection for the duration of the transmission. This connection-oriented technique is similar to the analog telephone system. ATM is scalable and supports transmission speeds of 25, 100, 155, 622, and 2488 Mbps. ATM works by chopping all traffic into 53-byte packets, or cells. This fixed-length unit allows very fast switches to be built, because the processing associated with variable-length packets (finding the end of the frame) is eliminated.
The small ATM packet also ensures that voice and video can be inserted into the stream often enough for realtime transmission. The ability to specify a quality of service is one of ATM's most important features, allowing voice and video to be transmitted smoothly. Constant Bit Rate (CBR) guarantees bandwidth for realtime voice and video. Variable Bit Rate (VBR) is used for compressed video and LAN traffic. Available Bit Rate (ABR) adjusts bandwidth for bursty LAN traffic. Unspecified Bit Rate (UBR) provides no guarantee. Network applications use protocols such as TCP/IP, IPX, AppleTalk, and DECnet, and there are tens of millions of Ethernet, Token Ring, and FDDI client stations in existence. ATM has to coexist with these legacy protocols and networks. (Session Reading 5) LAN Emulation (LANE), defined by the ATM Forum, interconnects legacy LANs by encapsulating packets into LANE packets and then converting them into ATM cells. It supports existing protocols without changes to Ethernet and Token Ring clients, but uses traditional routers for internetworking between LAN segments. LAN Emulation does not provide ATM quality of service, although there are techniques (such as MPOA or CIF) that do.
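Since each 53-byte cell carries a 5-byte header and 48 bytes of payload, the cell count and wire overhead for a given message are simple arithmetic. The sketch below is my own illustration (the class and method names are invented, not from any ATM library):

```java
public class AtmCells {
    static final int CELL_SIZE = 53;                         // total bytes per ATM cell
    static final int HEADER_SIZE = 5;                        // cell header
    static final int PAYLOAD_SIZE = CELL_SIZE - HEADER_SIZE; // 48 bytes of payload

    // Number of cells needed to carry a message of the given length
    static int cellsFor(int messageBytes) {
        return (messageBytes + PAYLOAD_SIZE - 1) / PAYLOAD_SIZE; // ceiling division
    }

    // Fraction of the transmitted bytes that is actual payload
    static double efficiency(int messageBytes) {
        return (double) messageBytes / (cellsFor(messageBytes) * CELL_SIZE);
    }

    public static void main(String[] args) {
        System.out.println(cellsFor(1500));             // a full Ethernet-sized frame: 32 cells
        System.out.printf("%.2f%n", efficiency(1500));  // payload fraction on the wire
    }
}
```

The fixed cell size buys switching speed at the cost of this per-cell header overhead, which is the trade-off the paragraph above describes.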
When ATM came on the scene, it was thought to be the beginning of a new era in networking, because it was both a LAN and WAN technology that could start at the desktop and go straight through to the remote office. This scenario is not evolving, due to the high costs of conversion. In addition, huge numbers of Ethernets and Token Rings are already in place, and higher-speed versions of these technologies provide a simpler migration path. ATM's use as a LAN technology has been limited to demanding applications only. However, ATM is indeed establishing itself as an important backbone technology in large organizations, common carriers, and Internet providers. The graphic below provides a comparison of the different networking alternatives.

Summary of network bandwidths (from the Technology Encyclopedia)

TCP/IP

TCP/IP (Transmission Control Protocol/Internet Protocol) is a communications protocol developed under contract from the U.S. Department of Defense to inter-network dissimilar systems. It is a de facto UNIX standard that is the protocol of the Internet and widely supported on all platforms. The TCP part of TCP/IP provides transport functions, which ensure that the total amount of bytes sent is received correctly at the other end. UDP is an alternate transport that does not guarantee delivery. It is widely used for real-time voice and video transmissions where erroneous packets are not retransmitted. The IP part of TCP/IP provides the routing mechanism. TCP/IP is a routable protocol, which means that the messages transmitted contain the address of a destination network as well as a destination station. This allows TCP/IP messages to be sent to multiple networks within an organization or around the world, hence its use in the worldwide Internet (see Internet address). TCP/IP uses a sliding window transmission method which maximizes speed and also adjusts to slower circuits and delays in the route.
TCP/IP packets use a logical address of the destination station rather than a physical address. This logical IP address, which also includes the network address, is dynamically mapped to a physical station address (Ethernet, Token Ring, ATM, etc.) at runtime. TCP/IP includes a file transfer capability called FTP, or File Transfer Protocol. This function allows files to be downloaded and uploaded between TCP/IP sites. SMTP, or Simple Mail Transfer Protocol, is TCP/IP's own messaging system for electronic mail, and the Telnet protocol provides terminal emulation. This allows a personal computer or workstation to emulate a variety of terminals connected to mainframes and midrange computers.
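The TCP/UDP distinction described above can be made concrete: UDP sends self-contained datagrams with no connection and no delivery guarantee. A minimal sketch using Java's standard datagram API, sending one message to ourselves over the loopback interface (where delivery is reliable in practice); the class and method names are my own:

```java
import java.net.*;
import java.nio.charset.StandardCharsets;

public class UdpDemo {
    // Send one datagram over loopback and read it back.
    // UDP offers no connection setup and no delivery guarantee,
    // but on the loopback interface the datagram arrives in practice.
    static String sendAndReceive(String message) throws Exception {
        try (DatagramSocket receiver = new DatagramSocket(0); // any free port
             DatagramSocket sender = new DatagramSocket()) {
            byte[] payload = message.getBytes(StandardCharsets.UTF_8);
            sender.send(new DatagramPacket(payload, payload.length,
                    InetAddress.getLoopbackAddress(), receiver.getLocalPort()));

            byte[] buf = new byte[1024];
            DatagramPacket received = new DatagramPacket(buf, buf.length);
            receiver.receive(received); // blocks until the datagram arrives
            return new String(received.getData(), 0,
                    received.getLength(), StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sendAndReceive("status update"));
    }
}
```

On a real network, a lost datagram would simply never arrive; that is exactly the behavior realtime voice and video traffic tolerates in exchange for lower overhead.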
The combination of TCP/IP, NFS, and NIS comprises the primary networking components of the UNIX operating system. The following chart compares the TCP/IP layers with the OSI model.

TCP/IP map to the OSI model (from the Technology Encyclopedia)

IP Address

An Internet Protocol (IP) address is the logical address of a computer attached to a TCP/IP network. Every client and server station must have a unique IP address. Client workstations have either a permanent address or one that is dynamically assigned to them each dial-up session. IP addresses are written as four sets of numbers separated by periods (for example, 192.168.1.1). The TCP/IP packet uses 32 bits to contain the IP address, which is made up of a network and host address (netid and hostid). The more bits used for the network address, the fewer remain for hosts. Certain high-order bits identify class types, and some numbers are reserved. The following table shows how the bits are divided. The Class Number is the decimal value of the high-order eight bits, which identifies the class type.

Class   Class Number   Maximum Networks   Maximum Hosts   Bits in Net ID   Bits in Host ID
A       1-127          127                16,777,214      7                24
B       128-191        16,383             65,534          14               16
C       192-223        2,097,151          254             21               8

Class C addresses have been expanded using the CIDR addressing scheme, which uses a variable network ID instead of the fixed numbers shown above. Network addresses are supplied to organizations by the InterNIC Registration Service.

LAN/WAN Hardware

A router is a device that routes data packets from one local area network (LAN) or wide area network (WAN) to another. Routers see the network as network addresses and all the possible paths between them. They read the network address in each transmitted frame and make a decision on how to send it based on the most expedient route (traffic load, line costs, speed, bad lines, etc.). Routers work at the network layer (OSI layer 3), whereas bridges and switches work at the data link layer (layer 2).
As well as performing actual routing and path determination, routers are also used for such functions as segmenting LANs to balance traffic, filtering traffic for security
purposes, and controlling broadcast storms. Multiprotocol routers support several protocols such as IPX, TCP/IP, and DECnet. Routers often serve as an intranet backbone, interconnecting all networks in the enterprise. This architecture strings several routers together via a LAN topology such as FDDI. Another approach uses a router with a high-speed backplane known as a collapsed backbone. The collapsed backbone router, which connects more subnetworks in one device, makes network management simpler. The substitution of a fast backplane for an external LAN topology improves performance. Routers can only route a message that is transmitted by a routable protocol such as IPX or IP. Messages in non-routable protocols, such as NetBIOS and LAT, cannot be routed, but they can be transferred from LAN to LAN via a bridge. Because routers have to inspect the network address in the protocol, they do more processing and add more overhead than a bridge or switch, both of which work at the data link (MAC) layer. Most routers are specialized computers that are optimized for communications; however, router functions can also be implemented by adding routing software to a file server. NetWare, for example, includes routing software. The NetWare operating system can route from one subnetwork to another if each one is connected to its own network adapter (NIC) in the server. The major router vendors are Cisco Systems and Bay Networks. Although widely deployed and an essential component in the worldwide Internet and many enterprises, routers are complex and costly devices that add considerable overhead to the transmission of data. Routers work with connectionless networks, in which each frame is inspected and then forwarded. Routing will always be necessary; however, the traditional router is giving way to devices that perform routing functions but do not inspect each frame in as much detail.
The first frame is analyzed at the network layer (layer 3), and a destination path is determined. The remaining frames of the message are forwarded at the data link layer (layer 2), which is considerably faster.

Illustration of topologies for OSI layers (from the Technology Encyclopedia)

Distributed Computing Architecture (Reference 3)

So, you ask, what's the point of the definitions and explanations above? Well, the role of the network has become critical to integrating and extending computing processes, and thereby work processes and business operations. The network allows not just sharing of programs and information, but also the sharing and distribution of tasks to places and platforms that are available, and/or better suited to the tasks.
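This sharing and distribution of tasks can be sketched in miniature. In the sketch below, one computation is split across several Java threads, which stand in for machines on a network; with a system like PVM (discussed later in this session), the workers would be separate computers exchanging messages, but the divide/compute/gather pattern is the same. Class and method names are my own.

```java
import java.util.ArrayList;
import java.util.List;

public class ParallelSum {
    // Sum 1..n by dividing the range into equal slices, one per worker thread.
    static long parallelSum(int n, int workers) throws InterruptedException {
        long[] partial = new long[workers];
        List<Thread> pool = new ArrayList<>();

        for (int w = 0; w < workers; w++) {
            final int id = w;
            final int lo = id * (n / workers) + 1;
            final int hi = (id == workers - 1) ? n : (id + 1) * (n / workers);
            Thread t = new Thread(() -> {
                long sum = 0;
                for (int i = lo; i <= hi; i++) sum += i; // each worker computes its slice
                partial[id] = sum;                        // and reports a partial result
            });
            pool.add(t);
            t.start();
        }

        for (Thread t : pool) t.join(); // gather the partial results
        long total = 0;
        for (long p : partial) total += p;
        return total;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(parallelSum(1_000_000, 4)); // 500000500000
    }
}
```

The join-then-combine step is where a networked version would pay its communication overhead, which is why, as discussed below, a network of computers rarely matches a dedicated multiprocessor on raw speed.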
The Distributed Computing Environment (DCE)

Networking relies on common methods for communicating and operating. If all the players on a network use the same proprietary communication protocols, then there is not a problem of interoperating. The reality is that this is only practical within an enterprise, and becomes less so as technology becomes better and less expensive. The solution to the interoperability problem is the adoption of standards for communication. This is simple to state but more difficult to implement. Installed base and market share play the major role in determining what becomes a standard. As an example, TCP/IP has overwhelmed the OSI network model by virtue of the popularity of the Internet. Because of the heterogeneous nature of most computing environments, a class of software called middleware has emerged. This is software that mediates communication between clients and servers, serving as an interpreter as well as providing other services to software on the network. One such technology is the Open Software Foundation's DCE software. OSF's DCE is an integrated set of operating-system- and network-independent services that support the development, use, and maintenance of distributed applications. OSF's DCE enables a manageable, transparent, and interoperable network of multivendor, multiplatform systems. (Session Reading 6)

The network database

A network database is a database application that runs in a network. It is a database management system (DBMS) that was designed using a client/server architecture. Most database applications are being redesigned as network database applications. This allows them to be much faster and more reliable. The speed comes from the fact that queries to the database can be shared by more than one computer working in parallel.

Distributed processing

A digression ... Supercomputers are designed according to two architectures: vector processing and parallel processing.
In vector processing machines (such as Cray machines), speed was achieved by optimizing the hardware to support vector operations. In 1983, the Cray X-MP used four processors to subdivide computing operations, working on computations in parallel, thus parallel computing. Since then, most supercomputers have been multiprocessor machines, largely because of their price and performance. Although there are a number of different architectures for multiprocessors, they all use some kind of communication layer to share resources and/or exchange messages. (Additional Reading 8) Back to the topic ... The reason we are making this detour into computer architecture is because the same kind of parallel processing can be achieved via a network. Speeds of communication buses in
supercomputers are very high, much higher than current network protocols. However, network speeds and capacity are increasing. It is unlikely that a network of computers will outperform a dedicated supercomputer. There are added layers of communication overhead in a network that are not in a supercomputer. Plus, a supercomputer does not have to deal with the problem of heterogeneous hardware, software, and operating systems that is inherent on a network. But this does not mean that there are not significant benefits to distributing computational tasks to other platforms on the network. For example, parallel virtual machine software (PVM) is a software package that permits a heterogeneous collection of Unix computers hooked together by a network to be used as a single large parallel computer. Thus large computational problems can be solved more cost effectively by using the aggregate power and memory of many computers. The software is very portable and has been compiled to run on everything from laptops to CRAYs. (Session Reading 7) PVM enables users to exploit their existing computer hardware to solve much larger problems at minimal additional cost. Hundreds of sites around the world are using PVM to solve important scientific, industrial, and medical problems, in addition to PVM's use as an educational tool to teach parallel programming.

Objects

In the previous session, I introduced network technologies into the discussion. The reason for doing this is that the notion of the computing environment has been changed by these tools. The computer is no longer an isolated data processing machine, but is integrated into the way that we communicate and do our work.

The Extended Computing Environment

The extended computing environment does not just mean using the network to move data around. It is a way of delegating work and sharing in-process information. To move data around among dissimilar computing platforms requires communication standards.
To share processing cycles among dissimilar platforms requires not only communication standards, but common software architectures.

Objects

The model which has emerged for sharing information and processing power over a network is the object-oriented programming approach. Objects are the fundamental unit in this architecture, which are then aggregated into higher level components such as business objects or software components. The definition and use of these objects, business objects, and components forms the basis for an SBD infrastructure.

Introduction

These days calling something "object-oriented" is the same as calling something "new and improved". In many cases this phrase has only buzz-word appeal with no real
meaning other than to be part of a technology elite, or a reason to charge the consumer more money. The world is, in fact, "object-oriented". That is, we deal with things that have properties such as color, smell, sound, shape, weight, and other attributes. These objects also do certain things: a dog barks, a car moves, a bird flies, and, hopefully, ships float. So what's the big deal about being "object-oriented"?

Being Object-oriented (Session Readings 1 and 2) (Reference 1)

Objects in the real world have state (properties) and behaviors (things that the object does). Objects in software are bundles of code that contain data and procedures that act on that data. These procedures are known as methods and are used to generate behaviors. The data are the properties of the object, the values of which define the object's state. Traditional software architecture emphasizes procedures. That is, first you do A, then you do B, then you do C. In object-oriented software architecture, we define object A, which has the properties 1, 2, and 3. If asked by another object, object A will respond in a certain way. Software objects are connected by their relationships to each other, and through the messages that are exchanged between objects.

Encapsulation

In the object world (physical and software), the data about a particular object is contained by that object. This containment is called encapsulation. The data about an object and its behavior is contained in the body of the object.

Implementation hiding

An object's data and behavior are kept internal to the object. It need only expose the information it wishes to share with the outside world. The object does this through some kind of interface.
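As a concrete sketch of encapsulation and implementation hiding, the hypothetical class below keeps its data private and exposes it only through interface methods. The internal representation (degrees Celsius) could be changed to Fahrenheit or Kelvin without affecting any caller, because callers never touch the field directly.

```java
// A sketch of encapsulation: internal state is private, and other objects
// see only the public interface. The class is an invented example.
public class Thermometer {
    // Implementation detail, hidden from the outside world
    private float degreesCelsius;

    public Thermometer(float degreesCelsius) {
        this.degreesCelsius = degreesCelsius;
    }

    // The exposed interface; this is all other objects can use
    public float celsius() { return degreesCelsius; }
    public float fahrenheit() { return degreesCelsius * 9.0f / 5.0f + 32.0f; }

    public static void main(String[] args) {
        Thermometer t = new Thermometer(100.0f);
        System.out.println(t.fahrenheit()); // 212.0
    }
}
```

The `private` keyword is Java's mechanism for this hiding; the access modifiers used later in this session ("public") are the other side of the same coin.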
Modularity

Because implementations are hidden within the object, the object can be treated as a black box. Each object is in effect independent of other objects. This means that a change to the internals of the object can be made without affecting its relationships to other objects.

Messages

As with physical objects, software objects exchange information about themselves in response to messages. For example, somebody (object A) approaches you, object B, at a party and asks you your name. The message is the request for your name, and you respond (or not). If you choose to respond, you are granting permission to object A to access your data. The message could also contain data about object A that object B may need. The message could be "my name is object A, what's yours?" Object B would use that information to determine its response. Based on the information passed in the message, you may choose not to answer (a behavior). The message could also elicit a behavior from object B, such as punching object A's lights out. This gets into an issue we'll address later as to what is appropriate behavior in certain circumstances.

The Pump and Motor example

As an illustration of the relationships between objects, I'll introduce an example of a pump and motor.
In the diagram above, I have two objects: a pump and a motor. The way we use a pump and a motor is: we flip a switch, it turns the motor on, the motor turns and rotates the pump, and as the pump rotates it creates the pressure differential to move fluid.

Creating objects: classes

We've talked about objects and how objects work. The next question is "how are objects created?" A class is the template for an object in the same way that a blueprint is a template for creating a physical product. In our pump example, all pumps have certain properties and behaviors that characterize a pump. All pumps have an inlet pressure and a geometry, and when the shaft turns, the pump generates a change in pressure. These are the characteristics that define a pump class. An object is an instance of a class. In the pump example, a specific pump may be 24 inches long by 18 inches wide by 18 inches high and have an inlet pressure of 5 psi. When 100 rpm are applied to the shaft, the outlet pressure is 25 psi. So the pump class is the set of parameters and functions that describe a general pump, and when parameters are entered into the class, an object is created.

Inheritance

When I create a pump class, I am defining all of the characteristics for a pump. There are, however, many different types of pumps (positive displacement, kinetic, centrifugal, etc.). So I may have a number of different variations, as shown in the figure below.
Inheritance is the process of creating a new class by starting with the properties and behaviors of an existing similar class, then extending those by adding more specific data and methods. The class that does the inheriting is called the child class (or subclass), and the class that provides the information is called the parent class (or superclass). In the pump example above, the parent class is "pump", which has some very basic properties such as inlet pressure, inlet location, outlet location, and footprint, as well as behavior such as: for a given input of power there is a change in pressure and an outlet pressure is provided. The child classes are the different variations of pumps (positive displacement or kinetic pump). They inherit the same basic characteristics of a pump, but extend them with additional, more specific information, such as a different type of method for generating the pressure differential. The child classes can themselves be parent classes to even more specific child classes. The advantage of inheritance is that as software is developed, previously defined objects can be used as a starting point without having to rewrite them.

Java (Session Reading 3, 4, and 5) (Reference 2)

Java is a programming language developed by Sun Microsystems which was specifically designed to operate across platforms via a network. Based to a large extent on C++, Java depends on the installation of "virtual machine" (VM) software on computers, which essentially operate as Java-based computers.
Java code is developed like any other software, in that you start with a text file and submit that file to a Java compiler. The output of the Java compiler is a file of "byte code", which contains the instructions to the Java virtual machine. The virtual machine executes those instructions by accessing the resources of the supporting operating system and processor. When you encounter a Java applet embedded in an HTML page, the compiled byte code file is downloaded to your platform's virtual machine. In the case of an applet, the virtual machine may be part of the web browser. Java was also designed with security in mind. Most people don't want strange software wandering around their computer. To prevent this, a Java applet cannot read from or write to your hard disk, and has no access to other system resources. In Java version 1.1, some of the strict security restrictions are loosened to allow "trusted" applets to access your system resources, using digital signatures to provide verification that the applet is from a "friendly" source. Java applications, however, are another story. A Java application has the same capabilities as any other programming language, plus the added advantages of being "write-once-run-anywhere" and of being designed for distributed computing. Java is inherently object-oriented and provides an excellent opportunity to introduce both object-oriented programming methods and distributed computing methods to this discussion. There are competing software technologies, such as Python or ActiveX, which have similar capabilities. But Java has been the tool of choice for a number of the SBD applications.

Declaring Classes

In Java, the first place to start coding is in the definition of a class. The syntax for declaring a Java class is:

class Identifier {
    ClassBody
}

The Identifier is the name of the new class. The curly braces, {}, surround the body of the class (ClassBody).
Our pump and motor example would be:

class Motor {
    // state is "on" or "off"
    // method for creating rpm
}

class Pump {
    // inlet pressure
    // method for creating the pressure differential
}

In Java, the "//" denotes a comment, and in the above code example, I've put in placeholders for properties and methods. The first thing to do is turn the motor on, or change the motor's state, then send the rpm to the pump.
class Motor {
    // state is "on" or "off"
    boolean state = true;

    // method for creating rpm
    float rpm() {
        // check to see if the motor is "on"
        if (state) {
            return 1750.0f; // if the motor is on, calculate the rpm (placeholder value)
        } else {
            return 0.0f;
        }
    }
}

The second thing to do here, which will make this more complete, is to add the code to declare the inlet variable as a floating point value and define a method for the calculation of the outlet pressure for the pump. This method, outletPressure, takes the rpm value obtained from the Motor object, which is then used in the calculation of the returned value.

class Pump {
    // set the inlet pressure
    float inlet_pressure = 5.0f;

    // method for creating the pressure differential
    float outletPressure(float rpm) {
        // calculate the outlet pressure using the rpm from the motor
        return inlet_pressure + 0.2f * rpm; // placeholder relationship
    }
}

Deriving Classes

If we go back to the hierarchy of pumps, we can derive a new class from the general class of pumps. In Java, this would look like:

class KineticPump extends Pump {
    // add some differentiating characteristic such as "efficiency"
}

Overriding Methods

A derived class (a subclass) will inherit properties and methods from the parent. Sometimes, the methods in the subclass will need to be different. In Java, there is the ability to "override" the inherited methods with another, more appropriate method within the subclass.

class KineticPump extends Pump {
    // add a new method for calculating the outlet pressure
    float outletPressure(float rpm) {
        // calculate the outlet pressure using another function of the motor's rpm
        return inlet_pressure + 0.15f * rpm; // placeholder relationship
    }
    }

In the pump example, the kinetic pump object would use a different method for calculating the outlet pressure, even though it would inherit most of the other properties and behaviors.

Overloading Methods

Another object-oriented programming technique is called "method overloading". This allows the programmer to specify different parameters to send to the methods in an object. To overload a method, the programmer declares another version of the method using the same name but with different parameters. (Java tells overloaded methods apart by their parameter types, so in the sketch below I assume each quantity has its own small value class: Rpm, Torque, and Horsepower.) In the pump example, an overloaded method would look something like this:

    class Pump {
        // set the inlet pressure
        float inlet_pressure = 5.0f;

        // methods for creating the pressure differential
        float outletPressure(Rpm rpm) {
            // calculate the outlet pressure using the rpm from the motor
            return 0.0f; // placeholder
        }
        float outletPressure(Torque torque) {
            // calculate the outlet pressure using the torque from the motor
            return 0.0f; // placeholder
        }
        float outletPressure(Horsepower hp) {
            // calculate the outlet pressure using the horsepower from the motor
            return 0.0f; // placeholder
        }
    }

In the example above, depending on the parameter passed to the outletPressure method, the outlet pressure will be calculated in different ways. This allows the programmer to make objects extremely flexible and able to handle a range of different relationships with other objects.

Object Construction

Most of the design work in developing an object-oriented application involves defining classes and their relationships. When you actually create an object, you create an instance of the class. To do this requires creating a "constructor" method within the class which initializes the variables to create an object. Below is an example in Java for the pump class:

    class Pump {
        // the inlet pressure
        float inlet_pressure;

        public Pump() {
            // set the default inlet pressure
            inlet_pressure = 5.0f;
        }

        public Pump(float actual_inlet_pressure) {
            // pass in the inlet pressure
            inlet_pressure = actual_inlet_pressure;
        }

        // overloaded outletPressure methods as before (Rpm, Torque, and
        // Horsepower are assumed small value classes, since Java
        // distinguishes overloads by parameter type)
        float outletPressure(Rpm rpm) {
            // calculate the outlet pressure using the rpm from the motor
            return 0.0f; // placeholder
        }
        float outletPressure(Torque torque) {
            // calculate the outlet pressure using the torque from the motor
            return 0.0f; // placeholder
        }
        float outletPressure(Horsepower hp) {
            // calculate the outlet pressure using the horsepower from the motor
            return 0.0f; // placeholder
        }
    }

In this example, I have used method overloading to create two constructor methods. The first simply initializes the pump object so the inlet pressure is 5.0 psi. Alternatively, I could create an instance of the pump where I pass an actual inlet pressure as a parameter. I've also used what are called access modifiers (the "public" declaration which prefaces the method) to specifically allow any other object to access the pump constructor method. Access modifiers allow the programmer to define the interfaces to the data and operations in an object. To actually invoke the pump class and create an instance of a pump, in Java I would use the new operator. This would look like:

    // This creates a pump object, "aPump", which will have the default inlet pressure of 5.0 psi
    Pump aPump = new Pump();

    // This creates a pump object, "anotherPump", which will have an inlet pressure of 10.0 psi
    Pump anotherPump = new Pump(10.0f);

Components (Reference 3)

Basic objects are elemental, and we can assemble these fundamental objects into larger accumulations of objects. Continuing with the pump example: when I specify a pump, I may actually be referring to an assembly which includes the pump body, the impeller, the bearings, a gasket, the shaft, the coupling, a motor, a controller, and a foundation for the whole thing. Each one of those parts is an object in its own right. I can define these assemblies as objects themselves, or components which are collections of objects, and use them to build more complex software systems.
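Pulling the pieces of the running example together, the sketch below is one compilable consolidation of the Motor and Pump classes (the overloaded variants are dropped for brevity, and the rpm and pressure formulas are invented placeholders, not pump physics):

```java
// One-file consolidation of the motor-and-pump example. The numeric
// formulas are invented placeholders, not real pump behavior.
class Motor {
    boolean state = true; // true when the motor is "on"

    // rpm is zero when the motor is off
    float rpm() {
        return state ? 1800.0f : 0.0f; // placeholder rated speed
    }
}

class Pump {
    float inlet_pressure;

    public Pump() {
        inlet_pressure = 5.0f; // default inlet pressure, psi
    }

    public Pump(float actual_inlet_pressure) {
        inlet_pressure = actual_inlet_pressure;
    }

    // outlet pressure rises with motor rpm (illustrative formula)
    float outletPressure(Motor motor) {
        return inlet_pressure + 0.01f * motor.rpm();
    }
}

public class PumpDemo {
    public static void main(String[] args) {
        Motor motor = new Motor();
        Pump aPump = new Pump();            // default inlet pressure of 5.0 psi
        Pump anotherPump = new Pump(10.0f); // inlet pressure of 10.0 psi
        System.out.println(aPump.outletPressure(motor));       // 5.0 + 0.01 * 1800
        System.out.println(anotherPump.outletPressure(motor)); // 10.0 + 0.01 * 1800
    }
}
```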
JavaBeans (Reading Session 6)

A Java Bean is a reusable software component that can be visually manipulated in builder tools. To understand the precise meaning of this definition of a Bean, clarification is required for the following terms:
Software component - Software components are designed to apply the power and benefit of reusable, interchangeable parts from other industries to the field of software construction. Other industries have long profited from reusable components. Reusable electronic components are found on circuit boards. A typical part in your car can be replaced by a component made by one of many different competing manufacturers. Lucrative industries are built around parts construction and supply in most competitive fields. The idea is that standard interfaces allow for interchangeable, reusable components.

Builder tool - The primary purpose of Beans is to enable the visual construction of applications. You've probably used or seen applications like Visual Basic, Visual Age, or Delphi. These tools are referred to as visual application builders, or builder tools for short. Typically such tools are GUI applications, although they need not be. There is usually a palette of components available from which a program designer can drag items and place them on a form or client window.

Visual manipulation - Application builders let you do all of this, but in addition, they let you visually hook up components and select events to be fired and handlers for those events, through mouse drags or menu selections. Very little code needs to be written by hand to get the initial component interaction working properly, at least in comparison to a GUI builder or a window builder.

It's logical to wonder: "What is the difference between a Java Bean and an instance of a normal Java class?" What differentiates Beans from typical Java classes is introspection. Tools that recognize predefined patterns in method signatures and class definitions can "look inside" a Bean to determine its properties and behavior. A Bean's state can be manipulated at the time it is being assembled as a part within a larger application. This phase of application assembly is referred to as design time, in contrast to run time.
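A minimal, hypothetical Bean makes these signature patterns concrete; the PumpBean below is my own example, not from any standard library, and uses the get/set (and is, for booleans) naming conventions that introspection tools look for:

```java
import java.io.Serializable;

// A minimal, hypothetical Bean: a public no-argument constructor plus
// getX/setX (and isX for booleans) method-signature patterns are what
// introspection tools recognize as properties.
public class PumpBean implements Serializable {
    private float inletPressure = 5.0f;
    private boolean running = false;

    public PumpBean() {} // no-argument constructor

    public float getInletPressure() { return inletPressure; }
    public void setInletPressure(float p) { inletPressure = p; }

    public boolean isRunning() { return running; }
    public void setRunning(boolean r) { running = r; }
}
```

A builder tool would discover inletPressure and running as properties purely from these signatures (via java.beans.Introspector), with no extra metadata needed.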
In order for this scheme to work, method signatures within Beans must follow a certain pattern in order for introspection tools to recognize how Beans can be manipulated, both at design time and run time. In effect, Beans publish their attributes and behaviors through special method signature patterns that are recognized by beans-aware application construction tools. However, you need not have one of these construction tools in order to build or test your beans. The pattern signatures are designed to be easily recognized by human readers as well as builder tools. One of the first things you'll learn when building beans is how to recognize and construct methods that adhere to these patterns. Not all useful software modules should be Beans. Beans are best suited to software components intended to be visually manipulated within builder tools. Some functionality, however, is still best provided through a programmatic (textual) interface, rather than a visual manipulation interface. For example, an SQL or JDBC API would probably be better suited to packaging through a class library rather than a Bean.

Business Objects (Reading Session 7)

An example of this component view of software is the idea of business objects. The Object Management Group (OMG) is a non-profit organization that has been established
by the major software developers (IBM, SUN, Apple, Hewlett-Packard, ORACLE, among others) to promote the use of object-oriented software technology. One of the areas they have focused on is the common definition of business objects. Business objects are representations of the properties and behavior of real-world things or concepts that are meaningful to a business. Such things as customers, products, orders, employees, trades, financial instruments, shipping containers, and vehicles are all examples of real-world things that can be represented as business objects. They provide a way of managing complexity by giving a higher-level perspective and by packaging the essential characteristics of business concepts more completely. Business objects act as participants in business processes by performing the required tasks or steps that make up business activities. These business objects can then be used to design and implement systems which exhibit a resemblance to the business that they support. This is possible because object technology allows the development of objects in software that mirror their counterparts in the real world. Business objects are components composed of elemental objects which provide for the presentation of an interface, the business process rules, and the underlying business entity objects. The presentation object is an elemental object that establishes how the business object presents itself to, and interacts with, users and other business objects. The process object contains the rules and constraints that determine how the business object will operate. The entity object(s) contain the underlying data for the business object. Each object in the business model is used to create an executable representation of that object in your computer system. This executable object will contain and encapsulate the information and rules associated with that object and its relationships to other objects.
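A toy sketch of this three-part structure might look like the following (the class and method names are illustrative only, not an OMG standard, and the credit-limit rule is invented):

```java
// Toy sketch of a business object split into entity, process, and
// presentation parts. Names and rules are illustrative only.
class CustomerEntity {            // entity object: the underlying data
    String name;
    float creditLimit;

    CustomerEntity(String name, float creditLimit) {
        this.name = name;
        this.creditLimit = creditLimit;
    }
}

class OrderProcess {              // process object: the business rules
    // rule: an order is accepted only if it is within the credit limit
    boolean accept(CustomerEntity customer, float orderTotal) {
        return orderTotal <= customer.creditLimit;
    }
}

class OrderPresentation {         // presentation object: the interface
    String describe(CustomerEntity c, float total, boolean accepted) {
        return c.name + " order for $" + total + (accepted ? " accepted" : " rejected");
    }
}
```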
Some business objects may be implemented on top of existing applications as "wrappers", providing an interface to legacy applications. This works so long as the wrappers can "speak" according to a common protocol. Otherwise, consistency in the implementation environment (i.e., homogeneous operating systems, platforms, etc.) is not required. An application, in terms of business objects, becomes a set of cooperative business objects combined to facilitate business processes. The concept of the monolithic application becomes outmoded: instead, an information system is composed of semi-autonomous but cooperative business objects which can be more easily adapted and changed. This type of component assembly and reuse has been recognized as a better way to build information systems.

Objects in distributed computing

Object-oriented software technology allows for reuse of code and the ability to develop complex, robust applications in less time than traditional software development architectures. But it also allows for software to reflect the way that real-world entities operate, and cooperate.
Returning to the pump and motor example: because Java is network-aware and platform independent, the pump object could be running on a platform in Ann Arbor, and the motor object could be running on a platform at ABS in Houston. If these two objects were combined as a business object, then a shipyard in Asia could include the pump package as part of their software application.

Dealing with Distributed Objects

I'm sure about now you're wondering if we'll ever get to the point where we talk about simulation-based design again. The problem with any course in information technology these days is that people have a range of experience and knowledge. Some folks are specialists and others are generalists. The problem is to get everybody on the same page, especially when we are talking about the range of technology that is covered by SBD.

Objects Everywhere

Key to simulation-based design is the distributed nature of the resources that SBD pulls together. There may be an analysis program in use by a design agent that provides information to ABS as to whether the structure meets the approval requirements. In turn, those approval requirements are needed by the design agent, the ship owner, and the shipbuilder. For systems design, there are the piping designer, the manufacturer, valve vendors, equipment suppliers, and more players in the design, construction, and operation processes. Each of these players has its own design systems, analysis tools, data structures, databases, and business practices. The result is what is called the object web. It is almost the same as the World Wide Web that we know, except that the different players are sharing design applications, product model information, and analysis tools through the structure provided by business practices. What makes this different from previous environments is that information is shared, not exchanged. Objects provide the structure for the software systems to be developed.
The object model makes the process comprehensible, and provides the mechanisms for coordination. That is, it does so if we are all on the same page and speaking the same language.

Standards for Distributed Objects

What made the World Wide Web possible was the adoption of common communications standards. In our discussion of networking, it should have been apparent that what made the network possible outside of the enterprise was the establishment and adoption of communication protocols. Merely establishing communication protocols wasn't enough, though. Remember the OSI model? TCP/IP overran the ISO OSI model and has become the internetworking standard. It has done so by virtue of its ubiquity, not by international agreement. An SBD environment, like so many other business environments, cannot function without some common way of allowing objects over a network to discover each other, communicate, and cooperate. In the new economics, he who holds market share is king, and sets the standards. Thus begin the religious wars of CORBA, DCOM, and Java.
Introduction (Readings 1 and 2)

We have introduced a lot of underlying technology into this course which ultimately builds up to the development of a simulation-based design environment. As we talk about the infrastructure for distributed computing, a key concern is making different platforms, operating systems, network operating systems, and applications work together. When we think of the current computing environment, we think of it in a transaction sense. That is, information is exchanged as a file sent and received, and processing of the information in that file takes place in a serial fashion. Distributed objects, on the other hand, allow for concurrent processes to take place. It is a very egalitarian environment: processes can take place where the best resources are available, called by those who need the information. As I mentioned in the session home page, there are holy wars going on as to who will control the next incarnation of the Internet and this emerging worldwide object web. This war is between Microsoft and everybody else. In case you think this is just Microsoft bashing, consider two things: Microsoft NT is becoming a force in the enterprise network operating system market, and Intel has the dominant share of the chip sets that are in most of the computers these days, significantly threatening the high-end workstation market. Given these two market facts, the rest of the computer industry is in the position where these two companies can dictate the standards the rest of them will use. So where are IBM, SUN, HP, SGI, Apple, and the others? Surely we are not going to be forced to select our hardware, software, and networking options from just a pair of powerful vendors? What happens to our legacy systems? Are we going to replace all of them with new systems in order to comply with the emerging de facto standard?
Common Object Request Broker Architecture (CORBA) (Reading 3, Additional Reading 1, and Reference 1)

In 1989, the Object Management Group (OMG) was established to promote the theory and practice of object technology for the development of distributed computing systems. OMG's membership is currently over 800 software vendors, software developers, and end users (oddly enough, including Microsoft as a contributing member). The goal is to provide a common architectural framework for object-oriented applications based on widely available interface specifications. OMG realizes its goals by creating standards for interoperability and portability of distributed object-oriented applications, not by producing software or implementation guidelines. Specifications are put together using ideas of OMG members, who respond to Requests For Information (RFI) and Requests For Proposals (RFP). Members submit proposals for the specifications, including working prototypes of the proposed approaches. The proposed specifications are then evaluated and voted on by OMG members. The winning proposal is adopted as the standard. In 1991, the OMG released CORBA version 1.1, which defined the Interface Definition Language (IDL) and the Application Programming Interfaces (API) that enable client/server object interaction within a specific implementation of an Object Request
Broker (ORB). CORBA 2.0, adopted in December of 1994, defines true interoperability by specifying how ORBs from different vendors can interoperate. An ORB is the middleware that establishes client-server relationships between distributed objects. Through an ORB, a client application or object can invoke a method on a server object, whether it is on the same machine or across a network. The ORB intercepts the call and finds an object that can implement the request. It then passes the parameters to the discovered object, invokes its method, and returns the results. The client does not need to know where the object is located on the network, its programming language, its operating system, or any other system aspects that are not part of the object's interface. By providing common protocols, the ORB allows for interoperability between applications on different machines in heterogeneous distributed environments, seamlessly interconnecting multiple object systems. Without an ORB, application developers use their own design or a recognized standard to define the protocol to be used between the devices. Protocol definition depends on the implementation language, network transport, and a dozen other factors. ORBs simplify this process through a single implementation-language-independent specification, the IDL. ORBs provide flexibility by letting programmers choose the most appropriate operating system, execution environment, and even programming language to use for each component of a system under construction. More importantly, they allow the integration of existing components by providing a means of modeling the legacy component using the same IDL used for creating new objects. The developer then writes "wrapper" code that translates between the ORB and the interfaces to the legacy application. There are a number of different ORB products available, allowing software vendors to provide products which meet the specific needs of their operational environments.
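The request-routing idea at the heart of an ORB can be illustrated with a deliberately tiny, in-process sketch. This is not CORBA: real ORBs add networking, IDL-typed interfaces, and language independence; the toy Broker below only shows a client invoking a named service through an intermediary that locates the implementation.

```java
import java.util.HashMap;
import java.util.Map;

// A toy, in-process "broker": the client names a service and the broker
// finds the implementing object, invokes it, and returns the result.
interface PumpService {
    float outletPressure(float rpm);
}

class Broker {
    private final Map<String, PumpService> registry = new HashMap<>();

    void register(String name, PumpService impl) {
        registry.put(name, impl);
    }

    // find the implementation, invoke its method, and return the result
    float invoke(String name, float rpm) {
        PumpService impl = registry.get(name);
        if (impl == null) throw new IllegalArgumentException("no such service: " + name);
        return impl.outletPressure(rpm);
    }
}

public class BrokerDemo {
    public static void main(String[] args) {
        Broker broker = new Broker();
        broker.register("kineticPump", rpm -> 5.0f + 0.01f * rpm); // placeholder formula
        System.out.println(broker.invoke("kineticPump", 1800.0f)); // 5.0 + 0.01 * 1800
    }
}
```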
Because of this, and the fact that there are systems that are not CORBA-compliant, OMG has formulated the ORB interoperability architecture. The General Inter-ORB Protocol (GIOP) has been specifically defined to provide the mechanisms for ORB-to-ORB interaction over any transport protocol that meets a basic set of criteria. Versions of GIOP implemented using different transport protocols will not necessarily be directly compatible, but their interaction will be made more efficient. OMG has also specified how GIOP is to be implemented using the TCP/IP transport, and has thus defined the Internet Inter-ORB Protocol (IIOP). To illustrate the relationship between GIOP and IIOP, OMG points out that it is the same as that between IDL and a concrete language mapping, for example the C++ mapping. IIOP is designed to provide "out of the box" interoperability with other compatible ORBs (TCP/IP being the most popular vendor-independent transport layer). Further, IIOP can also be used as an intermediate layer between half-bridges, and in addition to its interoperability functions, vendors can use it for internal ORB messaging (although this is not required, and is only a side-effect of its definition). The specification also makes provision for a set of Environment-Specific Inter-ORB Protocols (ESIOPs). These protocols should be used for "out of the box" interoperability wherever implementations using their transport are popular.
Interface Definition Language: an example

The OMG Object Model defines common object semantics for specifying the externally visible characteristics of objects in a standard and implementation-independent way. In this model, clients request services from objects (which will also be called servers) through a well-defined interface. This interface is specified in OMG IDL (Interface Definition Language). A client accesses an object by issuing a request to the object. The request is an event, and it carries information including an operation, the object reference of the service provider, and actual parameters (if any). The object reference is an object name that identifies an object reliably. In this example, I'll define CORBA interfaces for our Pump and Motor.

    module Example {

        /* Class definition of MyPump which inherits from the general class of Pump */
        interface MyPump : Pump {
            attribute float inletPressure;
            float outletPressure(in float rpm);
        };

        /* Class definition of MyMotor which inherits from the general class of Motor */
        interface MyMotor : Motor {
            attribute float horsepower;
            float rpm(in boolean status);
        };

    }; /* End Example */

In the example IDL code above, I first created a namespace to group the set of class descriptions (or interfaces). The module is the main identifier of the set of class interfaces. The interface defines a set of methods (or operations) that a client object can invoke in the server object. An interface can have an attribute, which is a value automatically assigned to or retrieved from the server object. With the float statements, I have defined a service that a client object can request of the server object and the type of data that is returned. In both MyPump and MyMotor, I set the parameters to be passed in to the server object. Once the IDL structure has been created, it is submitted to an IDL compiler.
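For a rough sense of what comes out the other side, an IDL-to-Java compiler might emit an interface along these lines. The sketch below is illustrative only, not the literal output of a real idlj run; in the standard mapping, an IDL attribute becomes a pair of accessor methods sharing the attribute's name, and an IDL operation becomes an ordinary method.

```java
// Illustrative sketch of the Java-side interface an IDL compiler might
// generate for MyPump (not literal compiler output).
interface Pump {} // stand-in for the general Pump base interface

interface MyPump extends Pump {
    float inletPressure();           // attribute reader
    void inletPressure(float value); // attribute writer
    float outletPressure(float rpm); // operation
}

// A local implementation standing in for the ORB-generated stub/servant pair.
class MyPumpImpl implements MyPump {
    private float inletPressure = 5.0f;

    public float inletPressure() { return inletPressure; }
    public void inletPressure(float value) { inletPressure = value; }
    public float outletPressure(float rpm) {
        return inletPressure + 0.01f * rpm; // placeholder formula
    }
}
```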
The IDL compiler maps the IDL to a particular language (Java, C++) and returns the classes necessary to access the referenced classes via the ORB. The IDL compiler also generates data about the interfaces, which is stored in an interface repository that is part of the ORB. CORBA also specifies the Internet Inter-ORB Protocol (IIOP), which is the common communication mechanism for accessing objects over the Internet. The IIOP and IDL
function very much like the more familiar HTTP and CGI, however with significant performance advantages.

Microsoft's Component Object Model (COM) (Reading 4, Additional Reading 2, and Reference 2)

COM's first incarnation assumed COM objects and their clients were running on the same machine (although they could still be in the same process or in different processes). From the beginning, however, COM's designers intended to add the capability for clients to create and access objects on other machines. Although COM first made its way into the world in 1993, Distributed COM (DCOM) didn't appear until the release of Windows NT 4.0 in mid-1996. DCOM is Microsoft's alternative to CORBA. It is a distributed version of OLE (Object Linking and Embedding) 2.0's Component Object Model. Like OLE's COM, distributed COM specifies interfaces between component objects within a single application or between applications, providing local/remote transparency between components across networks. Distributed COM builds on the DCE RPC (Distributed Computing Environment remote procedure call). Like CORBA, distributed COM separates interfaces from implementations and requires that all interfaces be declared using an IDL (interface definition language). However, Microsoft's IDL, based on DCE, is not CORBA-compliant.
A COM interface is not a class in the object-oriented sense as it is in CORBA. COM interfaces do not have state and cannot be instantiated to create a unique object. Rather, a COM interface is a group of related functions, which COM clients retrieve using a pointer to access the functions in an interface. To handle named objects in the object-oriented sense, distributed COM uses the OLE moniker concept to allow instantiation of multiple objects. A client uses the moniker to reconnect to the same object instance with the same state (not just another interface pointer of the same class) at a later time. Monikers provide a combination of services, including naming, persistence, relationships, query, and object location. Distributed COM provides many of the same capabilities as CORBA, including alternatives to CORBA's persistence, transaction services, common facilities, interface repository, and relationships. However, the important distinction between distributed COM and CORBA is the OS dependence of distributed COM. DCOM really doesn't change how a client application creates and interacts with a COM object: a client uses the same code to access local and remote objects. However, a client can choose to use extras provided by DCOM. For example, DCOM includes a distributed security mechanism, providing authentication and data encryption. Another extra: to locate COM objects on other machines, DCOM can use directory services such as the Domain Name System (DNS). Many of these features are expected to be available in Windows NT 5.0. It is unlikely that Microsoft will abandon distributed COM for CORBA, and vice versa. There are a number of bridge applications that allow for communication between CORBA and distributed COM. Interoperability between OLE's non-distributed COM and CORBA is relatively easy, with implementations available from IBM, Iona, Candle, and Digital.
Distributed COM is harder because of the dissimilar object models; consequently, components won't collaborate as effectively across the network as they can within each camp.

Java RMI (Additional Reading 3 and Reference 3)

Java's Remote Method Invocation (RMI) is another choice for supporting distributed objects. Unlike CORBA and DCOM, which allow communication between objects written in various languages, RMI is focused on communication between objects implemented in Java. This limitation adds some constraints, but it also makes RMI very simple to use and more efficient. Sun Microsystems, RMI's developers, had the luxury of designing their protocol specifically to match Java's features.

Java Enterprise Beans (Reading 5, Additional Readings 4 and 5, and Reference 3)

In the previous lecture, I introduced Java Beans. In that discussion, you may have concluded that they are just a neat toy for hooking together buttons in applets. However, the next wave of Java Beans technology, called Enterprise Java Beans, extends their benefits to server systems.
An Enterprise Java Bean is an encapsulation of a piece of business logic. It can be executed in an environment that supports transaction-processing constructs. In fact, current transaction-processing environments, such as IBM's CICS, will support Enterprise Java Beans in the future. The basic structure of an Enterprise Java Bean is essentially the same as that of any other Bean. The Enterprise Bean comes in a Java archive (JAR), but contains more information that defines its transaction-scope rules. The basic model for the Enterprise Bean is one of client and server, where communication between the client application (built with conventional Beans providing the presentation objects) and the Enterprise Beans executing in the server takes place via remote method invocation (RMI), the CORBA Internet Inter-ORB Protocol (IIOP), or the forthcoming RMI over IIOP. People in the computer industry have been talking about the advantages of distributed objects and software components for a long time. Selecting a distributed object model has many implications, not the least of which is platform dependence. If you choose DCOM for a multitiered distributed-object solution, you have to consider introducing single-vendor proprietary systems in many places. You need to be able to answer such questions as:

− How many of my existing systems support Microsoft technologies? How much will it cost to replace these systems with Microsoft-capable systems?

− Will I be choosing the appropriate hardware and OS platform for my solution, or is the choice of component model giving control to a single supplier? Do I want this kind of platform lock-in?

Heterogeneous networks of systems are a reality in today's business environment, yet the selection of the wrong component model for the client in an n-tiered solution could force a corporation to spend millions of dollars changing its backend systems.
CORBA and other true open-architecture protocols are not just a component model for the client; they are the key to integrated, n-tiered solutions using the diverse array of platforms that make up today's business and engineering systems. What is required for an SBD environment is this kind of interoperability and platform independence -- independence from the development platform, independence from the execution platform, and independence from development tools.

Distributed Collaboration and Simulation

To this point in the course, we have covered the motivation for simulation-based design and much of the plumbing. The last element we need to discuss is the "simulation" part of SBD. The product model is a representation of the geometry and behavior of the product being designed. It is, in itself, not a simulation in the sense that we are using the concept. The definition of simulation that we are embracing here is the one that looks at the behavior of a model in a time domain. That is, at each instant in time there is a change to the input submitted to the parts of the model that characterize its behavior. The model then changes its relationship to the environment, which may cause a subsequent change to the environment.
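A minimal time-stepped loop makes this notion of simulation concrete. The sketch below is illustrative only: the motor ramp-up and pump response formulas are invented for the example, but the shape of the loop, where the environment feeds an input to the model at each time step and the model's state and output change in response, is the pattern being described.

```java
// A minimal discrete-time simulation sketch: at each time step the
// environment submits an input (the commanded speed) to the model, and
// the model's state and output change in response. All formulas are
// invented for illustration.
public class TimeSteppedSim {
    public static void main(String[] args) {
        float dt = 0.1f;            // time step, seconds
        float rpm = 0.0f;           // motor state
        float targetRpm = 1800.0f;  // environmental input: commanded speed
        float inletPressure = 5.0f;

        for (int step = 0; step < 50; step++) {
            // motor ramps toward the commanded speed (first-order lag)
            rpm += (targetRpm - rpm) * 0.5f * dt;
            // pump output responds to the current motor state
            float outletPressure = inletPressure + 0.01f * rpm;
            System.out.printf("t=%.1fs rpm=%.0f outlet=%.2f psi%n",
                    step * dt, rpm, outletPressure);
        }
    }
}
```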