Flowchart
Figure: A simple flowchart representing a process for dealing with a non-functioning lamp.
A flowchart is a type of diagram that represents an algorithm, workflow, or process, showing the steps as boxes of various kinds and their order by connecting them with arrows. This diagrammatic representation illustrates a solution model to a given problem.
Flowcharts are used in analyzing, designing, documenting or managing a process or program
in various fields.[1]
Overview
Figure: Flowchart of a for loop
Flowcharts are used in designing and documenting simple processes or programs. Like other
types of diagrams, they help visualize what is going on and thereby help understand a
process, and perhaps also find flaws, bottlenecks, and other less-obvious features within it.
There are many different types of flowcharts, and each type has its own repertoire of boxes
and notational conventions. The two most common types of boxes in a flowchart are:
a processing step, usually called activity, and denoted as a rectangular box
a decision, usually denoted as a diamond.
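These two building blocks are enough to encode a complete flowchart as data. The sketch below (all node names and labels are hypothetical, loosely following the lamp-troubleshooting example) represents decision nodes as yes/no branches and process/terminal nodes as single-successor steps, then walks the chart:

```python
# A minimal sketch: a flowchart encoded as a dict of nodes. Decision nodes
# branch on a yes/no answer; process/terminal nodes have one successor (or none).
FLOWCHART = {
    "start":    ("process", "Lamp doesn't work", "plugged?"),
    "plugged?": ("decision", "Lamp plugged in?", {"no": "plug_in", "yes": "bulb?"}),
    "plug_in":  ("terminal", "Plug in lamp", None),
    "bulb?":    ("decision", "Bulb burned out?", {"yes": "replace", "no": "repair"}),
    "replace":  ("terminal", "Replace bulb", None),
    "repair":   ("terminal", "Repair lamp", None),
}

def run(chart, answers):
    """Walk the flowchart from 'start', consulting `answers` at each decision."""
    node, path = "start", []
    while node is not None:
        kind, label, nxt = chart[node]
        path.append(label)
        if kind == "decision":
            node = nxt[answers[label]]   # follow the branch chosen by the answer
        else:
            node = nxt                   # process/terminal: single successor
    return path

print(run(FLOWCHART, {"Lamp plugged in?": "yes", "Bulb burned out?": "yes"}))
```

Tracing the chart with different answers yields different paths, which is exactly what following the arrows on the drawn diagram would do.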
A flowchart is described as "cross-functional" when the page is divided into different
swimlanes describing the control of different organizational units. A symbol appearing in a
particular "lane" is within the control of that organizational unit. This technique allows the
author to locate the responsibility for performing an action or making a decision correctly,
showing the responsibility of each organizational unit for different parts of a single process.
Flowcharts depict certain aspects of processes and they are usually complemented by other
types of diagram. For instance, Kaoru Ishikawa defined the flowchart as one of the seven
basic tools of quality control, next to the histogram, Pareto chart, check sheet, control chart,
cause-and-effect diagram, and the scatter diagram. Similarly, in UML, a standard concept-
modeling notation used in software development, the activity diagram, which is a type of
flowchart, is just one of many different diagram types.
Nassi-Shneiderman diagrams and Drakon-charts are an alternative notation for process flow.
Common alternative names include: flow chart, process flowchart, functional flowchart,
process map, process chart, functional process chart, business process model, process model,
process flow diagram, work flow diagram, business flow diagram. The terms "flowchart" and
"flow chart" are used interchangeably.
The underlying graph structure of a flowchart is a flow graph, which abstracts away node
types, their contents and other ancillary information.
History
The first structured method for documenting process flow, the "flow process chart", was
introduced by Frank and Lillian Gilbreth to members of the American Society of Mechanical
Engineers (ASME) in 1921 in the presentation "Process Charts: First Steps in Finding the
One Best Way to do Work".[2] The Gilbreths' tools quickly found their way into industrial
engineering curricula. In the early 1930s, an industrial engineer, Allan H. Mogensen began
training business people in the use of some of the tools of industrial engineering at his Work
Simplification Conferences in Lake Placid, New York.
A 1944 graduate of Mogensen's class, Art Spinanger, took the tools back to Procter and
Gamble where he developed their Deliberate Methods Change Program. Another 1944
graduate, Ben S. Graham, Director of Formcraft Engineering at Standard Register Industrial,
adapted the flow process chart to information processing with his development of the multi-
flow process chart to display multiple documents and their relationships.[3] In 1947, ASME
adopted a symbol set derived from Gilbreth's original work as the "ASME Standard:
Operation and Flow Process Charts."[4]
Douglas Hartree in 1949 explained that Herman Goldstine and John von Neumann had
developed a flowchart (originally, diagram) to plan computer programs.[5] His contemporary
account is endorsed by IBM engineers[6] and by Goldstine's personal recollections.[7] The
original programming flowcharts of Goldstine and von Neumann can be seen in their
unpublished report, "Planning and coding of problems for an electronic computing
instrument, Part II, Volume 1" (1947), which is reproduced in von Neumann's collected
works.[8]
Flowcharts became a popular means for describing computer algorithms. The popularity of
flowcharts decreased in the 1970s when interactive computer terminals and third-generation
programming languages became common tools for computer programming. Algorithms can
be expressed much more concisely as source code in such languages. Often pseudo-code is
used, which uses the common idioms of such languages without strictly adhering to the
details of a particular one.
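The conciseness argument is easy to see with the for-loop flowchart mentioned earlier: the separate boxes for initialization, condition test, body, and increment collapse into a couple of lines of source code.

```python
# The boxes of a for-loop flowchart map onto language constructs:
#   initialize counter -> loop header
#   decision (i < n)   -> loop condition
#   process (body)     -> loop body
#   increment          -> implicit in the for statement
total = 0
for i in range(5):   # initialize, test, and increment in one line
    total += i       # the single "process" box of the loop body
print(total)         # -> 10
```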
Nowadays flowcharts are still used for describing computer algorithms.[9] Modern techniques
such as UML activity diagrams and Drakon-charts can be considered to be extensions of the
flowchart.
Types
Sterneckert (2003) suggested that flowcharts can be modeled from the perspective of
different user groups (such as managers, system analysts and clerks) and that there are four
general types:[10]
Document flowcharts, showing controls over a document-flow through a system
Data flowcharts, showing controls over a data-flow in a system
System flowcharts, showing controls at a physical or resource level
Program flowcharts, showing the controls in a program within a system
Notice that every type of flowchart focuses on some kind of control, rather than on the
particular flow itself.[10]
However, there are other classifications as well. For example, Andrew Veronis (1978)
named three basic types of flowcharts: the system flowchart, the general flowchart, and the
detailed flowchart.[11] That same year Marilyn Bohl (1978) stated "in practice, two kinds of
flowcharts are used in solution planning: system flowcharts and program flowcharts...".[12]
More recently Mark A. Fryman (2001) stated that there are more differences: "Decision
flowcharts, logic flowcharts, systems flowcharts, product flowcharts, and process flowcharts
are just a few of the different types of flowcharts that are used in business and
government".[13]
In addition, many diagram techniques exist that are similar to flowcharts but carry a different
name, such as UML activity diagrams.
Building blocks
Common symbols
The American National Standards Institute (ANSI) set standards for flowcharts and their
symbols in the 1960s.[14] The International Organization for Standardization (ISO) adopted
the ANSI symbols in 1970.[15] The current standard was revised in 1985.[16] Generally,
flowcharts flow from top to bottom and left to right.[17]
ANSI/ISO shapes:

Flowline (Arrowhead)[15]: Shows the program's order of operation; a line coming from one symbol and ending at another.[14] Arrowheads are added if the flow is not the standard top-to-bottom, left-to-right.[15]

Terminal[14]: Beginning or ending of a program or sub-process. Represented as a stadium,[14] oval, or rounded (fillet) rectangle. They usually contain the word "Start" or "End", or another phrase signaling the start or end of a process, such as "submit inquiry" or "receive product".

Process[15]: Set of operations that change the value, form, or location of data. Represented as a rectangle.[15]

Decision[15]: Conditional operation determining which of two paths the program will take.[14] The operation is commonly a yes/no question or true/false test. Represented as a diamond (rhombus).[15]

Input/Output[15]: Input and output of data,[15] as in entering data or displaying results. Represented as a parallelogram.[14]

Annotation (Comment)[15]: Additional information about a step in the program. Represented as an open rectangle with a dashed or solid line connecting it to the corresponding symbol in the flowchart.[15]

Predefined Process[14]: Named process which is defined elsewhere. Represented as a rectangle with double-struck vertical edges.[14]

On-page Connector[14]: Pairs of labeled connectors replace long or confusing lines on a flowchart page. Represented by a small circle with a letter inside.[14][18]

Off-page Connector[14]: A labeled connector for use when the target is on another page. Represented as a home plate-shaped pentagon.[14][18]
Other symbols
The ANSI/ISO standards include symbols beyond the basic shapes. Some are:[17][18]
Data File or Database represented by a cylinder (disk drive).
Document represented as a rectangle with a wavy base.
Manual input represented by a quadrilateral, with the top irregularly sloping up from
left to right, like the side view of a keyboard.
Manual operation represented by a trapezoid with the longest parallel side at the top,
to represent an operation or adjustment to a process that can only be made manually.
Parallel Mode represented by two horizontal lines at the beginning or ending of
simultaneous operations[17]
Preparation or Initialization represented by an elongated hexagon, originally used for
steps like setting a switch or initializing a routine.
For parallel and concurrent processing the Parallel Mode horizontal lines[19] or a horizontal
bar[20] indicate the start or end of a section of processes that can be done independently:
At a fork, the process creates one or more additional processes, indicated by a bar
with one incoming path and two or more outgoing paths.
At a join, two or more processes continue as a single process, indicated by a bar with
several incoming paths and one outgoing path. All processes must complete before
the single process continues.[20]
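Fork and join behave exactly like the thread primitives of most languages, which makes the notation easy to sketch in code. The example below (names are illustrative) forks three independent activities and joins them before the single flow continues:

```python
# Sketch of fork/join semantics (Parallel Mode bars) using threads.
# At the fork, several independent activities start; at the join, all of
# them must complete before the single process continues.
import threading

results = []
lock = threading.Lock()

def activity(name):
    with lock:                 # guard the shared list against concurrent appends
        results.append(name)

# Fork: one incoming path, three outgoing paths.
threads = [threading.Thread(target=activity, args=(n,)) for n in ("A", "B", "C")]
for t in threads:
    t.start()

# Join: wait for every forked process before continuing as a single process.
for t in threads:
    t.join()

print(sorted(results))   # completion order is nondeterministic, so we sort
```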
Software
Diagramming
Figure: Flowgorithm
Any drawing program can be used to create flowchart diagrams, but these will have no
underlying data model to share data with databases or other programs such as project
management systems or spreadsheets. Some tools such as yEd, Inkscape and Microsoft Visio
offer special support for flowchart drawing. Many software packages exist that can create
flowcharts automatically, either directly from a programming language source code, or from
a flowchart description language. Online, web-based versions of such programs are available.
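Automatic flowchart generation usually means emitting a textual diagram description for a renderer. A hedged sketch: the snippet below builds Graphviz DOT text (a widely used diagram language) from a tiny, hypothetical flowchart description; any DOT-aware tool can then render it, and no special library is needed to emit the text.

```python
# Map flowchart node kinds to conventional DOT shapes.
SHAPES = {"terminal": "oval", "process": "box", "decision": "diamond"}

# A hypothetical flowchart: (name, kind, label) nodes and (src, dst, label) edges.
nodes = [("start", "terminal", "Start"),
         ("work", "process", "Do work"),
         ("done", "decision", "More work?"),
         ("end", "terminal", "End")]
edges = [("start", "work", ""), ("work", "done", ""),
         ("done", "work", "yes"), ("done", "end", "no")]

lines = ["digraph flow {"]
for name, kind, label in nodes:
    lines.append(f'  {name} [shape={SHAPES[kind]}, label="{label}"];')
for src, dst, label in edges:
    attr = f' [label="{label}"]' if label else ""
    lines.append(f"  {src} -> {dst}{attr};")
lines.append("}")
print("\n".join(lines))
```

Feeding the printed text to `dot -Tpng` would produce the drawn diagram; the point is that the flowchart itself is plain data.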
There are several applications and visual programming languages[21] that use flowcharts to
represent and execute programs, generally as teaching tools for beginning students.
Examples include Flowgorithm, Raptor, LARP, Visual Logic, and VisiRule.
See also
Related diagrams
Activity diagram
Control flow diagram
Control flow graph
Data flow diagram
Deployment flowchart
Drakon-chart
Flow map
Functional flow block diagram
Nassi–Shneiderman diagram
State diagram
Related subjects
Warnier/Orr diagram
Augmented transition network
Business process mapping
Interactive EasyFlow
Process architecture
Pseudocode
Recursive transition network
Unified Modeling Language (UML)
Workflow
References
1. SEVOCAB: Software Systems Engineering Vocabulary. Term: Flow chart. Retrieved 31 July 2008.
2. Frank Bunker Gilbreth, Lillian Moller Gilbreth (1921). Process Charts. American Society of Mechanical Engineers.
3. Graham, Jr., Ben S. (10 June 1996). "People come first". Keynote Address at Workflow Canada.
4. American Society of Mechanical Engineers (1947). ASME standard; operation and flow process charts. New York, 1947.
5. Hartree, Douglas (1949). Calculating Instruments and Machines. The University of Illinois Press. p. 112.
6. Bashe, Charles (1986). IBM's Early Computers. The MIT Press. p. 327.
7. Goldstine, Herman (1972). The Computer from Pascal to Von Neumann. Princeton University Press.
8. Taub, Abraham (1963). John von Neumann Collected Works. Vol. 5. Macmillan. pp. 80–151.
9. Bohl, Marilyn; Rynn, Maria (2007). Tools for Structured and Object-Oriented Design. Prentice Hall.
10. Alan B. Sterneckert (2003). Critical Incident Management. p. 126.
11. Andrew Veronis (1978). Microprocessors: Design and Applications. p. 111.
12. Marilyn Bohl (1978). A Guide for Programmers. p. 65.
13. Mark A. Fryman (2001). Quality and Process Improvement. p. 169.
14. Gary B. Shelly; Misty E. Vermaat (2011). Discovering Computers, Complete: Your Interactive Guide to the Digital World. Cengage Learning. pp. 691–693. ISBN 1-111-53032-7.
15. Harley R. Myler (1998). "2.3 Flowcharts". Fundamentals of Engineering Programming with C and Fortran. Cambridge University Press. pp. 32–36. ISBN 978-0-521-62950-8.
16. "ISO 5807:1985". International Organization for Standardization. February 1985. Retrieved 23 July 2017.
17. Flowcharting Techniques GC20-8152-1. IBM. March 1970. p. 10.
18. "What do the different flowchart shapes mean?". RFF Electronics. Retrieved 23 July 2017.
19. Jonathan W. Valvano (2011). Embedded Microcomputer Systems: Real Time Interfacing. Cengage Learning. pp. 131–132. ISBN 1-111-42625-2.
20. Robbie T. Nakatsu (2009). Reasoning with Diagrams: Decision-Making and Problem-Solving with Diagrams. John Wiley & Sons. pp. 68–69. ISBN 978-0-470-40072-2.
21. Myers, Brad A. "Visual programming, programming by example, and program visualization: a taxonomy". ACM SIGCHI Bulletin. Vol. 17, No. 4. ACM, 1986.
Computer-aided software engineering
Figure: Example of a CASE tool.
Computer-aided software engineering (CASE) is the domain of software tools used to
design and implement applications. CASE tools are similar to and were partly inspired by
computer-aided design (CAD) tools used for designing hardware products. CASE tools are
used for developing high-quality, defect-free, and maintainable software.[1] CASE software is
often associated with methods for the development of information systems together with
automated tools that can be used in the software development process.[2]
History
The Information System Design and Optimization System (ISDOS) project, started in 1968 at
the University of Michigan, initiated a great deal of interest in the whole concept of using
computer systems to help analysts in the very difficult process of analysing requirements and
developing systems. Several papers by Daniel Teichroew fired a whole generation of
enthusiasts with the potential of automated systems development. His Problem Statement
Language / Problem Statement Analyzer (PSL/PSA) tool was a CASE tool although it
predated the term.[3]
Another major thread emerged as a logical extension to the data dictionary of a database. By
extending the range of metadata held, the attributes of an application could be held within a
dictionary and used at runtime. This "active dictionary" became the precursor to the more
modern model-driven engineering capability. However, the active dictionary did not provide
a graphical representation of any of the metadata. It was the linking of the concept of a
dictionary holding analysts' metadata, as derived from the use of an integrated set of
techniques, together with the graphical representation of such data that gave rise to the earlier
versions of CASE.[4]
The term was originally coined by software company Nastec Corporation of Southfield,
Michigan in 1982 with their original integrated graphics and text editor GraphiText, which
also was the first microcomputer-based system to use hyperlinks to cross-reference text
strings in documents—an early forerunner of today's web page link. GraphiText's successor
product, DesignAid, was the first microprocessor-based tool to logically and semantically
evaluate software and system design diagrams and build a data dictionary.
Under the direction of Albert F. Case, Jr., vice president for product management and
consulting, and Vaughn Frick, director of product management, the DesignAid product suite
was expanded to support analysis of a wide range of structured analysis and design
methodologies, including those of Ed Yourdon and Tom DeMarco, Chris Gane & Trish
Sarson, Ward-Mellor (real-time) SA/SD and Warnier-Orr (data driven).[5]
The next entrant into the market was Excelerator from Index Technology in Cambridge,
Mass. While DesignAid ran on Convergent Technologies and later Burroughs Ngen
networked microcomputers, Index launched Excelerator on the IBM PC/AT platform. While,
at the time of launch, and for several years, the IBM platform did not support networking or a
centralized database as did the Convergent Technologies or Burroughs machines, the allure
of IBM was strong, and Excelerator came to prominence. Hot on the heels of Excelerator
were a rash of offerings from companies such as Knowledgeware (James Martin, Fran
Tarkenton and Don Addington), Texas Instruments' IEF and Andersen Consulting's
FOUNDATION toolset (DESIGN/1, INSTALL/1, FCP).
CASE tools were at their peak in the early 1990s.[6] At the time IBM had proposed AD/Cycle,
an alliance of software vendors centered on IBM's software repository using IBM DB2 on the
mainframe and OS/2:
The application development tools can be from several sources: from IBM, from
vendors, and from the customers themselves. IBM has entered into relationships with
Bachman Information Systems, Index Technology Corporation, and Knowledgeware
wherein selected products from these vendors will be marketed through an IBM
complementary marketing program to provide offerings that will help to achieve
complete life-cycle coverage.[7]
With the decline of the mainframe, AD/Cycle and the Big CASE tools died off, opening the
market for the mainstream CASE tools of today. Many of the leaders of the CASE market of
the early 1990s ended up being purchased by Computer Associates, including IEW, IEF,
ADW, Cayenne, and Learmonth & Burchett Management Systems (LBMS). The other trend
that led to the evolution of CASE tools was the rise of object-oriented methods and tools.
Most of the various tool vendors added some support for object-oriented methods and tools.
In addition new products arose that were designed from the bottom up to support the object-
oriented approach. Andersen developed its project Eagle as an alternative to Foundation.
Several of the thought leaders in object-oriented development each developed their own
methodology and CASE tool set: Jacobson, Rumbaugh, Booch, etc. Eventually, these diverse
tool sets and methods were consolidated via standards led by the Object Management Group
(OMG). The OMG's Unified Modeling Language (UML) is currently widely accepted as the
industry standard for object-oriented modeling.
CASE software
A. Fuggetta classified CASE software into three categories:[8]
1. Tools support specific tasks in the software life-cycle.
2. Workbenches combine two or more tools focused on a specific part of the software
life-cycle.
3. Environments combine two or more tools or workbenches and support the complete
software life-cycle.
Tools
CASE tools support specific tasks in the software development life-cycle. They can be
divided into the following categories:
1. Business and Analysis modeling. Graphical modeling tools. E.g., E/R modeling,
object modeling, etc.
2. Development. Design and construction phases of the life-cycle. Debugging
environments. E.g., GNU Debugger.
3. Verification and validation. Analyze code and specifications for correctness,
performance, etc.
4. Configuration management. Control the check-in and check-out of repository objects
and files. E.g., SCCS, CMS.
5. Metrics and measurement. Analyze code for complexity, modularity (e.g., no "go
to's"), performance, etc.
6. Project management. Manage project plans, task assignments, scheduling.
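A "metrics and measurement" tool in this taxonomy can be very small. The following toy sketch (the metric is a deliberate simplification, not any real CASE product) counts branch points in Python source with the standard-library ast module, in the spirit of cyclomatic-complexity analyzers:

```python
# A toy metrics tool: a crude cyclomatic-style count of branch points.
import ast

def branch_count(source):
    """Count if/for/while/try nodes; straight-line code scores 1."""
    tree = ast.parse(source)
    branches = sum(isinstance(node, (ast.If, ast.For, ast.While, ast.Try))
                   for node in ast.walk(tree))
    return branches + 1

code = """
def classify(x):
    if x < 0:
        return "negative"
    for _ in range(3):
        x -= 1
    return "done"
"""
print(branch_count(code))   # one if + one for + the base path -> 3
```

A real metrics tool would report per-function scores and flag modules over a threshold; the essential move, parsing code into a tree and counting structural features, is the same.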
Another common way to distinguish CASE tools is the distinction between Upper CASE and
Lower CASE. Upper CASE Tools support business and analysis modeling. They support
traditional diagrammatic languages such as ER diagrams, Data flow diagram, Structure
charts, Decision Trees, Decision tables, etc. Lower CASE Tools support development
activities, such as physical design, debugging, construction, testing, component integration,
maintenance, and reverse engineering. All other activities span the entire life-cycle and apply
equally to upper and lower CASE.[9]
Workbenches
Workbenches integrate two or more CASE tools and support specific software-process
activities. Hence they achieve:
a homogeneous and consistent interface (presentation integration).
seamless integration of tools and tool chains (control and data integration).
An example workbench is Microsoft's Visual Basic programming environment. It
incorporates several development tools: a GUI builder, smart code editor, debugger, etc. Most
commercial CASE products tended to be such workbenches that seamlessly integrated two or
more tools. Workbenches can also be classified in the same manner as tools: as focusing on
analysis, development, verification, etc., as well as on Upper CASE, Lower CASE, or
processes such as configuration management that span the complete life-cycle.
Environments
An environment is a collection of CASE tools or workbenches that attempts to support the
complete software process. This contrasts with tools that focus on one specific task or a
specific part of the life-cycle. CASE environments are classified by Fuggetta as follows:[8]
1. Toolkits. Loosely coupled collections of tools. These typically build on operating
system workbenches such as the Unix Programmer's Workbench or the VMS VAX
set. They typically perform integration via piping or some other basic mechanism to
share data and pass control. The strength of easy integration is also one of the
drawbacks. Simple passing of parameters via technologies such as shell scripting can't
provide the kind of sophisticated integration that a common repository database can.
2. Fourth generation. These environments are also known as 4GL standing for fourth
generation language environments due to the fact that the early environments were
designed around specific languages such as Visual Basic. They were the first
environments to provide deep integration of multiple tools. Typically these
environments were focused on specific types of applications. For example, user-
interface driven applications that did standard atomic transactions to a relational
database. Examples are Informix 4GL, and Focus.
3. Language-centered. Environments based on a single often object-oriented language
such as the Symbolics Lisp Genera environment or VisualWorks Smalltalk from
ParcPlace. In these environments all the operating system resources were objects in
the object-oriented language. This provides powerful debugging and graphical
opportunities but the code developed is mostly limited to the specific language. For
this reason, these environments were mostly a niche within CASE. Their use was
mostly for prototyping and R&D projects. A common core idea for these
environments was the model-view-controller user interface that facilitated keeping
multiple presentations of the same design consistent with the underlying model. The
MVC architecture was adopted by the other types of CASE environments as well as
many of the applications that were built with them.
4. Integrated. These environments are an example of what most IT people tend to think
of first when they think of CASE. Environments such as IBM's AD/Cycle, Andersen
Consulting's FOUNDATION, the ICL CADES system, and DEC Cohesion. These
environments attempt to cover the complete life-cycle from analysis to maintenance
and provide an integrated database repository for storing all artifacts of the software
process. The integrated software repository was the defining feature for these kinds of
tools. They provided multiple different design models as well as support for code in
heterogeneous languages. One of the main goals for these types of environments was
"round trip engineering": being able to make changes at the design level and have
those automatically be reflected in the code and vice versa. These environments were
also typically associated with a particular methodology for software development. For
example, the FOUNDATION CASE suite from Andersen was closely tied to the
Andersen Method/1 methodology.
5. Process-centered. This is the most ambitious type of integration. These environments
attempt to not just formally specify the analysis and design objects of the software
process but the actual process itself and to use that formal process to control and
guide software projects. Examples are East, Enterprise II, Process Wise, Process
Weaver, and Arcadia. These environments were by definition tied to some
methodology since the software process itself is part of the environment and can
control many aspects of tool invocation.
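The loose, pipe-based integration that characterizes toolkit environments can be sketched without any operating system machinery. In the hypothetical example below, each "tool" only sees the previous tool's text output, mimicking a Unix pipeline; there is no shared repository, which is exactly the strength and the weakness described above.

```python
# Toolkit-style integration: independent "tools" share data only by
# passing a text stream, in the spirit of Unix pipes (modeled here as
# plain functions over strings to keep the example self-contained).
def sort_tool(text):
    return "\n".join(sorted(text.splitlines()))

def uniq_tool(text):
    out, prev = [], object()
    for line in text.splitlines():
        if line != prev:          # drop adjacent duplicates, like uniq(1)
            out.append(line)
        prev = line
    return "\n".join(out)

def pipeline(text, *tools):
    for tool in tools:            # each stage sees only the previous stage's output
        text = tool(text)
    return text

print(pipeline("b\na\nb\na", sort_tool, uniq_tool))
```

Swapping a stage or adding one is trivial, but no stage can ask another for structured metadata; that is what a common repository database adds.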
In practice, the distinction between workbenches and environments was flexible. Visual
Basic for example was a programming workbench but was also considered a 4GL
environment by many. The features that distinguished workbenches from environments were
deep integration via a shared repository or common language and some kind of methodology
(integrated and process-centered environments) or domain (4GL) specificity.[8]
Major CASE Risk Factors
Some of the most significant risk factors for organizations adopting CASE technology
include:
Inadequate standardization. Organizations usually have to tailor and adopt
methodologies and tools to their specific requirements. Doing so may require
significant effort to integrate both divergent technologies as well as divergent
methods. For example, before the adoption of the UML standard the diagram
conventions and methods for designing object-oriented models were vastly different
among followers of Jacobson, Booch, and Rumbaugh.
Unrealistic expectations. The proponents of CASE technology—especially vendors
marketing expensive tool sets—often hype expectations that the new approach will be
a silver bullet that solves all problems. In reality no such technology can do that and if
organizations approach CASE with unrealistic expectations they will inevitably be
disappointed.
Inadequate training. As with any new technology, CASE requires time to train people
in how to use the tools and to get up to speed with them. CASE projects can fail if
practitioners are not given adequate time for training or if the first project attempted
with the new technology is itself highly mission critical and fraught with risk.
Inadequate process control. CASE provides significant new capabilities to utilize new
types of tools in innovative ways. Without the proper process guidance and controls
these new capabilities can cause significant new problems as well.[10]
See also
Data modeling
Domain-specific modeling
Method engineering
Model-driven architecture
Modeling language
Rapid application development
Model-based architecture
References
1. Kuhn, D.L. (1989). "Selecting and effectively using a computer aided software engineering tool". Annual Westinghouse Computer Symposium; 6–7 Nov 1989; Pittsburgh, PA (US); DOE Project.
2. P. Loucopoulos and V. Karakostas (1995). System Requirements Engineering. McGraw-Hill.
3. Teichroew, Daniel; Hershey, Ernest Allen (1976). "PSL/PSA: a computer-aided technique for structured documentation and analysis of information processing systems". Proceedings of the 2nd International Conference on Software Engineering. IEEE Computer Society Press.
4. Coronel, Carlos; Morris, Steven (4 February 2014). Database Systems: Design, Implementation, & Management. Cengage Learning. pp. 695–700. ISBN 1285196147. Retrieved 25 November 2014.
5. Case, Albert (Fall 1985). "Computer-aided software engineering (CASE): technology for improving software development productivity". ACM SIGMIS Database. 17 (1): 35–43.
6. Yourdon, Ed (23 July 2001). "Can XP Projects Grow?". Computerworld. Retrieved 25 November 2014.
7. "AD/Cycle strategy and architecture". IBM Systems Journal. 29 (2), 1990; p. 172.
8. Alfonso Fuggetta (December 1993). "A classification of CASE technology". Computer. 26 (12): 25–38. doi:10.1109/2.247645.
9. Sangeeta Sabharwal. Software Engineering: Tools, Principles and Techniques. Umesh Publications.
10. "Computer Aided Software Engineering". In: FFIEC IT Examination Handbook InfoBase. Retrieved 3 Mar 2012.
Simulation software
Simulation software is based on the process of modeling a real phenomenon with a set of
mathematical formulas. It is, essentially, a program that allows the user to observe an
operation through simulation without actually performing that operation. Simulation software
is used widely to design equipment so that the final product will be as close to design specs as
possible without expensive in-process modification. Simulation software with real-time
response is often used in gaming, but it also has important industrial applications. When the
penalty for improper operation is costly, as with airplane pilots, nuclear power plant
operators, or chemical plant operators, a mock-up of the actual control panel is connected to a
real-time simulation of the physical response, giving valuable training experience without
fear of a disastrous outcome.
Advanced computer programs can simulate power system behavior, weather conditions,
electronic circuits, chemical reactions, mechatronics, heat pumps, feedback control systems,
atomic reactions, even complex biological processes. In theory, any phenomenon that can be
reduced to mathematical data and equations can be simulated on a computer. Simulation can
be difficult because most natural phenomena are subject to an almost infinite number of
influences. One of the tricks to developing useful simulations is to determine which are the
most important factors that affect the goals of the simulation.
In addition to imitating processes to see how they behave under different conditions,
simulations are also used to test new theories. After creating a theory of causal relationships,
the theorist can codify the relationships in the form of a computer program. If the program
then behaves in the same way as the real process, there is a good chance that the proposed
relationships are correct.
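Codifying a theory and checking it against observations can be done in a few lines. The sketch below is purely illustrative: the "observations" are hypothetical numbers, and the proposed relationship is Newton-style exponential cooling toward an ambient temperature.

```python
# Sketch of theory testing via simulation: codify a proposed causal
# relationship, run it, and compare its behavior to observations.
import math

def simulate_cooling(T0, T_env, k, times):
    """Proposed theory: temperature decays exponentially toward T_env."""
    return [T_env + (T0 - T_env) * math.exp(-k * t) for t in times]

times = [0, 5, 10, 15]
observed = [90.0, 58.4, 41.1, 31.6]          # hypothetical measurements
predicted = simulate_cooling(T0=90.0, T_env=20.0, k=0.12, times=times)

# If predictions track the observations, the proposed relationship is plausible.
max_error = max(abs(o - p) for o, p in zip(observed, predicted))
print(round(max_error, 2))
```

A real study would fit the rate constant to data and quantify the discrepancy statistically, but the logic is the same: the program behaving like the process is evidence for the relationships it encodes.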
General simulation
General simulation packages fall into two categories: discrete event and continuous
simulation. Discrete event simulations are used to model statistical events such as customers
arriving in queues at a bank. By properly correlating arrival probabilities with observed
behavior, a model can determine optimal queue count to keep queue wait times at a specified
level. Continuous simulators are used to model a wide variety of physical phenomena like
ballistic trajectories, human respiration, electric motor response, radio frequency data
communication, steam turbine power generation etc. Simulations are used in initial system
design to optimize component selection and controller gains, as well as in Model Based
Design systems to generate embedded control code. Real-time operation of continuous
simulation is used for operator training and off-line controller tuning.
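The bank-queue example above can be reduced to a minimal discrete-event sketch. In the illustrative code below (all numbers are made up, and arrivals are deterministic rather than drawn from a probability distribution), a single server handles customers first-come-first-served and we record each customer's wait:

```python
# A minimal discrete-event sketch: customers arrive at given times and are
# served by one teller; we measure the wait of each customer.
def simulate_queue(arrivals, service_time):
    """arrivals: sorted arrival times; one server, FIFO discipline."""
    free_at = 0.0                     # time at which the server next becomes free
    waits = []
    for t in arrivals:
        start = max(t, free_at)       # wait if the server is still busy
        waits.append(start - t)
        free_at = start + service_time
    return waits

print(simulate_queue([0, 1, 2, 3], service_time=2))   # -> [0.0, 1.0, 2.0, 3.0]
```

A production discrete-event package would draw arrival and service times from fitted distributions and run many replications, but the core event bookkeeping is exactly this.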
There are four well-known simulation approaches: the Event-Scheduling method, Activity
Scanning, Process Interaction, and the Three-Phase approach. Comparing them, the following
can be noted:
The Event-Scheduling method is the simplest of the four: it has only two phases, with no distinction between bound ("B") and conditional ("C") activities. This lets the program run faster, since there is no scanning for conditional events. The same simplicity is also the method's main disadvantage: because all events are mixed together (no Bs and Cs), the method is not parsimonious, which makes it very hard to enhance (Pidd, 1998).

The Activity Scanning approach is also simpler than the Three-Phase method, since it has no event calendar, and it supports parsimonious modeling. However, it is much slower than Three-Phase, because it treats all activities as conditional. Its executive, too, has two phases, and the approach is often confused with the Three-Phase method (Pidd, 1998).

Process Interaction approaches "share two common advantages: first, they avoid programs that are slow to run; second, they avoid the need to think through all possible logical consequences of an event" (Pidd, 1998). Yet, as Pidd (1998) notes, the approach suffers from the deadlock problem, although it remains very attractive to novice modelers. Schriber et al. (2003), on the other hand, say that "process interaction was understood only by an elite group of individuals and was beyond the reach of ordinary programmers", and add that "multi-threaded applications were talked about in computer science classes, but rarely used in the broader community", which indicates that Process Interaction was very difficult to implement. The apparent contradiction between these accounts is due to a mix-up between the Process Interaction approach and the Transaction-Flow approach. The origins of Transaction-Flow are best stated by Schriber et al. (2003):

This was the primordial soup out of which the Gordon Simulator arose. Gordon's transaction flow world-view was a cleverly disguised form of process interaction that put the process interaction approach within the grasp of ordinary users. Gordon did one of the great packaging jobs of all time. He devised a set of building blocks that could be put together to build a flowchart that graphically depicted the operation of a system. Under this modeling paradigm, the flow of elements through a system was readily visible, because that was the focus of the whole approach.

The Three-Phase approach makes it possible to "simulate parallelism, whilst avoiding deadlock" (Pidd and Cassel, 1998). It does have to scan the schedule for bound activities and then scan all conditional activities, which slows it down, but many modelers forgo the extra time in return for solving the deadlock problem. Three-Phase is in fact used in distributed systems, whether talking about operating systems,
databases, etc, under different names among them Three-Phase commit see (Tanenbaum and
Steen, 2002).[1]
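The Three-Phase executive described above can be sketched in a few dozen lines. The class and method names below are hypothetical, chosen only to mirror Pidd's terminology: the A phase advances the clock to the next bound event, the B phase fires every bound event due at that time, and the C phase rescans conditional activities until none can start.

```python
import heapq

class ThreePhaseSim:
    """Sketch of a Three-Phase simulation executive (hypothetical API)."""
    def __init__(self):
        self.clock = 0.0
        self.calendar = []       # heap of (time, seq, bound_event)
        self.conditionals = []   # list of (test, action) pairs
        self._seq = 0            # tie-breaker so events never compare

    def schedule(self, delay, bound_event):
        heapq.heappush(self.calendar, (self.clock + delay, self._seq, bound_event))
        self._seq += 1

    def add_conditional(self, test, action):
        self.conditionals.append((test, action))

    def run(self, until):
        while self.calendar and self.calendar[0][0] <= until:
            # A phase: advance the clock to the next bound event.
            self.clock = self.calendar[0][0]
            # B phase: execute every bound event due now.
            while self.calendar and self.calendar[0][0] == self.clock:
                _, _, event = heapq.heappop(self.calendar)
                event(self)
            # C phase: rescan conditional activities until quiescent.
            fired = True
            while fired:
                fired = False
                for test, action in self.conditionals:
                    if test(self):
                        action(self)
                        fired = True

# Example: a single machine. Arrivals and service completions are bound
# events; "start service" is conditional on the machine being idle.
state = {"queue": 0, "busy": False, "done": 0}

def arrival(sim):
    state["queue"] += 1

def end_service(sim):
    state["busy"] = False
    state["done"] += 1

def can_start(sim):
    return state["queue"] > 0 and not state["busy"]

def start_service(sim):
    state["queue"] -= 1
    state["busy"] = True
    sim.schedule(2.0, end_service)

sim = ThreePhaseSim()
sim.add_conditional(can_start, start_service)
for i in range(3):
    sim.schedule(i * 1.0, arrival)   # arrivals at t = 0, 1, 2
sim.run(until=10.0)
# All three customers are served: state["done"] == 3
```

The extra cost Pidd describes is visible here: every pass through the main loop rescans the whole conditional list, which is the price paid for avoiding deadlock.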
Electronic Simulation
Electronics simulation software uses mathematical models to replicate the behaviour of an
actual electronic device or circuit. Essentially, it is a computer program that converts a
computer into a fully functioning electronics laboratory. Electronics simulators integrate a
schematic editor, a SPICE simulator, and on-screen waveforms, making "what-if" scenarios
easy and instant. Simulating a circuit's behaviour before actually building it greatly
improves efficiency and provides insight into the behavior and stability of electronic circuit
designs. Most simulators use a SPICE engine that simulates analog, digital, and mixed A/D
circuits for exceptional power and accuracy. They also typically contain extensive model and
device libraries. While these simulators typically have printed circuit board (PCB) export
capabilities, such capabilities are not essential for the design and testing of circuits, which is
the primary application of electronic circuit simulation.
While some electronics circuit simulators are strictly analog,[2] many include both analog
and event-driven digital simulation[3] capabilities and are known as mixed-mode
simulators.[4] This means that any simulation may contain components that are analog, event-
driven (digital or sampled-data), or a combination of both. An entire mixed-signal analysis
can be driven from one integrated schematic. All the digital models in mixed-mode
simulators provide accurate specification of propagation time and rise/fall time delays.
The event-driven algorithm provided by mixed-mode simulators is general-purpose and
supports non-digital types of data. For example, elements can use real or integer values to
simulate DSP functions or sampled-data filters. Because the event-driven algorithm is faster
than the standard SPICE matrix solution, simulation time is greatly reduced for circuits that
use event-driven models in place of analog models.[5]
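The idea of an event-driven element carrying real-valued data can be illustrated with a toy sampled-data filter. This is a sketch, not any simulator's actual API: the element does work only when a sample event arrives, rather than being re-solved at every analog timestep, which is exactly why event-driven models are cheaper.

```python
from collections import deque

class SampledAverager:
    """Hypothetical event-driven element: a 3-tap moving-average filter.

    Instead of participating in a SPICE matrix solution at every
    timestep, the element computes only when a new sample event
    arrives."""
    def __init__(self, taps=3):
        self.window = deque(maxlen=taps)   # most recent samples

    def on_event(self, t, value):
        """Handle one sample event; return the (time, output) event."""
        self.window.append(value)
        return t, sum(self.window) / len(self.window)

filt = SampledAverager()
outputs = [filt.on_event(t, v) for t, v in [(0, 3.0), (1, 6.0), (2, 9.0), (3, 0.0)]]
# outputs: [(0, 3.0), (1, 4.5), (2, 6.0), (3, 5.0)]
```

A real mixed-mode engine would route such output events back into the schedule, where they can drive both digital and analog parts of the schematic.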
Mixed-mode simulation is handled on three levels: (a) with primitive digital elements that use
timing models and the built-in 12- or 16-state digital logic simulator, (b) with subcircuit
models that use the actual transistor topology of the integrated circuit, and (c) with in-line
Boolean logic expressions.
Exact representations are used mainly in the analysis of transmission-line and signal-integrity
problems, where a close inspection of an IC's I/O characteristics is needed. Boolean logic
expressions are delay-less functions used to provide efficient logic-signal processing in an
analog environment. These two modeling techniques use SPICE to solve a problem, while the
third method, digital primitives, uses the mixed-mode capability. Each of these methods has
its merits and target applications. In fact, many simulations (particularly those using A/D
technology) call for a combination of all three approaches; no one approach alone is
sufficient.
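Multi-state digital logic, mentioned above, extends Boolean values with states such as "unknown". As a sketch (the full 12/16-state sets also encode drive strength, which is omitted here), a three-state AND gate can be written as:

```python
def and3(a, b):
    """Three-state AND over '0', '1', and 'X' (unknown) -- a tiny,
    illustrative subset of a 12- or 16-state logic system."""
    if a == '0' or b == '0':
        return '0'      # a controlling 0 forces the output low
    if a == '1' and b == '1':
        return '1'
    return 'X'          # any unknown input leaves the output unknown

# A controlling 0 resolves the output even when the other input is unknown:
# and3('0', 'X') -> '0', but and3('1', 'X') -> 'X'
```

The key property is that a controlling input value ('0' for AND) determines the output even when the other input is unknown, which lets the simulator propagate useful results through partially initialized circuits.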
PLC Simulation
In order to properly understand the operation of a programmable logic controller (PLC), it is
necessary to spend considerable time programming, testing, and debugging PLC programs.
PLC systems are inherently expensive, and down-time is often very costly. In addition, if a
PLC is programmed incorrectly it can result in lost productivity and dangerous conditions.
PLC simulation software is a valuable tool for understanding and learning PLCs, and for
keeping this knowledge refreshed and up to date.[6] PLC simulation gives users the ability to
write, edit, and debug programs written in a tag-based format. Many of the most popular
PLCs use tags, a powerful but more complex method of programming PLCs. PLC simulation
integrates tag-based ladder logic programs with 3D interactive
animations to enhance the user’s learning experience.[7] These interactive animations include
traffic lights, batch processing, and bottling lines.[8]
PLCLogix Traffic Light Simulation
By using PLC simulation, PLC programmers have the freedom to try all the "what-if"
scenarios, changing ladder logic instructions and programs and then re-running the simulation
to see how the changes affect the PLC's operation and performance. This type of testing is
often not feasible with hardwired operating PLCs that control processes often worth hundreds
of thousands, or even millions, of dollars.[9]
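The core of ladder logic evaluation can be sketched in a few lines. The function and the tag dictionary below are hypothetical, not any vendor's API: a rung is a series of contacts (normally-open or normally-closed) that either pass power to a coil or not, with tags holding the named I/O state.

```python
def rung(tags, contacts, coil):
    """Evaluate one ladder-logic rung (illustrative sketch).

    `contacts` is a list of (tag_name, normally_open) pairs wired in
    series; the rung energizes `coil` only if every contact passes
    power.  `tags` maps tag names to boolean states."""
    power = True
    for name, normally_open in contacts:
        state = tags.get(name, False)
        # A normally-open contact passes power when its tag is True;
        # a normally-closed contact passes power when its tag is False.
        power = power and (state if normally_open else not state)
    tags[coil] = power
    return tags

# Run the motor while Start is pressed and Stop (normally closed) is not:
tags = {"Start": True, "Stop": False}
rung(tags, [("Start", True), ("Stop", False)], "Motor")
# tags["Motor"] is now True; pressing Stop would de-energize it.
```

A simulator re-evaluates every rung on each scan cycle against the simulated I/O, which is what makes the "change the logic and re-run" workflow described above safe and cheap.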
Sheet metal forming simulation
Sheet metal forming simulation software uses mathematical models to replicate the
behavior of an actual sheet metal manufacturing process.[citation needed] Essentially, it is a
computer program that converts a computer into a fully functioning metal manufacturing
prediction unit. Sheet metal forming simulation helps metal factories prevent defects in their
production lines and reduces testing and expensive mistakes, improving efficiency in the
metal forming process.[citation needed]
Metal Casting Simulation
Metal casting simulation is currently performed by Finite Element Method simulation
software designed as a defect-prediction tool for the foundry engineer, enabling the casting
process to be corrected or improved even before prototype trials are produced. The idea is
to use this information to analyze and predict results simply and effectively when simulating
different processes such as:
Gravity sand casting.
Gravity die casting.
Gravity tilt pouring.
Low pressure die casting.
High pressure die casting.
The software would normally have the following specifications:
Graphical interface and mesh tools
Mould filling solver
Solidification and cooling solver: Thermal and thermo-mechanical (Casting
shrinkage).
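The solidification and cooling solver in such packages integrates heat flow through the casting over time. As a heavily simplified sketch (real packages use 3D FEM/FVM with phase-change latent heat; this is only a 1D explicit finite-difference step with made-up parameter values), one cooling step can be written as:

```python
def cool_step(T, alpha, dx, dt, mold_T):
    """One explicit finite-difference step of 1D cooling in a casting
    section (illustrative only; real solvers are 3D and far richer).

    T: node temperatures across the section; the mold holds both
    boundaries at mold_T. Stable when alpha*dt/dx**2 <= 0.5."""
    r = alpha * dt / dx**2
    new = T[:]
    new[0] = new[-1] = mold_T
    for i in range(1, len(T) - 1):
        # Discrete heat equation: each node relaxes toward its neighbors.
        new[i] = T[i] + r * (T[i-1] - 2*T[i] + T[i+1])
    return new

# Molten interior between cold mold walls; iterate the cooling step.
T = [20.0] + [700.0] * 5 + [20.0]
for _ in range(200):
    T = cool_step(T, alpha=1e-5, dx=0.01, dt=1.0, mold_T=20.0)
```

Tracking where the last region crosses the solidification temperature in such a field is what lets the solver predict shrinkage defects before any prototype is poured.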
Network Protocol Simulation
The interaction between the different network entities is defined by various communication
protocols. Network simulation software simulates behavior of networks on a protocol level.
Network Protocol Simulation software can be used to develop test scenarios, understand the
network behavior against certain protocol messages, compliance of new protocol stack
implementation, Protocol Stack Testing. These simulators are based on telecommunications
protocol architecture specifications developed by international standards body such as the
ITU-T, IEEE, and so on. The output of protocol simulation software can be detailed packet
traces, events logs etc.
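The packet-trace output mentioned above can be illustrated with a toy exchange. This sketch models a TCP-style three-way handshake purely at the message level (the function and trace format are invented for illustration; no real network stack is involved):

```python
def simulate_handshake():
    """Sketch of a protocol-level simulation: two entities exchange
    handshake messages and the simulator records a packet trace."""
    trace = []

    def send(src, dst, msg):
        # In a real simulator this would also advance a clock and
        # trigger the receiver's state machine; here we only log.
        trace.append(f"{src} -> {dst}: {msg}")

    send("client", "server", "SYN")
    send("server", "client", "SYN-ACK")
    send("client", "server", "ACK")
    return trace

trace = simulate_handshake()
# trace holds the three handshake messages in order.
```

Checking such a trace against the sequence the standard prescribes is the essence of compliance testing for a protocol stack implementation.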
Computer Performance Evaluation
Computers are made of many components, and each component has many different attributes
and may come from different manufacturers; computer performance evaluation is therefore
another application where simulation is of paramount significance, particularly since
experimenting with all the possible scenarios is nearly impossible. Commercial simulation
vendors have caught on to this fact, and two packages offer this application: AnyLogic 5.0
and Visual Simulation Environment.[1]
See also
List of computer simulation software
List of discrete event simulation software
Application Simulation Software
Electronic circuit simulation
Full system simulator
Instruction set simulator
Logic simulation
Microarchitecture Simulation
Network simulation
Process simulation
Training Simulation
Business simulation
Virtual prototyping
References
1. Abu-Taieh, Evon (2007). "Commercial Simulation Packages: A Comparative Study"
(PDF). I.J. of Simulation. 8: 8.
2. Mengue and Vignat, entry at the Université de Marne-la-Vallée
3. P. Fishwick, entry at the University of Florida
4. J. Pedro and N. Carvalho, entry at the Universidade de Aveiro, Portugal
5. L. Walken and M. Bruckner, Event-Driven Multimodal Technology
6. PLC simulation applications
7. Article about PLCLogix
8. Article referencing 3DWorlds