The concept of software architecture and architectural design
study course: Software engineering
lecturer: Mr. van den Hombergh
date: March 23, 2010
To build a framework for a software application it is necessary to have a good and complete specification, that is, to know what the customer wants and how reliable and fast the software system should be. Furthermore, the software architects must have a clear view of all parts of the system: everything should be unambiguous.
Another necessary element for building such a framework is knowledge of the programming language to be used and whether it is object oriented or not. The software architect needs this knowledge to decide which software model should be used and how the data flows through the system. The data flow influences the decomposition of the software.
The choice of a particular software architecture influences several non-functional requirements, such as:
- Performance
- Access protection (security)
For this reason the software architect has to consider which non-functional requirement matters most to the customer. If, for instance, the customer wants a system that can handle many requests or large amounts of data in a short time, the underlying structure must take this into account. This could mean using a particular programming language such as C instead of Java, and/or using a particular model, for example a centralised structure instead of a layered one.
The following sections will discuss those considerations in more detail and hopefully will give you a better
understanding of the whole process.
2 Architectural design decisions
Basically, the architecture of a software system is influenced or determined by several functional and non-functional requirements. These requirements reflect the purpose of the system as described by the customer, which may already suggest a suitable architectural design pattern. These patterns, like Client-Server and Model-View-Controller, are well defined and have proven their value in practice. But since software systems are mostly unique in terms of the customer's needs and conditions, and every architectural pattern comes with advantages as well as disadvantages, the software designer has to deliberate which pattern, alteration or combination of patterns fits his particular situation best.
In particular, the general idea of the Personal Museum Guide is a user-friendly configuration tool for interactive hardware with multimedia involvement and predefined routes. Therefore, a three-tier architecture, consisting of a data, logic and visualisation tier, comes to mind.
In a new project, communication with the customer is essential to get a clear picture of how he expects the result to work and look, in order to define the requirements the system has to meet.
In the case of the Personal Museum Guide, we have to retrieve these requirements from the documentation and determine to which degree the already implemented system meets them. The most obvious architectural decision was the use of the aforementioned three-tier architecture in the form of a GUI for user input, a business logic for processing, and a database for storage and accessibility by the interactive system. This part of the software is triggered by sensors and actuators along a visitor's storyline. The sensors are registered to the currently active roles, which determine the response according to the data retrieved by the sensor, resulting in an interaction with the actuator corresponding to the role associated with the sensor and the visitor's role of choice. Designing this component around the Observer pattern is a common practice that has proven its worth. Since the data is stored in an SQL database running on a web server, one of the business logic's interfaces is the client part of a client-server approach, which offers the advantage of reusing data and predefined data sets. The configuration GUI keeps data management flexible, which facilitates adjustments in the database to reflect changes in the museum's range of exhibits.
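The Observer-based coupling of sensors and roles described above can be sketched as follows. This is a minimal illustration of the pattern only; the class and method names (Sensor, Role, register, trigger) are hypothetical and not taken from the actual Personal Museum Guide code.

```python
class Sensor:
    """Subject: notifies every registered role when it is triggered."""
    def __init__(self):
        self._observers = []

    def register(self, observer):
        self._observers.append(observer)

    def trigger(self, data):
        # Push the new sensor reading to all registered observers.
        for observer in self._observers:
            observer.update(data)

class Role:
    """Observer: reacts to sensor data, e.g. by driving an actuator."""
    def __init__(self, name):
        self.name = name
        self.last_event = None

    def update(self, data):
        # A real role would now trigger its associated actuator.
        self.last_event = data

sensor = Sensor()
pirate = Role("pirate")
sensor.register(pirate)
sensor.trigger("visitor at exhibit 3")
print(pirate.last_event)  # -> visitor at exhibit 3
```

The key property is that Sensor knows nothing about what a Role does with the data, so new roles can be added without touching the sensor code.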
3 System organisation
Within the design phase, a software engineer has to make a decision on the overall organisational model of a system. Some of the common organisational models are the repository, the client-server and the layered model. In this section, the focus lies on these three models because they are the ones mainly used in practice.
3.1. The repository model
The repository model was designed so that sub-systems can exchange information effectively within a system. There are basically two variations. In the first one, data is shared in a central database that can be accessed by all sub-systems. In the second one, each sub-system maintains its own database, and data is exchanged with other sub-systems by passing messages.
Although the first variation is more popular because of its efficiency in sharing large amounts of data, the decision about the actual model has to be made carefully. Because the first variation uses a centralised repository, all sub-systems have to agree on its repository data model. Furthermore, different sub-systems may have different requirements, and the repository model forces the same policy on all of them.
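The first variation, a shared central repository, can be sketched as follows. The names (Repository, SubSystem, put, get) are illustrative assumptions, and an in-memory dictionary stands in for a real database.

```python
class Repository:
    """Central data store shared by all sub-systems (first variation)."""
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

class SubSystem:
    """Each sub-system communicates only through the shared repository."""
    def __init__(self, repo):
        self.repo = repo

# One sub-system writes a record; another independently reads it back.
repo = Repository()
producer, consumer = SubSystem(repo), SubSystem(repo)
producer.repo.put("route", ["room 1", "room 2"])
print(consumer.repo.get("route"))  # -> ['room 1', 'room 2']
```

Note how both sub-systems must agree on the same data model (here, the key "route" mapping to a list of rooms), which is exactly the constraint mentioned above.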
3.2. The client-server model
The client-server model is the standard approach for the distribution of tasks within a network. Tasks are provided by servers, which may run on different computers and can serve multiple clients. The tasks can either be standard tasks or special tasks of a program. Within a client-server model, a task is known as a service. A server is a program that offers such a service. As part of the client-server approach, two different kinds of programs are used: the server and the client. How data is exchanged between the client and the server depends on the service itself. The server is ready at any time to respond to any client. The rules of communication between a service and a client are defined by a protocol, and the protocol is specific to each service.
Clients and servers can run as programs on different computers or on the same machine. In general, the
concept can be expanded into a group of servers, which offer a set of services.
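A minimal client-server interaction can be sketched with standard sockets. The "service" and its "protocol" here are deliberately trivial (echo the request back in upper case) and purely illustrative; both programs run in one process only for demonstration.

```python
import socket
import threading

def serve(server):
    # Server loop: accept one client at a time and apply the service.
    while True:
        try:
            conn, _ = server.accept()
        except OSError:
            break  # server socket was closed, stop serving
        data = conn.recv(1024)
        conn.sendall(data.upper())  # the "protocol": upper-case echo
        conn.close()

# Server: offers the service on a local port (port 0 lets the OS choose).
server = socket.socket()
server.bind(("127.0.0.1", 0))
port = server.getsockname()[1]
server.listen()
threading.Thread(target=serve, args=(server,), daemon=True).start()

# Client: connects, sends a request and reads the response.
client = socket.socket()
client.connect(("127.0.0.1", port))
client.sendall(b"hello server")
reply = client.recv(1024)
client.close()
server.close()
print(reply)  # -> b'HELLO SERVER'
```

In a real system the two programs would run on different machines, and the same server could serve many clients concurrently.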
3.3. The layered model
The layered model structures a system into layers where every layer has a specific purpose. The best
example for such a layered model is the OSI reference model of network protocols.
In the OSI model, the tasks of communication are divided into seven layers. Each layer has a short description stating what this particular layer has to offer. These requirements must be implemented by the communication protocols. The actual implementation is not imposed here and can therefore vary widely.
The fact that it is possible to replace one layer by another, as long as its interface remains unchanged, is a big advantage of this model. The model is therefore changeable and portable, because you only have to change the inner, machine-dependent layers to make it work on different machines.
A disadvantage is the increased complexity and a possible loss of performance: because some requests of the application have to pass through all layers, the communication overhead is increased and the system therefore becomes slower.
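The layering idea above can be sketched with a tiny two-layer stack, where each layer only talks to the layer directly below it. The layer names and the toy "protocol" (encode the text, then prefix it with its length) are illustrative assumptions, not part of any real protocol stack.

```python
class Layer:
    """Base class: a layer only ever calls the layer directly below it."""
    def __init__(self, lower=None):
        self.lower = lower

    def send(self, msg):
        return self.lower.send(msg) if self.lower else msg

class EncodingLayer(Layer):
    def send(self, msg):
        # Presentation concern: turn text into bytes, then pass down.
        return self.lower.send(msg.encode("utf-8"))

class FramingLayer(Layer):
    def send(self, msg):
        # Transport concern: prefix the payload with its 2-byte length.
        return len(msg).to_bytes(2, "big") + msg

# The stack can be rebuilt with different layers as long as each
# replacement keeps the same send() interface.
stack = EncodingLayer(lower=FramingLayer())
frame = stack.send("hi")
print(frame)  # -> b'\x00\x02hi'
```

Swapping FramingLayer for, say, a checksumming layer would not require any change to EncodingLayer, which is precisely the replaceability advantage described above.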
Although these are the three basic organisational models, you can always create a new one by composing existing models into one that meets your requirements best.
4 Modular decomposition styles
Modular decomposition simply means dividing your sub-systems into modules. You can use two main strategies to decompose a sub-system into modules.
The first one decomposes your sub-systems into a set of communicating objects and is called „object-oriented decomposition“.
The second one decomposes your sub-systems into functional modules that accept input data and transform it into output data. This approach is called „function-oriented pipelining“. The following sections will explain these two approaches in more detail.
4.1. Object-oriented decomposition
Object-oriented decomposition structures a system into loosely coupled objects with well-defined interfaces. Because the objects are loosely coupled, the implementation of an object can be modified without affecting other objects. Despite this advantage, the approach has the disadvantage that it is difficult to represent complex entities as objects.
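Loose coupling through a well-defined interface can be sketched as follows; the classes (Catalogue, Printer) and their methods are hypothetical examples, not part of any real system.

```python
class Catalogue:
    """Object with a well-defined interface; its internal storage
    (here a dict) could be swapped for a database without affecting
    any collaborating object."""
    def __init__(self):
        self._items = {}

    def add(self, item_id, title):
        self._items[item_id] = title

    def title_of(self, item_id):
        return self._items[item_id]

class Printer:
    """Collaborates with Catalogue only through its public interface."""
    def label(self, catalogue, item_id):
        return f"Exhibit: {catalogue.title_of(item_id)}"

cat = Catalogue()
cat.add(1, "Ship model")
print(Printer().label(cat, 1))  # -> Exhibit: Ship model
```

Printer depends only on the `title_of` interface, so Catalogue's implementation can change freely, which is the advantage described above.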
4.2. Function-oriented pipelining
Function-oriented pipelining is a process where you divide a sub-system into functional transformations, which take some input and transform it into output. Each processing step is implemented as such a transform and may execute sequentially or in parallel. It is a common architecture for data-processing systems and, in contrast to the object model mentioned before, includes information about the sequence of operations. Function-oriented pipelining is readily understandable by many people and can be implemented either as a concurrent or as a sequential system. The main problem with this approach is that there has to be a common format for data transfer, which each transformation has to agree with.
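A sequential pipeline of such transformations can be sketched like this. The three stages and their common data format (lists of strings) are made up for illustration.

```python
# Each transformation accepts input data and produces output data.
def parse(lines):
    return [line.split(",") for line in lines]

def select_names(rows):
    return [row[0] for row in rows]

def to_upper(names):
    return [name.upper() for name in names]

def pipeline(data, stages):
    # Run the data through every transformation in sequence.
    for stage in stages:
        data = stage(data)
    return data

result = pipeline(["ada,1815", "turing,1912"], [parse, select_names, to_upper])
print(result)  # -> ['ADA', 'TURING']
```

The common-format problem is visible here: every stage must agree on what the data looks like, e.g. `select_names` only works if `parse` hands it a list of rows.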
5 Control styles
Structural models decompose a system into sub-systems, but should not be concerned with the control flow between those sub-systems. This is where control models come into play. There are two generic control styles, which will be discussed in the next two sections.
5.1. Centralised control
There are mainly two models that belong to the centralised control approach: the „call-return model“ and the „manager model“. They differ only slightly and have in common that there is a system controller responsible for managing the execution of the other sub-systems.
The call-return model is the well-known top-down subroutine model, where control starts at the top of a subroutine hierarchy, passes to lower levels and then returns to the control of its parent. Its main domain is sequential systems.
The manager model fits concurrent systems best, but can also be applied to sequential ones. One component acts as the system manager and coordinates the other system processes; in this case, a process is a sub-system or a module. Usually the controller loops continuously and checks the state of the other sub-systems to decide how to coordinate the processes.
5.2. Event-driven control
In contrast to the centralised control model, the event-driven control model works with externally generated events. In this case, an event may be anything that happens outside the control of the process that handles it. Again, there are two different models available. In the first one, an event is broadcast to all sub-systems that have declared interest by registering with an event handler for a specific event. That approach is called the „broadcast model“.
The second model works with so-called „interrupts“. This approach is mainly used in real-time systems, where a very fast reaction is essential. Each interrupt type is mapped to a particular interrupt handler, which coordinates the different processes. This allows a very fast response, but it also has some disadvantages, like a limited number of interrupt types due to hardware limitations, or increased complexity.
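The mapping of interrupt types to handlers can be sketched with a dispatch table. The interrupt numbers and handler names are made up; on real hardware the table would be a fixed-size interrupt vector, which is where the limited interrupt count comes from.

```python
log = []

def timer_handler():
    log.append("tick")

def button_handler():
    log.append("button pressed")

# Interrupt vector: each interrupt type maps to exactly one handler.
interrupt_vector = {0: timer_handler, 1: button_handler}

def raise_interrupt(irq):
    # Hardware would index the vector directly; a dict stands in here.
    interrupt_vector[irq]()

raise_interrupt(1)
raise_interrupt(0)
print(log)  # -> ['button pressed', 'tick']
```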
6 Reference architectures
Reference architectures are a great tool for evaluating an architectural design before it has been deployed.
Reference models belong to the so-called „domain-specific architectural models“, which are common architectural structures that have been derived from instances of architectures in a particular application domain. These domain-specific architectural models are divided into reference models and generic models.
Generic models are abstractions from a number of real systems; they encapsulate only the principal characteristics of these systems.
Reference models are more abstract than that and give an idealised idea of the architecture, which includes all the features that such a system might offer.
7 Conclusion / Reflection
The result of our report about architectural design is that it is really important to design a system before the software system is implemented, because the design is an abstract representation of the implemented system that hides technical details. Due to that, the developers get a better overview of how the system should look and can use this design to discuss the system. The effect is a better-structured system.
Another reason why it makes sense to design the software before implementing it is that the underlying structure strongly influences the non-functional requirements (as mentioned before). So implementing a system without having an idea of how the system structure should look could lead the project into serious trouble.
If the system is implemented using a good design and the system fits the customer's requirements, the design can be reused for later projects too. If the design can be reused, a later system may be developed faster and with less effort.
To implement a system it is important to know how events and data should be transmitted and handled. The design forces the developers to stick to a certain model (e.g. event-driven control). If the design does not specify the model used in the system, the developers could implement the system in different and possibly incompatible ways.
The customer needs to know which non-functional requirements the system fulfils. If the system is a very fast one, the customer should be informed about this. The technologies used and the underlying structure should perhaps also be described (in text form) in order to give the customer the possibility to have another team maintain the software. If it is an external team, the customer can hand this document over to the team, and the team can get familiar with the system faster.
During the design process of a software system, the programmers learn the dependencies between the classes and objects in the project. This helps to get a clear view of how the system should be implemented. In addition, the programmers know how the system is planned to be decomposed, so they can take care that the system is divisible into sub-systems that can work on their own.
The control models mentioned above are also part of the design phase of a software system: the programmers learn which one should be implemented and can stick to it.