My project complexity model -- one year after I initially had the idea. The purpose is to help Project Managers define what processes and techniques should be applied to their particular project. The model is coded and working, but "not ready for prime time."
Applying Systems Thinking to Solve Wicked Problems in Software Engineering, by Majed Ayyad
Software systems are essentially socio-technical systems, and they are not isolated from other systems engineering processes. Unconsciously or by intention, we apply systems thinking in multi-agent systems, microservices, DevOps, distributed systems, API-led integrations and lean-based software development life cycles. However, the concrete relationship between systems thinking and software engineering is still a largely unexplored area and is rarely highlighted as a common practice among software engineers. In this presentation, we will elaborate on how systems thinking helps us understand the socio-technical aspects of software engineering, discuss why systems thinking is important in the field of software engineering, provide examples of where it is currently used, and show the general areas where systems thinking applies to tackling complex software problems.
Mastering the Unpredictable - Improving Knowledge Work, by AdaPro GmbH
The well-known management expert Peter F. Drucker said that knowledge-worker productivity is the most important value driver for companies of the 21st century. More and more companies are realizing that better support for knowledge work is the key factor in creating unique value.
Adaptive case management, as a method and technology for managing unpredictable knowledge-worker processes, is challenging the status quo to fill this gap. Traditional process management does not fit knowledge workers because it is too inflexible; it is like a virtual assembly line. Adaptive Case Management, however, opens the world of ad-hoc workflows and autonomous decisions to process management and thus achieves productivity in knowledge work.
Details: http://www.semigator.de/schulungen/Adaptive-Case-Management-On-Demand-Videos-1403025-0
HCI LAB MANUAL
1 To understand the trouble of interacting with machines - Redesign interfaces of home appliances.
2 Design a system based on user-centered approach.
3 Understand the principles of good screen design.
4 Redesign existing Graphical User Interface with screen complexity
5 Design Web User Interface based on Gestalt Theory
6 Implementation of Different Kinds of Menus
7 Implementation of Different Kinds of Windows
8 Design a system with proper guidelines for icons
CompTIA exam study guide presentations by instructor Brian Ferrill, PACE-IT (Progressive, Accelerated Certifications for Employment in Information Technology)
"Funded by the Department of Labor, Employment and Training Administration, Grant #TC-23745-12-60-A-53"
Learn more about the PACE-IT Online program: www.edcc.edu/pace-it
Agile Release Planning, Metrics, and Retrospectives, by TechWell
How do you compare the productivity and quality you achieve with agile practices with that of traditional waterfall projects? Join Michael Mah to learn about both agile and waterfall metrics and how these metrics behave in real projects. Learn how to use your own data to move from sketches on a whiteboard to create agile project trends on productivity, time-to-market, and defect rates. Using recent, real-world case studies, Michael offers a practical, expert view of agile measurement, showing you these metrics in action on retrospectives and release estimation and planning. In hands-on exercises, learn how to replicate these techniques to make your own comparisons for time, cost, and quality. Working in pairs, calculate productivity metrics using the templates Michael employs in his consulting practice. You can leverage these new metrics to make the case for changing to more agile practices and creating realistic project commitments in your organization. Take back new ways for communicating to key decision makers the value of implementing agile development practices.
Analysis, or planning, involves meeting with employees, clients, and consultants to work out how the product could be better than its competitors. After studying this information, one of three options is chosen: develop a new system, improve the current system, or, if neither is possible, leave the system as it is. The planning stage is the preliminary step for a successful system: first the problems must be identified, along with how to solve them, the objectives, the resources, the required cost, and so on. System design is the second step. Here a feasibility study is needed to establish the requirements of the end users (the customers) and their expectations of the system; it is vital to maintain strong communication with the customers to ensure that the finished product fulfils its required level of function. The design phase begins once there is a good understanding with the customer; it defines the elements of the system, the level of security, and the different types of data the system needs. A general system design may be completed on paper. After the design phase the system requires an implementation process, in which the system delivers what was promised to the customer; the system is then ready to run, and training may or may not be required. This phase may take a long time, depending on the complexity of the system.
Online job placement system project report (PDF), by Kamal Acharya
Our project, the Expert.Com Job Placement System, has been designed to help the millions of unemployed youth get in touch with major companies, helping them find the right kind of jobs and helping the companies find appropriate candidates for appropriate jobs.
This article describes how the MD Anderson Cancer Center launched its "moon shot" program, which cost a great deal, almost 62 million dollars, and how the cognition-based approach was put on hold because it cost too much. Cognitive technology copies functions that the human brain performs in multiple ways, such as recognising patterns and processing language.
Different projects were launched, and while some companies want to bring about dramatic changes through AI, which is not yet possible, the low-hanging-fruit projects that are easy to develop operate successfully.
PROBLEMS ARE THE GOLDEN EGGS
Problems? Day by day in our professional lives we face many problems, but we often do not recognise them, because we have become used to facing them. If we want to solve a problem, we must first acknowledge, "Yes, I am facing a problem"; only then do we have a chance to solve it. After that we should find out whether it is a repetitive problem or a new one, and on that basis take further steps: how to break it down, how to analyse it, how to find a countermeasure, how to check whether the countermeasure is suitable, and how to make it a standard.
In today’s global marketplace, successful companies must be able to integrate and quickly view quality audit information from their manufacturing sites all over the world. This strategic capability has become even more important as manufacturers have moved offshore and have become more complex. The value and immediacy of quality assurance data is a critical element to the survival of competitive manufacturing organizations. Software systems can address these issues.
Source:
Lyons Information Systems, Inc
http://www.lyonsinfo.com
Keene Systems' latest whitepaper simplifies the process of planning a software project by comparing it with the phases of building a house. To simplify it even further, Keene also developed a clever infographic that visually walks the viewer through the 10-step process as a conversation between a construction worker and a programmer.
Four major causes of difficulty in gathering system requirements and business requirements; reasons projects were abandoned. Three generations of system development: 1. direct contact, 2. business analyst, 3. team based.
The Basics of Process Automation - A White Paper from WorkiQ, by OpenConnect
This short whitepaper from WorkiQ explains how different methods of process automation can improve operational efficiency within your company.
You will learn:
• How to identify opportunities for automation with high ROI
• The three approaches to automation (macros, scripts, and robots)
• How to measure the impact of automation projects
To learn more about process automation and workforce analytics, please visit www.workiq.com
Data Centers - Striving Within A Narrow Range - Research Report - MCG - May 2..., by pchutichetpong
M Capital Group ("MCG") expects to see growing demand and an evolving supply picture, facilitated by institutional investment rotating out of offices and into work from home ("WFH"), alongside the ever-expanding need for data storage as global internet usage grows, with experts predicting 5.3 billion users by 2023. These market factors will be underpinned by technological changes, such as progressing cloud services and edge sites, allowing the industry to expect strong annual growth of 13% over the next 4 years.
Whilst competitive headwinds remain, represented through the recent second bankruptcy filing of Sungard, which blames “COVID-19 and other macroeconomic trends including delayed customer spending decisions, insourcing and reductions in IT spending, energy inflation and reduction in demand for certain services”, the industry has seen key adjustments, where MCG believes that engineering cost management and technological innovation will be paramount to success.
MCG reports that the more favorable market conditions expected over the next few years, helped by the winding down of pandemic restrictions and a hybrid working environment will be driving market momentum forward. The continuous injection of capital by alternative investment firms, as well as the growing infrastructural investment from cloud service providers and social media companies, whose revenues are expected to grow over 3.6x larger by value in 2026, will likely help propel center provision and innovation. These factors paint a promising picture for the industry players that offset rising input costs and adapt to new technologies.
According to M Capital Group: “Specifically, the long-term cost-saving opportunities available from the rise of remote managing will likely aid value growth for the industry. Through margin optimization and further availability of capital for reinvestment, strong players will maintain their competitive foothold, while weaker players exit the market to balance supply and demand.”
Empowering the Data Analytics Ecosystem: A Laser Focus on Value
The data analytics ecosystem thrives when every component functions at its peak, unlocking the true potential of data. Here's a laser focus on key areas for an empowered ecosystem:
1. Democratize Access, Not Data:
Granular Access Controls: Provide users with self-service tools tailored to their specific needs, preventing data overload and misuse.
Data Catalogs: Implement robust data catalogs for easy discovery and understanding of available data sources.
2. Foster Collaboration with Clear Roles:
Data Mesh Architecture: Break down data silos by creating a distributed data ownership model with clear ownership and responsibilities.
Collaborative Workspaces: Utilize interactive platforms where data scientists, analysts, and domain experts can work seamlessly together.
3. Leverage Advanced Analytics Strategically:
AI-powered Automation: Automate repetitive tasks like data cleaning and feature engineering, freeing up data talent for higher-level analysis.
Right-Tool Selection: Strategically choose the most effective advanced analytics techniques (e.g., AI, ML) based on specific business problems.
4. Prioritize Data Quality with Automation:
Automated Data Validation: Implement automated data quality checks to identify and rectify errors at the source, minimizing downstream issues.
Data Lineage Tracking: Track the flow of data throughout the ecosystem, ensuring transparency and facilitating root cause analysis for errors.
5. Cultivate a Data-Driven Mindset:
Metrics-Driven Performance Management: Align KPIs and performance metrics with data-driven insights to ensure actionable decision making.
Data Storytelling Workshops: Equip stakeholders with the skills to translate complex data findings into compelling narratives that drive action.
Benefits of a Precise Ecosystem:
Sharpened Focus: Precise access and clear roles ensure everyone works with the most relevant data, maximizing efficiency.
Actionable Insights: Strategic analytics and automated quality checks lead to more reliable and actionable data insights.
Continuous Improvement: Data-driven performance management fosters a culture of learning and continuous improvement.
Sustainable Growth: Empowered by data, organizations can make informed decisions to drive sustainable growth and innovation.
By focusing on these precise actions, organizations can create an empowered data analytics ecosystem that delivers real value by driving data-driven decisions and maximizing the return on their data investment.
Techniques to optimize the pagerank algorithm usually fall in two categories. One is to try reducing the work per iteration, and the other is to try reducing the number of iterations. These goals are often at odds with one another. Skipping computation on vertices which have already converged has the potential to save iteration time. Skipping in-identical vertices, with the same in-links, helps reduce duplicate computations and thus could help reduce iteration time. Road networks often have chains which can be short-circuited before pagerank computation to improve performance. Final ranks of chain nodes can be easily calculated. This could reduce both the iteration time, and the number of iterations. If a graph has no dangling nodes, pagerank of each strongly connected component can be computed in topological order. This could help reduce the iteration time, the number of iterations, and also enable multi-iteration concurrency in pagerank computation. The combination of all of the above methods is the STICD algorithm. [sticd] For dynamic graphs, unchanged components whose ranks are unaffected can be skipped altogether.
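A minimal Python sketch of just one of the optimisations named above, skipping vertices whose rank has already converged. It is not the STICD algorithm; the graph representation, tolerance, and (ignored) handling of dangling nodes are illustrative assumptions.

# Skip recomputation of vertices whose rank change has fallen below a tolerance.
# `graph` maps each vertex to a list of its out-neighbours; dangling nodes are
# ignored here for brevity.
def pagerank_skip_converged(graph, damping=0.85, tol=1e-6, max_iters=100):
    vertices = list(graph)
    n = len(vertices)
    in_edges = {v: [] for v in vertices}      # reverse adjacency for pull-style updates
    for u, outs in graph.items():
        for v in outs:
            in_edges[v].append(u)
    rank = {v: 1.0 / n for v in vertices}
    converged = set()                         # vertices we stop recomputing
    for _ in range(max_iters):
        active = False
        for v in vertices:
            if v in converged:
                continue                      # the work-saving step
            new_rank = (1 - damping) / n + damping * sum(
                rank[u] / len(graph[u]) for u in in_edges[v]
            )
            if abs(new_rank - rank[v]) < tol:
                converged.add(v)
            else:
                active = True
            rank[v] = new_rank
        if not active:
            break                             # every remaining vertex has settled
    return rank

# Example on a tiny 4-vertex graph.
print(pagerank_skip_converged({"a": ["b"], "b": ["c"], "c": ["a", "d"], "d": ["a"]}))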
Explore our comprehensive data analysis project presentation on predicting product ad campaign performance. Learn how data-driven insights can optimize your marketing strategies and enhance campaign effectiveness. Perfect for professionals and students looking to understand the power of data analysis in advertising. for more details visit: https://bostoninstituteofanalytics.org/data-science-and-artificial-intelligence/
Systems Development Life Cycle
When considering the systems development life cycle it is important to
think of the different stages as a continually developing process rather
than each stage being an end in itself.
When one of the stages is completed, work on the following stages may mean that previous ones need to be considered again. If the problem definition agreed with the end user later proves impossible to implement, then the problem will have to be redefined. If the required input proves not to be feasible, then it may be necessary to alter the expected output. If, during the training of the staff, one of the input operators proves to be colour blind, then the input screens may need to be altered so that there are no clashes between red and green.
This reliance of each stage on the following stages, as well as on the previous ones, is why the process is said to be iterative.
1.7.b Problem Definition
It is important from the outset to ensure that when a computer system is being designed all those
that are involved are agreed about the aims of the system.
There will normally be a person, or a company or organisation, that decides that a task would
benefit from computerisation. This belief normally arises because there is a problem which
cannot be solved by previously used methods or similar problems seem to have been solved by
computerisation elsewhere. Unfortunately, the owner of the problem probably understands the
problem itself quite well, but does not understand the consequences of using computers to try to
solve the problem, or, indeed, whether such a solution is even possible. For this reason it is
necessary for the organisation to employ a specialist who does understand computers and
computerised solutions to problems. This is the systems analyst. Unfortunately, it is probable that
the systems analyst is not expert in the area of the problem.
The system analyst’s job is to solve the problem by planning and overseeing the introduction of
computer technologies. The owner of the problem will be happy if the analyst can introduce a
computer solution. The problem arises when the analyst, who doesn’t know very much about the
business, solves the problem that they think needs solving, while the owner of the problem
expects a very different problem to be solved.
It seems obvious that if someone is doing something for someone else, they should make sure that they know what is required. If you are asked to answer questions 1 to 5 and you do 6 to 10
instead there is an obvious breakdown in communication, but this is what can happen if you only
listen to the instruction, “Do the 5 questions”. The method most often used to overcome this
problem is for there to be discussions between all the interested parties, and then for a list of
objectives to be written up. This list of objectives, if they are all solved, will be the solution to
the problem. The different people involved then agree to the list of the objectives and the success
or otherwise of the project depends on the completion of those objectives.
The definition of the problem is the most important part of the analysis because if it is not done
correctly the wrong problem may be solved.
1.7.c A Feasibility Study.
When the organisation and the systems analyst have agreed the definition of the problem, a
decision must be made about the value of continuing with the computerised solution. The
organisation may be convinced that there is a problem to be solved, and that its solution will be
worth the effort and the expense. However, the systems analyst is being paid to look at the
problem, and its solution, from another point of view. The analyst is an expert in computer
systems and what is possible with computer systems. This analyst must consider the problem
from the point of view of the computerised part of the solution and make a report to the
organisation saying whether the solution is possible and sensible. This report is called the
feasibility study because it says whether the solution is feasible.
The feasibility study will consider the problem, and the proposed solution, from a number of
points of view.
Is the solution technically possible? A car firm may decide that if robots are used on the
production line to assemble the different parts of engines then quality of work may improve.
However, if it is not possible to manufacture a robot arm of the correct dimensions to fit into
the small areas under a car bonnet, it doesn’t matter how big an improvement there would be
in the quality of the finished product, it would simply not be feasible.
Is the solution economic to produce? Perhaps the robots exist that can be programmed to
assemble this engine and the benefits will be worthwhile. However, if the cost of the robots
and the program to control them is so great that it puts the car manufacturer out of business,
the introduction of the robots is not feasible.
Is the solution economic to run? It has been decided that there are tremendous benefits in
producing a robot production line and the robots are very cheap to buy, but they have so
many electric motors, and they are so slow at assembling the engines that it is cheaper to
employ people to do the job.
What is the effect on the human beings involved? If the car plant is the only major employer in the region, and the introduction will put much of the workforce out of work, then the employer may decide that the human cost is too great; the Government certainly would.
Is the workforce skilled enough? It will be necessary to employ highly skilled technicians to
look after the robots. If there are no workers with the necessary skills then the
computerisation is not feasible.
What effect will there be on the customer? If the customer is likely to be impressed with the
introduction of the new systems then it may be worth the expense, however, if the customer
will notice no difference, what is the point?
Will the introduction of the new systems be economically beneficial? Put bluntly, will the
introduction of robots increase the profits made by the firm?
If the feasibility study shows a positive result which is accepted by the company, the next stage
will be for the analyst to collect as much information about the process as possible.
1.7.d Information Requirements of a System and Fact Finding
When a computer system is being designed it is necessary to ensure that the designer of the
system finds out as much as possible about the requirements of the system. We have already
mentioned the importance of defining the problem but further difficulties can arise. Imagine an
organisation that commissions an analyst to design a new payroll program. The analyst and the
boss of the organisation agree that the software needs to be able to communicate with the
relevant banks, and to deduct tax at source and other important details. The analyst designs the
solution and the software is written. It is the best payroll program ever produced and is installed
into the computer system ready to do the first monthly payroll. Unfortunately, many of the
workforce are employed on a weekly basis. A simple question to one of those paid in this way
would have highlighted this for the analyst and meant that the costly mistake was avoided. When
a system’s requirements are being studied there is no room for errors caused by lack of
information.
The feasibility study has been accepted so the analyst now needs to collect as much information
as possible. Obvious methods of collecting information are to ask different types of people. It
may not be feasible to ask everyone in the organisation for their views, but a representative
sample of workers, management, customers should be given the chance to supply information.
After all, the part time worker on the production line probably knows more about the business
than the analyst. There are three accepted methods of finding out people’s views:
1. By interview. Interviews are particularly important because they allow the interviewee to talk at length, and also to depart from a prepared script. However, they are very time consuming and consequently restrict the number of people whose views can be sought.
2. By using questionnaires. Questionnaires make it possible to find out the views of a large
number of people very quickly, but because the questions are pre-determined the person who is
supplying the answers may find difficulty in putting their point of view across.
3. A compromise is to hold a group meeting. This allows a number of people to discuss points
and make their views known and yet cuts down on the amount of time spent in interviews getting
the same answers over and over again. The problem with meetings is that one or two people tend
to dominate, not allowing all the members of the group to give their opinions.
Often the views of the people connected with the problem are clouded by years of familiarity, so
it is important for the analyst to also gain first hand knowledge of any existing systems. One way
to do this is to observe the current system in action. Care must be taken to take into account the
fact that if the workers know that they are being observed it is unlikely that they will behave in
their normal manner. This is the same effect as a television camera has on otherwise fairly
sensible individuals. The second method is to collect printed documentation and study it in order to find out what data is required by the system and in what form information is output.
1.7.e Requirements Specifications
The planning of any system design must start by deciding what the requirements are of the
system. A system may need to store data for future reference or processing. However, simply
being aware that the system may need to store data is not enough.
Decisions need to be made about the types of data to be held as this will dictate the form that
the data will be stored in and the amount of storage space required for each set of
information.
Calculations as to the number of sets of data that are going to be held have to be made, because the volume of storage will make some storage devices more sensible than others. Also, the volume of storage can affect the structures that will be used to store the data (a rough worked example of such a calculation is sketched after this list).
Decisions need to be made about the relative importance of the different ways of accessing
the data. Is it going to be necessary to access individual items of data or will all the data be
accessed at the same time?
Will the data be changed regularly or is it fairly static?
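As a hedged illustration of the storage calculation mentioned above, the following sketch estimates the space needed for a small stock file. The record layout, field sizes, and record count are hypothetical values chosen purely for illustration; they are not taken from the text.

# Back-of-envelope storage estimate for a hypothetical stock file.
FIELDS = {                        # field name -> bytes reserved per record
    "item_code": 8,
    "description": 40,
    "quantity_in_stock": 4,
    "reorder_level": 4,
    "manufacturer_id": 8,
}
RECORD_SIZE = sum(FIELDS.values())            # bytes needed for one stock record
NUM_RECORDS = 1000                            # expected number of sets of data held
total_bytes = RECORD_SIZE * NUM_RECORDS

print(f"{RECORD_SIZE} bytes/record x {NUM_RECORDS} records = "
      f"{total_bytes} bytes (about {total_bytes / 1024:.1f} KiB)")
# 64 bytes/record x 1000 records = 64000 bytes (about 62.5 KiB)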
When decisions are made as to the direction that a particular system is going to take, it is normal
to produce a list of tasks that need to be carried out to complete the system. These tasks should
be in a specific order. It would not make sense to consider inputting data into the system before
designing the data structure into which the data is to be put. This list is called a priority list and it
forms the basis of the system design. Each of the tasks that are in the list should then be
considered separately to decide the important points about each one. An example would be that
the file of information on stock being held in a business must allow:
for 1000 items (immediately putting a lower limit on the size of appropriate storage);
for direct searching for information on a single item (meaning that some storage media are not suitable, and that some data structures cannot be used);
for another file, of manufacturers, to be accessed from each record in the stock file to facilitate reordering (forcing one field to act as a link between the two files).
(A small code sketch of these constraints follows after this paragraph.)
Plenty of other facts must be considered and decisions made, but a set of constraints has already
been placed on the design. This is known as the design specification and it should be agreed
between the analyst and the organisation before the design is implemented.
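A minimal sketch, using in-memory Python dictionaries in place of real files, of the constraints in the stock-file example above: direct searching on a single key field, and a field linking each stock record to a separate manufacturers file. All field names and sample records are invented for illustration.

# Two "files" keyed by their key fields; manufacturer_id is the link field.
stock_file = {
    "ST-0001": {"description": "M6 bolt", "quantity": 240, "manufacturer_id": "MF-17"},
    "ST-0002": {"description": "M6 nut",  "quantity": 90,  "manufacturer_id": "MF-17"},
}
manufacturer_file = {
    "MF-17": {"name": "Acme Fasteners", "phone": "01234 567890"},
}

def reorder_details(item_code):
    """Direct access on the stock key field, then follow the link field."""
    item = stock_file[item_code]                        # direct search for one item
    maker = manufacturer_file[item["manufacturer_id"]]  # the link between the two files
    return item["description"], maker["name"], maker["phone"]

print(reorder_details("ST-0002"))   # ('M6 nut', 'Acme Fasteners', '01234 567890')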
Often, the inputs and outputs to a system and the storage and processing that goes on in the system are such that, in order to understand it, the system needs to be divided up into a number of
interconnected sections, or modules, each one being simple enough to be handled as an entity. A
diagram can be drawn which shows all the sections and how they fit together. Such a diagram is
called a Jackson diagram.
Jackson diagrams:
A Jackson diagram starts with the original problem as the highest level. The next, and
subsequent, levels show how the problems in the previous levels are split up to form smaller,
more manageable, problems. This continues until each of the blocks in the lowest levels is a self-contained, easily solvable problem. These individual problems can then be solved and combined
according to the links that have been used. If the links between the different blocks are then used
correctly, the result will be a solution to the original problem. Imagine a Jackson diagram as a
method for showing the individual modules that go to make up a solution to a problem and the
relationships between them.
(Diagram: a generic Jackson structure, with the original problem at the top level, links running down to the lower levels, and the individual modules at the final level.)
The links between the modules can be conditional. For example if the result of a module was a
mark out of 100, then there could be two possible print routines, one for failure if the mark was
below a certain level, and one for pass if it was above the pass mark.
An example of a Jackson diagram to show the solution to a simple problem.
A register is taken of the 25 students in a class. Each student can be either present or absent. It is
necessary to print out the number of students who are present and the number absent.
Data flow diagrams:
These are diagrams that are used to describe systems. Boxes are used to stand for input,
processes, storage, and output. Arrows show the direction of communication around the system,
and communications outside the system. As the name implies, these diagrams are used to show
the directions of flow of data from one part of a system to another. Data flow diagrams can have
complex shapes for the boxes that are used, but the important thing is not the shapes of the
boxes, rather the logical train of thought that has been used to produce the diagram. Rectangular
boxes, though not always used, are perfectly acceptable for all elements in a data flow diagram.
Such diagrams are intended to show how the processes and the data interrelate, not the details of
any of the programming logic.
Note: A Systems Flowchart is another name for a data flow diagram.
Students are expected to be able to follow the logic in a data flow diagram and, indeed, produce
their own, but not under examination conditions because the drawing of such a diagram would
take too much of the time allowed for the exam.
At this stage the analyst will also identify the hardware that will be necessary for the system to
operate as the user requires, and the software characteristics that will need to be satisfied for the
system to operate.
(Jackson diagram for the register example: an Initialise module sets Count = 0, Cpresent = 0 and Cabsent = 0; an Input module, iterated while Count < 25, records each student as present or absent and increments Cpresent or Cabsent accordingly; an Output module then prints Cpresent and Cabsent.)
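A minimal sketch of the register example above, written as plain Python rather than as a Jackson diagram; the variable names follow the labels used in the diagram, and the sample input list is invented.

CLASS_SIZE = 25

def count_register(marks):
    """marks: a list of CLASS_SIZE strings, each either 'present' or 'absent'."""
    count = 0                            # Initialise module
    cpresent = 0
    cabsent = 0
    while count < CLASS_SIZE:            # Input module, repeated 25 times
        if marks[count] == "present":
            cpresent += 1                # Increment Cpresent branch
        else:
            cabsent += 1                 # Increment Cabsent branch
        count += 1
    return cpresent, cabsent             # Output module: Cpresent and Cabsent

present, absent = count_register(["present"] * 22 + ["absent"] * 3)
print(f"Present: {present}  Absent: {absent}")   # Present: 22  Absent: 3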
Design of Input, Output, and Data Structures
Input Design
All systems require input. The way that the data is input to the system depends on a number of
factors.
The data that is required. Is it graphical/textual/physical in nature? Is the data already in
existence or does it need to be collected first?
The hardware that is available. Is the data to be entered via a keyboard by an operator, or is
there an automatic way to enter the data?
The experience of the operator.
The design of the user interface.
This section relates back to section 1.2.c which covered the different forms of software interface.
The main task in input design is to design the interface with the outside world so that data can be
passed to the system, both software and hardware.
Diagrammatic depiction of the system processing is covered in section 1.7.e, above. It is not
necessary to go into great detail about such diagrams, but students should be aware that they
exist.
Output Design.
The results that are produced by the system must be presented in a way that is appropriate for the
application. If the system is designed to produce bank statements for customers, then it would
not be sensible to have an audio output. Similarly, a burglar alarm system would not serve the
purpose for which it had been designed if the output is a message on a computer screen saying
that someone has broken in as there is probably no-one in the house to read it, which is why the
alarm was needed in the first place.
The decision about the type of output will depend greatly upon the same factors as the input,
namely, the hardware available, the form that the output needs to be in, the experience of the
operator, indeed, whether there will be an operator present.
Equally important to giving enough information is the danger of providing too much. In order for users to be able to understand the information presented, various tricks can be used.
Information can be arranged on a monitor screen by always putting the same sort of
information in the same place. The operator will then quickly become accustomed to the
relative importance of different areas of the screen.
Information can be colour coded. Important information may appear in red while that which
is less important is in black. Notice that this implies some form of decision making on the
part of the processor to determine what is important in the first place. It is also necessary to
be very careful about choice of colours. People who are colour blind commonly find
difficulty distinguishing between red and green, so choosing these colours to stand for
information that is dangerously near a limit or at a safe level, could be disastrously wrong.
Similarly, colour combinations have to be chosen carefully. Blue writing on a black
background is almost impossible to see, as is yellow writing in some lighting conditions.
Video reversal can be used to highlight a particular piece of information effectively. This is
when the normal writing on the screen is black on a white background, but the piece that
needs to stand out is shown as white on a black background.
Very important pieces of information may be shown as a dialogue box obscuring the rest of
the screen until it is dealt with.
A printer may be reserved for special messages so that a hard copy of the information is
preserved. Again, the fact that the information appears on that printer means that it has a
particular importance.
Information can be made to flash, or can be printed in a different size, anything that makes
the operator’s eye go to that part of the screen.
Data structure design.
The data used in a computer solution will need to be stored somewhere. The storage is not that
important. What is important is getting it back again when it is needed. In section 1.4, the data
structures array, queue, stack, linked list, were described. When the solution is designed it is
necessary to decide how access to the data will be handled and to choose an appropriate data
structure. Questions on this part of the syllabus will tend to ask for the sort of choices that need
to be made rather than any complex analysis of fitting a data structure to a given situation,
though simple examples like modelling a queue at a check out till may be asked.
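A minimal sketch of the "queue at a checkout till" example mentioned above, modelling the queue with Python's collections.deque; the customer names and the choice of deque are illustrative assumptions, not a prescribed design.

from collections import deque

checkout = deque()                # the queue: first in, first out
checkout.append("Alice")          # customers join at the back of the queue
checkout.append("Bilal")
checkout.append("Chen")

while checkout:
    customer = checkout.popleft() # serve whoever has waited longest
    print(f"Serving {customer}")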
1.7.f System Evaluation
Any system must match certain criteria if it is to be considered successful. This does not only
apply to a computer system but any system. For example, a car must satisfy various criteria
before being able to be called a car
it must move under its own power
it must be possible to steer it
it must have 3 or 4 wheels
it must have seats.
There would be many other facts which would have to be true before most of us would recognise
that this system is a car. However, some assumptions have been made in just the four criteria
mentioned here. A toy, radio controlled, car fits all these criteria, but it was not the sort of system
that we had in mind when designing the criteria. Perhaps the second bullet point should have
specified that it has to be controlled from within the car itself, or should there be a new criteria
that gives a minimum size for the vehicle. When systems are being designed, this list of criteria,
that the finished system must match, is of paramount importance. It must also be agreed between
the designer of the system and the commissioner of the design, so that there will be no
unfortunate misunderstandings. We can imagine a situation where the Ford motor company
commission a new car design, only to find that it is 30 cms long when delivered.
Note that these criteria are decided before the system is created. They are used to decide how
well the system works. In other words, does it do what it is meant to? This question can only be
answered if it is known what the system was meant to do in the first place.
1.7.g Documentation
The documentation of a system consists of all the text and graphics that explain how the system
was produced, how it should be used, and how it can be maintained.
Documentation is created at different stages in the life of a system and for different people to
use. Some of the different types of documentation are met in this section and others will be
encountered later in the course. Indeed, much of the project work that is done in module 4
consists of producing the documentation for a problem solution.
Requirements specification
This is a list of the requirements of the customer for whom the system is being designed. It
consists, largely, of the criteria that will be used for the evaluation of the finished system that
were discussed in section 1.7.f. It is usual for the system analyst and the customer to sign the list
of requirements so that there is no confusion when the work is finished.
Design specification
Taking the requirements specification and working out the stages necessary to produce the
required end product is known as the design specification. This will include the different stages,
often shown in diagrammatic form as mentioned earlier in this chapter, and also the criteria for
each stage of the solution. For example, one part of a solution may be the production of a file of
data. The ways that this file of data relates to the other parts of the system, and the specification
of the file (what is the key field? How many records will there be? What type of medium should
it be stored on?) make up the design specification.
Program specifications
These will include detailed algorithms showing the method of solution for some of the parts of
the problem. These algorithms may well be in the form of flow diagrams or pseudo code. The
language to be used, the data structures necessary, and details of any library routines to be used
will also be in the program specification.
Technical documentation
The technical documentation will include the program specifications, the coded program itself, and details of hardware configurations: generally, anything that will aid a technician in maintaining or updating a system. Indeed, technical documentation is otherwise known as maintenance
documentation. This type of documentation is not intended to be accessible to the user of the
system, who does not need any of these details to be able to use the system correctly.
User documentation
This is the documentation for the person who will actually be using the system. It contains those
details that are needed to make the system operate as it should. Items normally included in the
user documentation will be
examples of output screens
examples of valid input
methods of input of data
error messages and what to do when they appear
how to install the system onto specific hardware.
Some user documentation is part of the software and can be called up onto the screen when it is
needed. This type of documentation is called on-screen help.
1.7.h Testing and Implementation Planning
Any system needs to be tested to ensure that it works. This seems to be a fairly obvious
statement, but in reality such testing is impossible in all but the simplest of systems because it
simply is not possible to test every conceivable input to, or logical construction in, the system.
This difficulty means that testing that is to be done must be carefully planned and that it should
relate directly to the criteria referred to earlier in this chapter. Specific types of testing have
already been covered in 1.3.g.
When the system has been completed it has to be implemented so that it performs the tasks
for which it was designed. Initially, this involves
ensuring that the correct hardware is available
arranging for staff to be trained in the use of the new system
inputting the data to the data files, either manually or by downloading them from the original
system.
The system handover itself can be done in a number of ways:
Parallel running. Until the system can be considered fault free, the old and new systems are
run side by side, both doing the same processing. This allows results to be compared to
ensure that there is no problem with the new system. Such a system is ‘safe’ and also allows
staff training to be carried out, but it is obviously very expensive because of the need to do
everything twice. Parallel running is used in situations where the data is so valuable that there
must be no possibility of failure.
Pilot running. Key parts of the new system are run alongside the old system until it is
considered that they have been fully tested. This is a compromise with the idea of parallel
running, but it does not give a clear idea of the effects on the system of the large amounts of
data that are going to be encountered in the full application.
Big bang, or direct change. The old system is removed and the new system replaces it
completely and immediately.
Phasing. Parts of a system are replaced while the remaining parts are covered by the old
system. This allows for some testing of the new system to be done, and for staff training to
take place, but also allows for a back-up position if the new version does not work as
anticipated.
This topic is not fully expounded upon here, the candidate simply needing to understand that
there are different methods of implementation and be able to outline some of the methods along
the lines shown here. This topic will reappear for a fuller treatment in section 3.8.e.
1.7.i System Maintenance and the Software Life Span
Systems may be designed for a well defined purpose and may realise that purpose, hence they
would be considered successful. However, the original reasons for a particular system to be
created may change, the hardware may alter, the law governing a system may change. For many
reasons the original system may no longer be satisfactory. One solution would be to produce a totally new system; another would be to adapt the present one so that it can be used in the new circumstances. This process of reviewing the system is the main reason for producing technical documentation.
While the system is running it will need attention as faults are discovered; this again needs a technician with a set of maintenance documentation.
Computing and computer applications is a subject that changes so regularly through
improvements in technology, new ideas, different legal frameworks trying to keep pace, that, in
reality, a system should never be considered to be finished. Rather than being a linear process
with a beginning, a middle and an end, it should be thought of as a circular process, continually
returning to previous stages to fine tune them and take advantage of changing circumstances.