COCOMO Model
Key parameters which define the quality of any software
Modes of development
Boehm’s definition of systems
Types of Models
Advantages
Disadvantages
The Constructive Cost Model (COCOMO) is an algorithmic software cost estimation model developed by Barry Boehm. The model uses a basic regression formula, with parameters that are derived from historical project data and current project characteristics.
Basic COCOMO computes software development effort (and cost) as a function of program size. Program size is expressed in estimated thousands of source lines of code (KLOC/KSLOC).
Function Point Analysis and COCOMO are two main estimation methods for structured and object-oriented methodologies. COCOMO is widely used for estimation where the Rational Unified Process is followed.
2. Introduction
COCOMO is one of the most widely used software estimation models in the world
It was developed by Barry Boehm in 1981
COCOMO predicts the effort and schedule for software product development based on inputs relating to the size of the software and a number of cost drivers that affect productivity
3. COCOMO Models
COCOMO has three different models that reflect the complexity:
the Basic Model
the Intermediate Model
and the Detailed Model
4. The Development Modes: Project Characteristics
Organic Mode
• Relatively small, simple software projects
• Small teams with good application experience work to a set of less than rigid requirements
• Similar to previously developed projects
• Relatively small and requires little innovation
Semidetached Mode
• Intermediate (in size and complexity) software projects in which teams with mixed experience levels must meet a mix of rigid and less than rigid requirements
5. Contd…
Embedded Mode
• Software projects that must be developed within a set of tight hardware, software, and operational constraints
6. COCOMO: Some Assumptions
The primary cost driver is the number of Delivered Source Instructions (DSI) / Delivered Lines of Code developed by the project
COCOMO estimates assume that the project will enjoy good management by both the developer and the customer
Assumes the requirements specification is not substantially changed after the plans and requirements phase
7. Basic COCOMO
Basic COCOMO is good for quick, early, rough order-of-magnitude estimates of software costs
It does not account for differences in hardware constraints, personnel quality and experience, use of modern tools and techniques, and other project attributes known to have a significant influence on software costs, which limits its accuracy
8. Basic COCOMO Model: Formula
E = a_b (KLOC)^b_b
D = c_b (E)^d_b
P = E / D
where E is the effort applied in person-months, D is the development time in chronological months, KLOC / KDSI is the estimated number of delivered lines of code for the project (expressed in thousands), and P is the number of people required. The coefficients a_b, b_b, c_b, and d_b are given in the next slide.
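As a quick way to apply these formulas, here is a minimal Python sketch. The coefficient values are the standard Basic COCOMO constants from Boehm (1981) for the three development modes; the function name and dictionary layout are illustrative, not from the slides.

```python
# Basic COCOMO -- a minimal sketch of the formulas above.
# Coefficients (a_b, b_b, c_b, d_b) are the standard Basic COCOMO
# values from Boehm (1981) for the three development modes.
COEFFICIENTS = {
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode):
    """Return (effort in person-months, schedule in months, average staff)."""
    a, b, c, d = COEFFICIENTS[mode]
    effort = a * kloc ** b        # E = a_b (KLOC)^b_b
    schedule = c * effort ** d    # D = c_b (E)^d_b
    return effort, schedule, effort / schedule  # P = E / D

# 32 KDSI semidetached project -- reproduces the worked example in slide 12
e, d, p = basic_cocomo(32, "semidetached")
print(f"Effort = {e:.0f} PM, Schedule = {d:.0f} months, Staff = {p:.0f}")
```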
11. Basic COCOMO Model: Limitation
Its accuracy is necessarily limited because it omits factors that have a significant influence on software costs
The Basic COCOMO estimates are within a factor of 1.3 of the actuals only 29% of the time, and within a factor of 2 only 60% of the time
12. Basic COCOMO Model: Example
We have determined our project fits the characteristics of Semi-Detached mode
We estimate our project will have 32,000 Delivered Source Instructions
Using the formulas, we can estimate:
Effort = 3.0 * (32)^1.12 = 146 man-months
Schedule = 2.5 * (146)^0.35 = 14 months
Productivity = 32,000 DSI / 146 MM = 219 DSI/MM
Average Staffing = 146 MM / 14 months = 10 FSP
13. Function Point Analysis
What is Function Point Analysis (FPA)?
It is designed to estimate and measure the time, and thereby the cost, of developing new software applications and maintaining existing software applications.
It is also useful in comparing and highlighting opportunities for productivity improvements in software development.
It was developed by A.J. Albrecht of the IBM Corporation in 1979.
The main other approach used for measuring the size, and therefore the time required, of a software project is lines of code (LOC), which has a number of inherent problems.
14. Function Point Analysis
How is Function Point Analysis done?
Working from the project design specifications, the following system functions are measured (counted):
Inputs
Outputs
Files
Inquiries
Interfaces
15. Function Point Analysis
These function-point counts are then weighted (multiplied) by their degree of complexity:
             Simple   Average   Complex
Inputs          2        4         6
Outputs         3        5         7
Files           5       10        15
Inquiries       2        4         6
Interfaces      4        7        10
16. Function Point Analysis
A simple example:
inputs
3 simple X 2 = 6
4 average X 4 = 16
1 complex X 6 = 6
outputs
6 average X 5 = 30
2 complex X 7 = 14
files
5 complex X 15 = 75
inquiries
8 average X 4 = 32
interfaces
3 average X 7 = 21
4 complex X 10 = 40
Unadjusted function points 240
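A short Python sketch of this counting step may help; the weights are taken from the table in slide 15, and the dictionary names are illustrative.

```python
# Unadjusted function points -- a minimal sketch of the count above.
# Each entry is (simple, average, complex); weights are from slide 15.
WEIGHTS = {
    "inputs":     (2, 4, 6),
    "outputs":    (3, 5, 7),
    "files":      (5, 10, 15),
    "inquiries":  (2, 4, 6),
    "interfaces": (4, 7, 10),
}

# Counts from the example: (simple, average, complex) per category
counts = {
    "inputs":     (3, 4, 1),
    "outputs":    (0, 6, 2),
    "files":      (0, 0, 5),
    "inquiries":  (0, 8, 0),
    "interfaces": (0, 3, 4),
}

unadjusted_fp = sum(
    n * w for cat in counts for n, w in zip(counts[cat], WEIGHTS[cat])
)
print(unadjusted_fp)  # 240
```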
17. Function Point Analysis
In addition to these individually weighted function points, there are factors that affect
the project and/or system as a whole. There are a number (~35) of these factors
that affect the size of the project effort, and each is ranked from “0”- no influence to
“5”- essential.
The following are some examples of these factors:
Is high performance critical?
Is the internal processing complex?
Is the system to be used in multiple sites and/or by multiple organizations?
Is the code designed to be reusable?
Is the processing to be distributed?
and so forth . . .
18. Function Point Analysis
Continuing our example . . .
Complex internal processing = 3
Code to be reusable = 2
High performance = 4
Multiple sites = 3
Distributed processing = 5
Project adjustment factor = 17
Adjustment calculation:
Adjusted FP = Unadjusted FP X [0.65 + (adjustment factor X 0.01)]
= 240 X [0.65 + (17 X 0.01)]
= 240 X [0.82]
= 197 Adjusted function points
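The adjustment step is just as direct to express in code; here is a minimal sketch using the 0.65 + 0.01 X (adjustment factor) formula from this slide, with illustrative variable names.

```python
# Value adjustment -- a minimal sketch of the calculation above.
unadjusted_fp = 240
ratings = [3, 2, 4, 3, 5]   # degree-of-influence ratings from the example

adjustment_factor = sum(ratings)                                 # 17
adjusted_fp = unadjusted_fp * (0.65 + adjustment_factor * 0.01)  # 240 * 0.82
print(round(adjusted_fp))   # 197 adjusted function points
```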
19. Function Point Analysis
But how long will the project take and how much will it cost?
As previously measured, programmers in our organization average 18 function points per month. Thus . . .
197 FP divided by 18 = 11 man-months
If the average programmer is paid $5,200 per month (including benefits), then the [labor] cost of the project will be . . .
11 man-months X $5,200 = $57,200
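And the final effort and cost step, again as a minimal sketch; the productivity (18 FP per month) and salary ($5,200 per month) figures are the example's own assumptions, not general constants.

```python
# Effort and labor cost -- a minimal sketch of the estimate above.
import math

adjusted_fp = 197
fp_per_month = 18        # measured organizational productivity (example value)
monthly_cost = 5_200     # programmer cost per month, incl. benefits (example)

effort_months = math.ceil(adjusted_fp / fp_per_month)   # 11 man-months
labor_cost = effort_months * monthly_cost               # $57,200
print(effort_months, labor_cost)
```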
20. Function Point Analysis
Because function point analysis is independent of the language used, development platform, etc., it can be used to identify the productivity benefits of . . .
One programming language over another
One development platform over another
One development methodology over another
One programming department over another
Before-and-after gains in investing in programmer training
And so forth . . .
21. Function Point Analysis
But there are problems and criticisms:
Function point counts are affected by project size
Difficult to apply to massively distributed systems or to systems with very complex internal processing
Difficult to define logical files from physical files
The validity of the weights that Albrecht established – and the consistency of their application – has been challenged
Different companies will calculate function points slightly differently, making intercompany comparisons questionable