The document proposes an approach called APPEAR for predicting software performance in component-based systems. APPEAR uses both structural and statistical modeling techniques. It consists of two main parts: (1) calibrating a statistical regression model by measuring performance of existing applications, and (2) using the calibrated model to predict performance of new applications. Both parts are based on a model that describes relevant execution properties in terms of a "signature". The method supports flexible choice of parts modeled structurally versus statistically. It is being validated on two industrial case studies.
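APPEAR's calibration step is, at its core, a regression fit over measured signature parameters. The sketch below shows that idea with ordinary least squares; the signature features (call count, payload size) and all numbers are invented for illustration and are not taken from the APPEAR paper.

```python
# Hypothetical sketch: calibrate a linear model on measured executions,
# then predict the latency of a new application from its "signature".
# Feature names and data are invented for illustration.

def calibrate(signatures, measured_times):
    """Least-squares fit of time ~ w0 + w1*calls + w2*kilobytes."""
    rows = [[1.0] + list(s) for s in signatures]  # add bias column
    n = len(rows[0])
    # Normal equations X^T X w = X^T y.
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    xty = [sum(r[i] * t for r, t in zip(rows, measured_times)) for i in range(n)]
    # Gaussian elimination with partial pivoting (fine for a few features).
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[pivot] = xtx[pivot], xtx[col]
        xty[col], xty[pivot] = xty[pivot], xty[col]
        for r in range(col + 1, n):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, n):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (xty[r] - sum(xtx[r][c] * w[c] for c in range(r + 1, n))) / xtx[r][r]
    return w

def predict(w, signature):
    return w[0] + sum(wi * x for wi, x in zip(w[1:], signature))

# Measured (calls, kilobytes) -> response time in ms: time = 2 + 3*calls + 0.5*kb
sigs = [(1, 2), (2, 4), (3, 1), (4, 8), (5, 5)]
times = [2 + 3 * c + 0.5 * k for c, k in sigs]
w = calibrate(sigs, times)
print(round(predict(w, (6, 2)), 2))  # -> 21.0
```

Once calibrated on existing applications, `predict` plays the role of APPEAR's second part: estimating the performance of a new application from its signature alone.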
Performance Evaluation using Blackboard Technique in Software Architecture (Editor IJCATR)
Validation of software systems is most valuable at the early stages of the development cycle. Evaluation of functional requirements is supported by clear, well-established approaches, but no comparable strategy exists for evaluating non-functional requirements such as performance. Since satisfying non-functional requirements has a significant effect on the success of software systems, sound techniques for their evaluation are needed. Moreover, if software performance is specified in terms of performance models, it can be evaluated early in the development cycle. Modeling and evaluating non-functional requirements at the software architecture level, which is produced early in development and prior to implementation, is therefore highly effective.
We propose an approach for evaluating the performance of software systems at the software architecture level, based on the blackboard technique. In this approach, the software architecture using the blackboard technique is first described with UML use case, activity, and component diagrams. The UML model is then transformed into an executable model based on timed colored Petri nets (TCPN). By executing this model and analyzing the results, non-functional requirements including performance (such as response time) can be evaluated at the software architecture level.
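Executing a timed Petri net model to estimate response time can be pictured minimally: each transition consumes a token and emits one after a firing delay, and the arrival time of a token in the final place is the predicted response time. The two-transition net below is an invented toy, not the paper's model.

```python
import heapq

# Hypothetical timed net with two sequential transitions:
# request -> [parse, 5 ms] -> p1 -> [render, 12 ms] -> done.
TRANSITIONS = [("request", "p1", 5.0), ("p1", "done", 12.0)]

def simulate(initial_tokens):
    """Return the arrival time of the first token in place 'done'."""
    events = [(0.0, "request") for _ in range(initial_tokens)]
    heapq.heapify(events)
    while events:
        t, place = heapq.heappop(events)
        if place == "done":
            return t
        for src, dst, delay in TRANSITIONS:
            if src == place:
                # Fire the transition: token reappears downstream after delay.
                heapq.heappush(events, (t + delay, dst))
                break
    return None

print(simulate(1))  # -> 17.0 (5 ms parse + 12 ms render)
```

Real TCPN tools add colored tokens, guards, and concurrency, but the response-time question answered by executing the model is the same.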
Estimation of resources, cost, and schedule for a software engineering effort requires experience, access to good historical information, and the courage to commit to quantitative predictions when only qualitative information exists. Halstead's measure, the COCOMO model, and the COCOMO II model are estimation techniques used for software development and maintenance.
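The basic COCOMO model estimates effort as E = a · (KLOC)^b person-months and schedule as D = c · E^d months, with published constants per project mode; a minimal sketch:

```python
# Basic COCOMO: effort E = a * KLOC^b (person-months),
# schedule D = c * E^d (months). Constants from Boehm's basic model.
MODES = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode="organic"):
    a, b, c, d = MODES[mode]
    effort = a * kloc ** b       # person-months
    schedule = c * effort ** d   # months
    return round(effort, 1), round(schedule, 1)

print(basic_cocomo(32))  # 32 KLOC organic project -> (91.3, 13.9)
```

COCOMO II refines this with scale factors and cost drivers, but the power-law shape of the estimate is the same.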
Recovering a software architecture from source code is challenging. Automated methods generally produce architectural descriptions that are not very useful, while manual architecture recovery methods are very labour-intensive. SyMAR is a manual software architecture recovery method that aims to efficiently extract an architecture description from vertical slices through the software system.
STATISTICAL ANALYSIS FOR PERFORMANCE COMPARISON (ijseajournal)
Performance, responsiveness, and scalability are make-or-break qualities for software, and nearly everyone runs into performance problems at one time or another. This paper discusses the performance issues faced during the Pre Examination Process Automation System (PEPAS), implemented in Java technology, the challenges faced during the life cycle of the project, and the mitigation actions performed. It compares three Java technologies and shows how improvements in the application's response time were achieved through statistical analysis. The paper concludes with an analysis of the results.
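Comparing response times across technology stacks usually comes down to summary statistics over repeated measurements. The sketch below uses invented samples (not PEPAS data) for three hypothetical stacks:

```python
import statistics

# Invented response-time samples (ms) for three hypothetical Java stacks.
samples = {
    "stack_a": [120, 130, 125, 128, 122],
    "stack_b": [95, 101, 98, 97, 99],
    "stack_c": [140, 135, 150, 142, 138],
}

def summarize(data):
    """Mean and sample standard deviation per technology, fastest first."""
    rows = [(name, statistics.mean(xs), statistics.stdev(xs))
            for name, xs in data.items()]
    return sorted(rows, key=lambda r: r[1])

for name, mean, sd in summarize(samples):
    print(f"{name}: mean={mean:.1f} ms, sd={sd:.2f} ms")
```

A fuller analysis would add a significance test (e.g. ANOVA or pairwise t-tests) before declaring one stack faster, since overlapping spreads can make a mean difference meaningless.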
Integrating Profiling into MDE Compilers (ijseajournal)
Scientific computation demands ever more performance from its algorithms, and new massively parallel architectures suit these algorithms well, offering high performance and power efficiency. Unfortunately, because parallel programming for these architectures requires a complex distribution of tasks and data, developers find it difficult to implement their applications effectively. Although source-to-source approaches aim to provide a low learning curve for parallel programming and to exploit architectural features to create optimized applications, programming remains difficult for newcomers. This work aims to improve performance by feeding back to the high-level models execution data from a profiling tool, enhanced with advice computed by an analysis engine. To keep the link between execution and model, the process is based on a traceability mechanism. Once the model is automatically annotated, it can be refactored to obtain better performance in the regenerated code. The approach thus maintains coherence between model and code while harnessing the power of parallel architectures. To illustrate the key points of the approach, we provide an experimental example in a GPU context, using a transformation chain from UML-MARTE models to OpenCL code.
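The traceability mechanism can be pictured as a map from generated-code symbols back to model elements, so profiling counters can annotate the model. Everything below (kernel names, model element paths, timings, the hotspot threshold) is an invented illustration of that idea, not the paper's tool.

```python
# Hypothetical traceability table: generated kernel names -> UML model elements.
trace = {
    "kernel_matmul_0": "Model::Pipeline::MatMulAction",
    "kernel_relu_1":   "Model::Pipeline::ReluAction",
}

# Invented profiler output: symbol -> elapsed milliseconds.
profile = {"kernel_matmul_0": 42.0, "kernel_relu_1": 3.5}

def annotate(trace, profile, threshold_ms=10.0):
    """Attach timings to model elements and flag hotspots for refactoring."""
    annotations = {}
    for symbol, ms in profile.items():
        element = trace.get(symbol)
        if element is not None:
            annotations[element] = {"ms": ms, "hotspot": ms > threshold_ms}
    return annotations

print(annotate(trace, profile))
```

The annotated elements flagged as hotspots are the ones a developer would refactor in the model before regenerating the OpenCL code.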
THE UNIFIED APPROACH FOR ORGANIZATIONAL NETWORK VULNERABILITY ASSESSMENT (ijseajournal)
Today's business network infrastructure changes rapidly, with new servers, services, connections, and ports added frequently, sometimes daily, and with an uncontrolled influx of laptops, storage media, and wireless networks. With the growing number of vulnerabilities and exploits, coupled with the continual evolution of IT infrastructure, organizations now require more frequent vulnerability assessments. This paper proposes a new approach to network vulnerability assessment, the Unified Process for Network Vulnerability Assessment (hereafter, unified NVA), derived from the Unified Software Development Process (Unified Process), a popular iterative and incremental software development process framework.
This paper investigates the applicability and capability of AI techniques for effort-estimation prediction. Neuro-fuzzy models prove very robust: they are characterized by fast computation and can handle distorted data. Given the non-linearity present in the data, they are an efficient quantitative tool for predicting effort. A one-hidden-layer network, named OHLANFIS, has been developed in the MATLAB simulation environment. The initial parameters of OHLANFIS are identified using the subtractive clustering method, and the parameters of the Gaussian membership function are optimally determined using a hybrid learning algorithm. The analysis shows that the effort-estimation model developed with the OHLANFIS technique outperforms a standard ANFIS model.
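The Gaussian membership function used in such neuro-fuzzy models is mu(x) = exp(-(x - c)^2 / (2*sigma^2)); a minimal sketch, with an illustrative centre and width:

```python
import math

def gaussian_mf(x, centre, sigma):
    """Gaussian fuzzy membership: 1.0 at the centre, decaying with distance."""
    return math.exp(-((x - centre) ** 2) / (2 * sigma ** 2))

# Illustrative fuzzy set "medium effort" centred at 50 person-days, width 10.
print(gaussian_mf(50, 50, 10))            # -> 1.0 at the centre
print(round(gaussian_mf(60, 50, 10), 4))  # one sigma away -> 0.6065
```

In an ANFIS-style model, the hybrid learning algorithm tunes each rule's `centre` and `sigma` (via backpropagation) together with the consequent parameters (via least squares).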
Prioritizing Test Cases for Regression Testing: A Model-Based Approach (IJTET Journal)
Abstract: Testing is an important quality-control phase of the Software Development Life Cycle (SDLC), and various testing methodologies are used to test an application. Regression testing is performed to ensure that a modified feature or bug fix has not affected existing functionality; defects are identified by executing a set of test cases. When test suites are large, regression test-case selection alone cannot determine how much retesting is required to identify deviations. Test cases are therefore prioritized, changing the order of execution based on severity. In the proposed model-based approach, prioritized test cases are generated from UML diagrams (sequence and state chart); modified features are reflected in the generated model and in the number of states and transitions covered. The prioritized test cases are then clustered by severity using a dendrogram approach, decreasing the time and cost of regression testing.
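Severity-based prioritization reduces, at its core, to sorting test cases by a severity score and grouping them into clusters. The scores below (standing in for model coverage of the change) and the cluster boundaries are invented for illustration:

```python
# Hypothetical severity scores derived from model coverage
# (states/transitions touched by a change); the values are invented.
test_cases = {
    "TC1": 0.9,   # covers the modified transition directly
    "TC2": 0.4,
    "TC3": 0.85,
    "TC4": 0.1,
    "TC5": 0.5,
}

def prioritize(cases):
    """Highest-severity test cases run first."""
    return sorted(cases, key=cases.get, reverse=True)

def cluster(cases, boundaries=(0.7, 0.3)):
    """Bucket test cases into high / medium / low severity clusters."""
    hi, mid = boundaries
    groups = {"high": [], "medium": [], "low": []}
    for name in prioritize(cases):
        s = cases[name]
        key = "high" if s >= hi else "medium" if s >= mid else "low"
        groups[key].append(name)
    return groups

print(prioritize(test_cases))        # -> ['TC1', 'TC3', 'TC5', 'TC2', 'TC4']
print(cluster(test_cases)["high"])   # -> ['TC1', 'TC3']
```

The paper's dendrogram approach builds these groups hierarchically rather than with fixed thresholds, but the effect is the same: run the high-severity cluster first when the retesting budget is tight.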
The presentation was prepared and delivered to fulfill the curriculum requirement of PROTON Business School, Indore, in the 2nd trimester of the IT & Systems group.
Program analysis is useful for debugging, testing, and maintaining software systems because it provides information about the structure of, and relationships among, a program's modules. In general, program analysis is performed on either a control flow graph (CFG) or a dependence graph (DG). For aspect-oriented programming (AOP), however, a CFG or DG alone cannot model the properties of aspect-oriented (AO) programs. Although AOP supports modular representation of crosscutting concerns, a suitable program-analysis model is needed to gather information on an AO program's structure and thereby minimize maintenance effort. This paper proposes the Aspect-Oriented Dependence Flow Graph (AODFG), an intermediate representation that models the structure of aspect-oriented programs. AODFG is formed by merging the CFG and DG, so more information is gathered about the dependencies between join points, advice, aspects, and their associated constructs, together with the flow of control from one statement to another. We discuss the performance of AODFG by analysing example AspectJ programs taken from the AspectJ Development Tools (AJDT).
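Merging a control flow graph with a dependence graph amounts to taking the union of their labelled edge sets over a shared set of statement nodes. A toy sketch of that idea (the statements and edges are invented, not AJDT output):

```python
# Toy sketch: nodes are statement ids; a merged graph keeps both
# control-flow and dependence edges, labelled by kind.
cfg_edges = [(1, 2), (2, 3), (3, 4)]   # control flow: s1 -> s2 -> s3 -> s4
dg_edges = [(1, 3), (2, 4)]            # data dependences (invented)
aspect_edges = [(2, 5)]                # e.g. advice woven at a join point

def merge(cfg, dg, aspect):
    """Union of labelled edge sets, in the spirit of an AODFG-style model."""
    merged = {}
    for label, edges in (("flow", cfg), ("dep", dg), ("advice", aspect)):
        for u, v in edges:
            merged.setdefault((u, v), set()).add(label)
    return merged

g = merge(cfg_edges, dg_edges, aspect_edges)
print(sorted(g))   # all edges of the merged graph
print(g[(1, 3)])   # -> {'dep'}
```

Keeping the edge labels distinct is what lets later analyses ask both flow questions ("what executes next?") and dependence questions ("what does this statement rely on?") on one structure.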
Availability Assessment of Software Systems Architecture Using Formal Models (Editor IJCATR)
Significant effort has been devoted to analyzing, designing, and implementing information systems that process information and data and solve a variety of problems. On one hand, the complexity of contemporary systems and the striking growth in the variety and volume of information have led to systems with many components and elements and an increasingly complex structure and organization. On the other hand, systems must meet all of the stakeholders' functional and non-functional requirements. Since evaluating these requirements before the design and implementation phases consumes less time and reduces costs, the best time to measure the evaluable behavior of a system is when its software architecture becomes available. One way to evaluate a software architecture is to create an executable model of it.
The present research assessed availability, taking repair, maintenance, and accident-time parameters into consideration; failures of both software and hardware components were considered in the architecture of the software systems. To describe the architecture conveniently, the authors used the Unified Modeling Language (UML); however, because UML is informal, they also utilized Colored Petri Nets (CPN) for the assessment. Finally, the researchers evaluated a CPN-based executable model of the architecture using CPN Tools.
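Steady-state availability of a repairable component is commonly computed as A = MTBF / (MTBF + MTTR), and a serial system is only up when every component is, so its availability is the product of the components'. A sketch with invented component figures:

```python
# Invented component figures: (mean time between failures, mean time to repair), hours.
components = {
    "web_server": (1000.0, 2.0),
    "database":   (500.0, 4.0),
    "disk_array": (2000.0, 8.0),
}

def availability(mtbf, mttr):
    """Steady-state availability of one repairable component."""
    return mtbf / (mtbf + mttr)

def system_availability(parts):
    """Serial composition: every component must be up."""
    a = 1.0
    for mtbf, mttr in parts.values():
        a *= availability(mtbf, mttr)
    return a

print(round(availability(1000.0, 2.0), 5))       # -> 0.998
print(round(system_availability(components), 4)) # -> 0.9861
```

A CPN-based model generalizes this closed-form view: by simulating failure and repair transitions it can also capture redundancy, shared repair crews, and maintenance windows that the simple product formula cannot.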
HW/SW Partitioning Approach on Reconfigurable Multimedia System on Chip (CSCJournals)
Due to the complexity and high performance requirements of multimedia applications, the design of embedded systems is subject to various design constraints such as execution time, time to market, and energy consumption. Several joint software/hardware design (co-design) approaches have been proposed to help the designer find a match between application and architecture that satisfies these constraints. This paper presents a new methodology for hardware/software partitioning on a reconfigurable multimedia system-on-chip, based on a dynamic step and a static step: the first uses dynamic profiling, and the second uses the Design Trotter tools. Our approach is validated through 3D image synthesis.
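A profiling-driven partitioning step can be sketched greedily: move the most compute-hungry tasks to hardware until the reconfigurable area budget is exhausted. The task names, cycle counts, area costs, and budget below are all invented, and the paper's actual method is more sophisticated than this greedy sketch:

```python
# Invented profile: task -> (software cycles, hardware area units if moved to FPGA).
tasks = {
    "idct":       (900_000, 40),
    "motion_est": (1_500_000, 70),
    "quantize":   (200_000, 15),
    "huffman":    (300_000, 25),
}

def partition(profile, area_budget):
    """Greedy HW/SW split: hottest tasks go to hardware while area remains."""
    hw, sw, used = [], [], 0
    for name, (cycles, area) in sorted(profile.items(),
                                       key=lambda kv: kv[1][0], reverse=True):
        if used + area <= area_budget:
            hw.append(name)
            used += area
        else:
            sw.append(name)
    return hw, sw

hw, sw = partition(tasks, area_budget=100)
print(hw)  # -> ['motion_est', 'huffman']
print(sw)  # -> ['idct', 'quantize']
```

Note how the greedy order skips `idct` (too large for the remaining area) in favour of the smaller `huffman`; exact formulations treat this as a knapsack problem.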
Software Metrics: Introduction
Topics: attributes of software metrics, activities of a measurement process, types of metrics, and normalization of metrics.
Metrics help software engineers gain insight into the design and construction of software. To compare two projects we need to know their size and complexity, and normalizing the measures makes such a comparison possible. There are two ways to normalize:
Size-Oriented Metrics
Function-Oriented Metrics
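Size-oriented normalization divides raw measures by KLOC (thousands of lines of code) so that projects of different sizes become comparable. A sketch with invented project data:

```python
# Invented project data: (lines of code, defects found, effort in person-months).
projects = {
    "alpha": (12_100, 134, 24),
    "beta":  (27_200, 321, 62),
}

def size_oriented_metrics(loc, defects, effort_pm):
    """Normalize quality and productivity measures by KLOC."""
    kloc = loc / 1000
    return {
        "defects_per_kloc": round(defects / kloc, 2),
        "kloc_per_person_month": round(kloc / effort_pm, 2),
    }

for name, (loc, defects, effort) in projects.items():
    print(name, size_oriented_metrics(loc, defects, effort))
```

Although beta is more than twice alpha's size, the normalized defect densities (about 11.1 and 11.8 defects per KLOC) are directly comparable; function-oriented metrics normalize by function points instead of lines of code.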
The International Journal of Engineering and Science (IJES) (theijes)
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
A Comparative Study of Forward and Reverse Engineering (ijsrd.com)
With software development booming compared with 20 years ago, software developed in the past may or may not have well-supported documentation maintained during its evolution. This widens the specification gap between the documentation and the legacy code for further evolution and updates. Understanding the decisions made during the development of legacy code is the prime goal, and it is well supported by reverse engineering. In this paper, we compare transformational forward engineering, in which a stepwise abstraction is obtained, with the transformational reverse methodology. Because the forward transformation process produces overlapping decisions, performance is affected; hence the transformational method of reverse engineering, which is a backwards forward-engineering process, is suitable. Moreover, the design knowledge recovered is domain knowledge that forward engineers can reuse in the future.
Modeling and Evaluation of Performance and Reliability of Component-based So... (Editor IJCATR)
Validation of software systems is most valuable at the early stages of the development cycle. Evaluation of functional requirements is supported by clear, well-established approaches, but no comparable strategy exists for evaluating non-functional requirements such as performance and reliability. Since satisfying non-functional requirements has a significant effect on the success of software systems, sound techniques for their evaluation are needed. Moreover, if software performance is specified in terms of performance models, it can be evaluated early in the development cycle. Modeling and evaluating non-functional requirements at the software architecture level, which is produced early in development and prior to implementation, is therefore highly effective.
We propose an approach for evaluating the performance and reliability of software systems at the software architecture level, based on formal models (hierarchical timed colored Petri nets). In this approach, the software architecture is described with UML use case, activity, and component diagrams; the UML model is then transformed by a proposed algorithm into an executable model based on hierarchical timed colored Petri nets (HTCPN). By executing this model and analyzing the results, non-functional requirements including performance (such as response time) and reliability can be evaluated at the software architecture level.
"Impact of front-end architecture on development cost", Viktor Turskyi (Fwdays)
I have heard many times that architecture is not important for the front end. I have also often seen developers implement front-end features just by following a framework's standard rules, assume that this is enough to launch the project successfully, and then watch the project fail. How can this be prevented, and which approach should be chosen? I have launched dozens of complex projects, and in this talk we will analyze which approaches have worked for me and which have not.
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell us all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will go into the details of how best to design a sturdy architecture within ODC.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality (Inflectra)
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... (UiPathCommunity)
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Search and Society: Reimagining Information Access for Radical FuturesBhaskar Mitra
The field of Information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build inspired by diverse explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies needs to be explicitly articulated, and we need to develop theories of change in context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
State of ICS and IoT Cyber Threat Landscape Report 2024 preview
Performance Prediction for Software Architectures
Evgeni Eskenazi, Alexandre Fioukov, Dieter K. Hammer
Department of Mathematics and Computing Science, Eindhoven University of Technology,
Postbox 513, 5600 MB Eindhoven, The Netherlands
Phone: +31 (0) 40 – 247
E-mail: { e.m.eskenazi, a.v.fioukov, d.k.hammer }@tue.nl
Abstract— The quantitative evaluation of certain quality attributes— performance, timeliness, and reliability— is important for component-based embedded systems. We propose an approach for the performance estimation of component-based software that forms a product family. The proposed approach, Analysis and Prediction of Performance for Evolving Architectures (APPEAR), employs both structural and stochastic modeling techniques. The former are used to reason about the properties of components, while the latter allow one to abstract from irrelevant details of the execution architecture. The method consists of two main parts: (1) calibrating a statistical regression model by measuring existing applications and (2) using the calibrated model to predict the performance of new applications. Both parts are based on a model of the application in order to describe relevant execution properties in terms of a so-called signature. A predictor that is determined by statistical regression techniques is used to relate the values of the signature to the observed or predicted performance measures. APPEAR supports the flexible choice of software parts that need structural modeling and ones that need statistical modeling. Thereby it is assumed that the latter are not seriously modified during the software evolution. The suggested approach is being validated with two industrial case studies in the Consumer Electronics and Professional Systems domains.

Keywords— Performance prediction; Embedded Systems; Software architecture

I. INTRODUCTION

During the past years, the complexity and the amount of software, especially within product families for embedded systems, has grown significantly. Unfortunately, many existing approaches turned out to be unsuitable for the evaluation of the quality attributes (e.g. performance) of the entire software system. This also holds for the popular analytical approaches, i.e. decomposition of the architecture into smaller parts and reasoning about the necessary quality aspects, starting from the lowest level. We started by investigating how far we could push the limits of state-of-the-art analytic methods. Since we did this investigation in an industrial setting, we quickly ran against the following basic limitations:
• Analytical methods are based on an analysis of all possible execution traces. The high complexity of software for product families, with hundreds of parameters influencing the software qualities, causes these approaches to fail. Accounting for all performance-critical details of the software and attempting to reason in an analytical way leads to a combinatorial explosion.
• Analytical methods are mainly suitable for investigating the WCET/BCET (Worst-Case/Best-Case Execution Time). Especially the WCET is important for safety-critical systems. However, for a typical consumer electronics or professional system, the architects are usually more interested in the average performance, because the limits might not be representative at all.
• Analytical methods rely on models of the hardware resources to estimate the WCET/BCET. Due to the non-determinism in modern computing facilities— caches, pipelines, and branch predictors— the analytical methods often result in over-pessimistic estimates.
One of the possible solutions for the aforementioned problems is the use of statistical techniques like regression analysis. Such an analysis allows one to construct a statistical predictor, based on measurements of the existing parts of the software, and use it for the prediction of the quality attributes of newly developed parts. The use of regression techniques for
software performance prediction is a promising direction, since less and less software is created from scratch. There always exists an initial software stack (reusable components, previous versions, etc.) that can be used for measurements and predictor training.
The statistical approach abstracts from the details of the system. However, this abstraction can cause other problems like decreased accuracy of the prediction and excessive time for measurements and construction of the predictor. Thus, the relevant details should be included explicitly into the approach to shorten the predictor construction time and to raise the accuracy.
As a compromise, a mix of analytical and statistical techniques for the performance evaluation is considered. This approach is based on knowledge of the application structure and the use of statistical methods in order to abstract from irrelevant architectural details.
The paper is structured as follows. Section II summarizes related work. In Section III, the requirements for the APPEAR method are given. Section IV describes the basic constituents and essential steps of the method. Section V presents the results of building the performance prediction model for a part of a Medical Imaging software system. Finally, Section VI concludes the paper and sketches future work.

II. RELATED WORK

During the past decade, significant research effort was put into the performance-engineering domain. The main investigations were aimed at the development of methods for the performance estimation of software-intensive systems and defining the theoretical basis for software performance engineering [7]. One of the most critical issues in software architecting is early performance estimation, based on architectural descriptions or executable prototypes.
The classical approaches [7] use queuing network models, derived from the structural description of the architecture, and performance-critical use cases. A similar approach that also includes specific architecture description styles is presented in [1]. A remarkable tool for the transformation of software architecture descriptions into queuing networks and the subsequent performance analysis is described in [8].
An interesting approach is proposed in [5]. The executable prototype (a simulation model) generates traces that are expressed in a specific syntax (angio-traces). These traces are used for building performance prediction models, based on layered queuing networks.
Stochastic Petri nets are also widely used for the evaluation of software performance. In [6], an approach for the generation of Petri nets from UML collaboration- and statechart-diagrams is proposed. These Petri nets are then used to estimate different performance characteristics.
An example of the use of regression techniques is presented in [4]. In this approach, the results of software profiling are used for the prediction of software reliability.

III. REQUIREMENTS

The aim of the APPEAR method is to support architects in analyzing the performance of new applications during the early phases of product development.
In this paper, performance is considered in terms of CPU utilization and end-to-end response time of an application for different use cases.
The essential requirements for the APPEAR method are the following:
1. Allow performance prediction of a new software part or of a complete software system where some parts are added or modified.
2. Allow the localization of performance bottlenecks by giving insight into the execution architecture of the software.
3. Ensure a reasonable level of accuracy for performance prediction. The accuracy level is product-family dependent. A survey revealed that architects consider an accuracy of 50% to 80% as a definite improvement with respect to the presently used methods.
4. The method should be much faster than implementation and subsequent measurements of the system.

IV. APPEAR METHOD

This section sketches the APPEAR method and states some assumptions that enable its application.

A. Notion of signature

The signature of an application is a set of parameters that provide sufficient information for performance estimation.
We treat the performance as a function over the signature:
P: S → C.
In this formula, S = {S1, S2, ..., SN} is a signature vector with parameters Si, and C is a performance metric, like response time.
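The mapping P: S → C can be made concrete with a small sketch (Python/NumPy; the call counts and response times below are invented for illustration and are not data from the paper): a linear predictor is fitted by least squares over the signature instances of existing applications and then evaluated on a new signature instance.

```python
import numpy as np

# Hypothetical signature instances for five existing applications.
# Columns: memory allocation calls, disk calls, network calls.
S = np.array([
    [120, 10, 4],
    [300, 25, 9],
    [ 80,  5, 2],
    [210, 18, 7],
    [150, 12, 5],
], dtype=float)

# Invented response-time measurements (ms) for the same executions.
C = np.array([31.0, 76.5, 20.1, 54.8, 39.2])

# Fit a linear predictor P(S) = w1*S1 + w2*S2 + w3*S3 + b by least squares.
A = np.hstack([S, np.ones((S.shape[0], 1))])  # append intercept column
coef, *_ = np.linalg.lstsq(A, C, rcond=None)

# Evaluate P on a new application's signature instance.
s_new = np.array([180.0, 15.0, 6.0])
prediction = float(np.append(s_new, 1.0) @ coef)
print(round(prediction, 1))
```

Here the signature parameters play the role of S1, ..., SN and the fitted coefficients encode the predictor P; the paper's case study uses a MARS tool rather than plain linear regression.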
A performance prediction model is created by means of statistical regression analysis techniques (see e.g. [2] and [3]). These techniques define the relation between a signature of an application and a performance estimate by discovering the correlation between these two entities. Subsequently, this correlation can be used to extrapolate to new signature values and to predict the performance of new applications.
Identifying the signature requires answering the following questions:
1. Which of the hundreds of parameters have the strongest impact on the performance?
2. What is the quantitative dependency between these parameters and the performance?
Answering the first question helps to reduce the parameter space and to concentrate on the critical parameters only, while answering the second question allows one to predict the performance based on the experimental data.
An example of the signature of a hypothetical software application could look as follows:
S = {Number of memory allocation calls, Number of disk calls, Number of network calls}
A signature typically includes the types of calls that seriously influence the response time of an application. Note that it is important to distinguish between the signature type (see above) and a signature instance that contains actual values for a concrete execution scenario, e.g. S = {99, 66, 33}.
Usually, a signature is built in an iterative way: after each step overlooked parameters are added, and superfluous parameters are excluded.

B. Method essence

The APPEAR method assumes that the software stack of a product family consists of two parts: (1) applications and (2) a Virtual Service Platform (VSP). The former consist of components specific for different products, while the latter comprises a relatively stable set of services and does not seriously evolve during the software lifecycle. This is shown in Figure 1.
The stability of the VSP allows one to use the information about its performance for estimating the performance of applications that are built upon it. The signature of both already existing and not yet implemented applications can be described in terms of service calls to the VSP. By extrapolating the relation between the measured performance of the existing applications and their signature S = {S1, S2, ..., SN}, it is possible to predict the performance of new applications.
To get more insight into the execution architecture and its performance, it is also advisable to construct a high-level executable model of an application. Such a simulation model must capture relevant execution properties of an application. Relevant execution properties are those that have a significant impact on the performance, e.g. the most time-consuming service calls and important input/diversity parameters. These execution properties are said to form the signature of an application.

[Figure 1. Applications and Virtual Service Platform (VSP): variable applications 1-3 built on top of a stable VSP offering services S1, ..., S5; S = {S1, S2, ..., S5}, with Si denoting service i.]

The proposed method includes two main parts: (1) calibrating the predictor on the existing applications and (2) applying it to a new application to obtain an estimate of its performance.
The steps of the APPEAR method are described below (see also Figure 2):
Step 0, Virtual Service Platform identification. The software is divided into two parts: a stable VSP and variable applications (see Figure 1). The guidelines for VSP selection are sketched in section D.
Step 1, Definition of use cases for the existing applications. The relevant use cases for measuring the performance of the existing applications are defined.
Step 2, Collection of measurements. The defined use cases are executed (with different parameters) and the corresponding performance values are measured.
Step 3, Construction of a simulation model. A high-level simulation model of the execution architecture is built to gain insight into the performance of an application. This supports the extraction of a signature in step 4.
Step 4, Signature extraction for the existing applications. The simulation model is executed (the real system was already executed in step 2) in order to extract the signature, i.e. to obtain the values of the signature parameters.
Step 5, Construction of a prediction model. Based on the statistics gained in steps 2 and 4, it is possible to build and calibrate a predictor that translates a signature vector into a performance measure. Such a predictor may be constructed by employing (linear or non-linear) statistical regression analysis techniques.
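Step 4 reduces an execution (or simulation) trace to a signature instance by counting the performance-relevant VSP service calls. A minimal sketch, with hypothetical service names and trace format:

```python
from collections import Counter

# The signature *type*: which VSP call types are counted (hypothetical names).
SIGNATURE_TYPE = ["mem_alloc", "disk_read", "disk_write", "net_send"]

def extract_signature(trace):
    """Map a trace of VSP service calls to a signature instance (step 4)."""
    counts = Counter(trace)
    return [counts[service] for service in SIGNATURE_TYPE]

# A toy trace as it might be produced by one simulated use case.
trace = [
    "mem_alloc", "disk_read", "mem_alloc", "net_send",
    "mem_alloc", "disk_read", "mem_alloc", "disk_write",
]

print(extract_signature(trace))  # a signature instance: [4, 2, 1, 1]
```

The resulting vector is a signature instance in the sense of section A; under assumption 3 below (the order of service calls does not matter), simple counting suffices.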
[Figure 2. The steps of the APPEAR method: for the existing applications, 1. use cases definition and 2. measurements; 3. construction of an abstract simulation model and 4. signature extraction; 5. training of the predictor on the signature parameters S1-S5; for a new application, 6. use cases definition, 7. signature extraction, and 8. prediction.]
Step 6, Definition of the use cases for new applications. After having the predictor calibrated, it is possible to use it for assessing the performance of new applications. Possibly a new set of use cases has to be determined for these applications, e.g. if new features are defined.
Step 7, Signature extraction for the new applications. The model of the execution architecture of the new applications is simulated with the new use cases in order to extract the corresponding signature vector.
Step 8, Performance prediction for the new applications. Provided that the newly obtained signature agrees¹ with the statistics used for calibrating the predictor, it can be used to estimate the performance of the new application.
Notice that the proposed method benefits from an important property: during the evolution of a product family, the statistics upon which the predictor is calibrated continuously grow. This enhances the prediction quality and increases the coverage of the statistics with respect to the signature space.

C. Assumptions

The following assumptions must be fulfilled to apply the APPEAR method:
1. Applications are independent. The applications interact only with the VSP, but not with each other.
2. The services of the VSP are independent. Since the service calls are used as input for the prediction model, there should be no interactions that significantly influence the performance, e.g. via exclusive access to shared resources.
3. The order of service calls does not matter.
4. The application performance can be abstracted with a number of VSP service calls or another similar metric. It should be possible to obtain the application signature from its simulation model and to use this signature to predict performance. This means that the application must not perform CPU-intensive internal calculations. This condition is usually met for embedded control systems, the class of systems we are interested in.
5. Gradual product (family) evolution. During the evolution of a product family, a significant part of the software remains unchanged. If the new applications are completely new and independent from the existing parts, the prediction can fail because of the lack of statistics.
6. A large set of applications for training the predictor is available.

¹ In principle, a newly obtained signature may lie too far away from the signature space on which the predictor was calibrated. In this case, we have to deal with so-called outliers.
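Under these assumptions, the calibration part (steps 1-5) and the prediction part (steps 6-8) can be sketched end to end. The data, the linear predictor, and the simple range check for outliers (cf. the footnote) are all illustrative assumptions, not the paper's actual tooling:

```python
import numpy as np

def calibrate(signatures, measured):
    """Steps 2 and 5: fit a linear predictor on existing applications."""
    A = np.hstack([signatures, np.ones((len(signatures), 1))])
    coef, *_ = np.linalg.lstsq(A, measured, rcond=None)
    return coef

def predict(coef, signature, calibrated_signatures):
    """Step 8: estimate performance for a new signature, rejecting
    signatures that lie outside the calibrated statistics (outliers)."""
    lo = calibrated_signatures.min(axis=0)
    hi = calibrated_signatures.max(axis=0)
    if np.any(signature < lo) or np.any(signature > hi):
        raise ValueError("signature lies too far from the calibration statistics")
    return float(np.append(signature, 1.0) @ coef)

# Invented statistics: two-parameter signature instances and
# measured response times (ms) of existing applications.
S = np.array([[10, 2], [40, 9], [25, 5], [60, 12]], dtype=float)
C = np.array([14.0, 58.0, 35.0, 84.0])

coef = calibrate(S, C)
print(round(predict(coef, np.array([30.0, 6.0]), S), 1))  # prints 42.0
```

As the product family evolves, new (signature, measurement) pairs are simply appended to the statistics, which is the growing-coverage property noted above.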
D. Virtual Service Platform identification

The abstraction level of the VSP can be selected according to the following criteria:
• Within a product family, there is always a relatively stable part and parts that are frequently modified or added. The parts that are likely to change should be modeled, while the stable parts are captured by a predictor. For "new" parts, the performance estimation is important at the early architecting phases when only high-level descriptions or models are available. The stable part is treated as a "black box", addressed by a statistical predictor.
• For the variable part, insight into the performance-relevant parts of the execution architecture is needed. This means that a model for the performance-critical components must be built. Interactions between these components, their modification and substitution with other ones can influence the application performance.
• To extract the signature, it must be possible to relate this model to a number of relevant service calls, input parameters, etc.

V. METHOD APPLICATION TO A MEDICAL IMAGING SOFTWARE SYSTEM

This section describes our experience in building a prediction model for the response time of a part of a Medical Imaging software system. This experiment aims at validating the statistical part of the APPEAR method. In parallel, similar experiments are being performed in the Consumer Electronics domain: a prediction model is built for assessing the CPU utilization of TV software. Finalizing these experiments will allow checking the applicability of the APPEAR method to the Consumer Electronics domain. Because the experiments in the Consumer Electronics domain are still running, they are not described here.
The performance prediction model for the Medical Imaging software was created and then calibrated with different values of the signature vector as inputs and application response times (from the traces) as outputs. This model is intended to predict the response times of new applications, given their signatures.
The collected statistics were used as input for a tool implementing the Multivariate Adaptive Regression Splines (MARS) algorithm [3]. This tool determines an approximation formula for the prediction model. As an initial iteration, linear basis functions were used.
Thirty points were randomly selected from the statistics, and the cross-validation "leave-one-out" strategy was applied to them. This resulted in the distribution of the relative prediction error shown in Figure 3.

[Figure 3. The relative prediction error (vertical axis, 0 to 0.4) per point number (1 to 29).]

In this figure, one can distinguish three parts: one part with high prediction accuracy (points 11 to 24) and two parts with lower accuracy (the remaining points). So far, two possible reasons for the occurrence of these "low accuracy intervals" can be considered:
• There were not enough statistics in the neighborhood of these points because the points were actually outliers. The construction of the formula was dominated by the statistics from the intervals containing many more points. Consequently, the intervals with a larger number of points have higher accuracy.
• An improper set of basis functions was used for the construction of the formula. This set can handle only the points within a certain interval and fails for the rest of the points. Probably, linear approximation is not suitable here, and the shapes of the basis functions have to be changed.
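The leave-one-out strategy used above can be sketched as follows (invented sample points; plain linear least squares stands in for the MARS tool):

```python
import numpy as np

def loo_relative_errors(S, C):
    """Leave-one-out cross-validation: refit the predictor on all points
    except one, predict the held-out point, and record its relative error."""
    errors = []
    for i in range(len(C)):
        keep = np.arange(len(C)) != i
        A = np.hstack([S[keep], np.ones((int(keep.sum()), 1))])
        coef, *_ = np.linalg.lstsq(A, C[keep], rcond=None)
        pred = float(np.append(S[i], 1.0) @ coef)
        errors.append(abs(pred - C[i]) / C[i])
    return errors

# Invented points: one-parameter signatures and measured response times.
S = np.array([[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]])
C = np.array([2.1, 3.9, 6.2, 8.0, 9.8, 12.3])

errors = loo_relative_errors(S, C)
print(all(e < 0.35 for e in errors))  # True for this toy data
```

Plotting these per-point errors is exactly what Figure 3 shows for the thirty points of the case study; points whose error stands out are candidate outliers.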
VI. CONCLUSION

Our experiences in applying pure analytical approaches to assess the performance of industrial-scale software failed due to the combinatorial explosion of too many parameters. Thus, we decided to opt for a mix of analytical and statistical techniques.
The APPEAR (Analysis and Prediction of the Performance of Evolving Architectures) method for the performance prediction of software applications during the architecting phase was suggested. This method presumes that an application can be subdivided into two parts: variable and stable (application and VSP). The method includes an analytical part for the explicit description of the execution architecture and a statistical part for the performance prediction. The execution architecture is described in terms of performance-relevant input/diversity parameters and the number of performance-relevant calls to the underlying VSP. It is used to determine the signature. Performance measurements, collected during the execution of existing applications, together with the signature can be used for calibrating the predictor. For a new application, a model of the execution architecture is constructed in order to obtain its signature. This signature is taken as an input for the predictor to get the performance estimate for the new application.
Criteria for choosing the abstraction level of the VSP were suggested.
A simple case study, performed for Medical Imaging software, resulted in a relative prediction error below 35%. This means that the level of prediction accuracy is considerably high with respect to the requirements given in Section III.
However, there is still not enough experimental evidence to ensure that the method will work on a broader range of software applications. Also, the predictor's reliability with respect to outliers was not checked because of the lack of data.
The future work on the APPEAR method will be performed in the following directions:
• Identification of the reasons for the varying prediction accuracy (possible reasons are given in Section V).
• Building a model of the execution architecture of the applications to validate the structural part of the method and to automate the signature extraction process.
• Tackling the compositionality problem in order to be able to derive the performance of a component-based architecture from the performance of its components. This is, however, not a trivial task because of the involvement of statistics.
• Construction of execution architecture models and prediction models for more use cases of the Medical Imaging application.
• Construction of the execution architecture models and prediction model for an application in the Consumer Electronics domain (TV software).

VII. ACKNOWLEDGEMENTS

We thank Wim van der Linden for providing us with all necessary information on statistical methods and tools. We want to express our gratitude to STW, which funded the presented work within the AIMES project (EWI.4877).

REFERENCES

[1] F. Aquilani, S. Balsamo and P. Inverardi, "An Approach to Performance Evaluation of Software Architectures", Research Report CS-2000-3, Dipartimento di Informatica, Universita Ca' Foscari di Venezia, Italy, March 2000.
[2] G. Bontempi, "Local Learning Techniques for Modeling, Prediction and Control", PhD thesis, IRIDIA, Universite Libre de Bruxelles, Belgium, 1999.
[3] J.H. Friedman, "Multivariate Adaptive Regression Splines", Tech. Report 102, Department of Statistics, Stanford University, USA, August 1990.
[4] K. Goseva-Popstojanova and K.S. Trivedi, "Architecture Based Approach to Reliability Assessment of Software Systems", Performance Evaluation, Vol. 45/2-3, June 2001.
[5] C.E. Hrischuk, C.M. Woodside and J.A. Rolia, "Trace Based Load Characterization for Generating Software Performance Models", IEEE Trans. on Software Engineering, Vol. 25, No. 1, pp. 122-135, Jan. 1999.
[6] P. King and R. Pooley, "Derivation of Petri Net Performance Models from UML Specifications of Communications Software", Proc. 11th Int. Conf. on Tools and Techniques for Computer Performance Evaluation (TOOLS), Schaumburg, Illinois, USA, 2000.
[7] C. Smith and L. Williams, "Performance Solutions: A Practical Guide to Creating Responsive, Scalable Software", Addison-Wesley, 2001.
[8] B. Spitznagel and D. Garlan, "Architecture-based Performance Analysis", in Yi Deng and Mark Gerken (eds.), Proc. 10th International Conference on Software Engineering and Knowledge Engineering, pp. 146-151, Knowledge Systems Institute, 1998.