This document discusses unit testing of the Core Flight Software (CFS) product line developed by NASA. It examines how the CFS architecture facilitates or impedes unit testing and how the architecture of test code can be defined based on the system architecture. The CFS uses a unit test architecture with mocks/stubs of dependent modules to enable isolated testing. It finds that defining abstract interfaces and exposing internal details controlled via architectural rules improves testability, and that complete dependency graphs do not inherently imply poor testability.
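The stub-based isolation described above can be sketched in miniature. This is a hypothetical example in Python for brevity (the CFS itself is written in C), and none of the names below are actual CFS APIs:

```python
# A unit under test that depends on an external module.
class TelemetrySender:
    def __init__(self, bus):
        self.bus = bus  # dependency injected, so a stub can replace the real bus

    def send(self, value):
        if value < 0:
            return "REJECTED"
        self.bus.publish(value)
        return "SENT"

# Stub standing in for the real software bus; it records calls for inspection.
class StubBus:
    def __init__(self):
        self.published = []

    def publish(self, value):
        self.published.append(value)

# Isolated unit test: only TelemetrySender's logic is exercised.
bus = StubBus()
sender = TelemetrySender(bus)
assert sender.send(42) == "SENT"
assert sender.send(-1) == "REJECTED"
assert bus.published == [42]
```

Because the dependency is passed in rather than hard-wired, the test never touches the real bus, which is the essence of the mock/stub architecture the summary describes.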
Verifying Architectural Design Rules of a Flight Software Product Line (Dharmalingam Ganesan)
This document discusses verifying architectural design rules of the Core Flight Software (CFS) product line. It provides background on the CFS, a reusable flight software environment developed by NASA. The analysis used tools to check that the CFS implementation follows documented rules regarding dependencies, decomposition, redundancy, and preprocessor usage. It found some minor violations but concluded that the CFS team performs rigorous design and code reviews.
This document discusses model-based systems engineering (MBSE) and the use of system modeling languages. It motivates MBSE by describing how system models can integrate requirements, design, analysis and other engineering artifacts. It then provides an overview of the SysML modeling language and how it supports structural, behavioral, requirements and parametric modeling of systems. Finally, it describes how a system architecture model can act as an integrating framework to link various engineering analysis models across the lifecycle.
This document summarizes a project to improve ground system designs for NASA by incorporating human factors principles. It discusses the importance of considering ground crews, outlines pathfinder activities including an overview for design teams and working sessions, and provides examples of human-system integration challenges. The goal is to design systems that are safer and easier for ground crews to operate over 20+ years to reduce costs from mishaps and improve safety.
The document discusses the use of CMMI models in overseeing space flight software development. Specifically:
1) The Spacecraft Software Engineering Branch evaluated the CMMI models and determined CMMI-DEV was most applicable for overseeing software development and software oversight projects.
2) They achieved a CMMI Maturity Level 2 rating by selecting software development and oversight projects, including a software requirements review and system/software review.
3) NASA's surveillance strategies include insight, oversight, or a hybrid approach. Software for the Orion project uses a hybrid approach with pre-declared oversight due to software risks.
The document discusses techniques for achieving dependable software systems through fault tolerance. It describes hardware-based triple modular redundancy and software-based N-version programming. It also discusses challenges with achieving true design diversity and problems that can arise from specification errors. The document concludes with guidelines for dependable programming practices such as limiting information visibility, checking inputs, using exception handling, avoiding error-prone constructs, including restart capabilities and timeouts.
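The software side of that redundancy can be illustrated with a majority voter, as used to combine the outputs of redundant versions. This is a hypothetical sketch, not code from the document:

```python
from collections import Counter

def majority_vote(results):
    """Return the value produced by a strict majority of the redundant
    versions; raise if no majority exists (N-version voting sketch)."""
    value, count = Counter(results).most_common(1)[0]
    if count <= len(results) // 2:
        raise ValueError("no majority among versions")
    return value

# Three independently developed versions compute the same function;
# one faulty version is outvoted by the other two.
assert majority_vote([7, 7, 9]) == 7
```

Note that voting only masks faults when the versions fail independently, which is exactly the design-diversity assumption the document flags as hard to achieve in practice.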
1) The document discusses software testing strategies including improving test design, automation, understanding development processes, and leveraging APIs.
2) It also discusses tactics for team development including understanding customer pains, resolving issues, and contributing to forums and documentation.
3) Finally, it outlines processes for pre-integration testing including expectations for success/failure emails and general product qualification testing.
SolFa is a model-driven development tool for mass-producing software with flexibility. It uses generative techniques like model-to-model, model-to-text, and text-to-model transformations. SolFa is built on Eclipse and allows defining factory components that encapsulate generation logic and data. Factory components can be organized into portfolios and exchanged between projects. SolFa demonstrates how to define a document domain-specific language family using factory components.
Eswaranand is a software test lead with over 8 years of experience defining and executing functional, performance, and automation test strategies across various domains. He holds a bachelor's degree in information technology and an MBA in human resources. He currently works as a software test advisor/lead/consultant at Dell, where his responsibilities include requirement analysis, test case preparation, automation script creation, and managing a testing team. He has extensive experience testing applications for healthcare, finance, e-commerce, and other domains.
This document discusses software process models and activities. It introduces three generic process models: waterfall, evolutionary development, and component-based development. It also describes the Rational Unified Process model and the spiral model. The key activities discussed are requirements engineering, software design, implementation, validation, and evolution. Iterative development approaches like incremental delivery and extreme programming are also covered.
A framework for distributed control and building performance simulation (Daniele Gianni)
Presentation delivered at the 3rd IEEE Track on Collaborative Modeling & Simulation - CoMetS'12. Please see http://www.sel.uniroma2.it/comets12/ for further details.
The document discusses Nicolas De Loof's background and experience in the Java and open source software communities. It then provides an overview of what a software factory is and lists its typical components. The document discusses choosing Git and Maven as version control and build tools respectively, and Jenkins as the automation and continuous integration tool. It then discusses using a platform-as-a-service model rather than on-premises containers to host the software factory components.
Continuous Integration to Shift Left Testing Across the Enterprise Stack (DevOps.com)
With the move to agile DevOps, automated testing is a critical function to ensure high quality in continuous deployments.
In this session, learn how to start testing earlier and often to ensure quality in your codebase. Join Architect Suman Gopinath and Offering Manager Korinne Alpers to talk about shifting-left in the development cycle, starting with unit testing as a key aspect of continuous integration. You'll view a demo of the latest zUnit unit testing tooling for CICS Db2 applications, as well as hear best practices and tales from the testing trenches.
HTAF 2.0 - A hybrid test automation framework (Mindtree Ltd.)
HTAF is a test automation framework developed by Mindtree that bridges the gap between domain experts who lack automation expertise and automation experts who lack functional knowledge. It is a customizable framework built on HP QuickTest Professional that reduces the test automation lifecycle by accelerating script development, execution, and management through an intuitive interface and support for both data-driven and keyword-driven methodologies. Spreadsheet-driven tests can be created and executed by QA staff with minimal programming knowledge.
The document discusses the Ares I-X test flight conducted by NASA in October 2009. It provides background on the objectives and significance of the flight test. It highlights that healthy tension between the flight test's Mission Management Office and Technical Authorities was important to the flight test's success. It then discusses NASA's governance model and how technical authority is implemented. Specifically, it notes the Chief Engineer and Chief of Safety and Mission Assurance represented their communities and helped achieve an appropriate balance between constraints and risk. Information flow between groups was a key factor for the multi-center team's cooperation and success.
This document discusses design for testability. It defines testability as having controllability and visibility. Controllability is the ability to apply inputs and place a system in specified states, while visibility is the ability to observe states and outputs. The document outlines why testability is important for improving quality and reducing costs. It describes how to achieve testability through good design practices like abstraction, encapsulation, and avoiding interdependence. Testability features like logging and assertions are also recommended. Development techniques like defensive programming, design by contract, and test-driven development can further enhance testability.
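The contract and assertion techniques mentioned above can be shown in a few lines. This is an illustrative sketch around a hypothetical `set_speed` operation, not an API from the document:

```python
def set_speed(speed_kmh):
    """Design-by-contract sketch: the precondition rejects bad inputs at
    the boundary, and the postcondition makes the resulting state visible
    and checkable instead of failing silently later."""
    assert 0 <= speed_kmh <= 300, "precondition: speed out of range"
    state = {"speed": speed_kmh}        # controllability: one entry point sets state
    assert state["speed"] == speed_kmh  # visibility: the new state is observable
    return state

assert set_speed(120)["speed"] == 120
```

Assertions like these double as testability features: a test can both drive the system into a state (controllability) and observe whether the contract held (visibility).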
Blue Monitor Systems is an employee-owned company dedicated to delivering high-quality creative, technical, and scientific services worldwide. The company encourages employees to think like owners and contribute to social well-being. Blue Monitor uses an iterative "Zero Time" method combining Agile and traditional approaches for medium and large projects. This includes continuous integration, test-driven development, and matrix project teams with specialists in design, engineering, testing, and operations.
This document provides guidelines for creating a folder structure for testing an e-commerce portal called IBEEeCom. It outlines creating a main project folder with subfolders for modules like Admin and User. It also describes subfolders within the Admin folder for requirements, reports, change requests, test cases, defects and automation. The automation subfolder further divides test automation documents between QuickTest Professional and LoadRunner. Testers are instructed to first create the project folder structure locally and then check it into version control.
This document summarizes a keynote presentation about IBM's quality management products and strategies. The presentation discusses real challenges faced by development teams, real results achieved by IBM products in 2008, and real insights into improving quality management. It provides an overview of IBM's quality management portfolio and roadmap for continued enhancements.
Basics to have competitive advantage of S/W in global Market (Young On Kim)
A brief introduction to S/W architecture, process, and reuse, which may not have delivered the promised benefits in the Korean market. I presented it in 2007, revisited it today, and felt I should share this presentation with you.
Gen sessionthomas.riskofsystemproblemfinal23feb12 (NASAPMC)
1) Three roles are required for any system program to succeed: design and integration, management, and component build. No single individual can fulfill all three roles; a multidisciplinary team is required.
2) Controlling program risk, especially the risk of system problems, is a key responsibility of both the management team and the systems engineering and integration team.
3) The systems engineering and integration team works to ensure the whole system behaves as expected by properly engineering interfaces between subsystems.
The document discusses object-oriented programming and its advantages over procedural programming. It introduces key concepts of OOP like encapsulation, data hiding, and modeling real-world objects. Object-oriented programming aims to make software easier to develop and maintain by closely modeling the problem domain. This approach can reduce costs and errors while improving readability, reusability and flexibility of code. The document uses examples to illustrate object-oriented concepts and how they are implemented in C++.
The document proposes a new approach to testing complex ERP implementations using services. Key points:
- Traditional ERP testing is repetitive and resource-intensive due to the many business processes, rules, and scenarios.
- The approach records a base test scenario that executes the main business process. It then automatically injects different business attribute values through services to test different scenarios without re-recording each test.
- The architecture supports this by allowing data to enter the system through a service interface rather than just the GUI. Recorded scenarios and test data are stored to enable automated testing of various scenarios from the base process.
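The injection idea above can be sketched in a few lines: one recorded flow is replayed with many attribute values supplied through a service call instead of being re-recorded through the GUI. All names here are hypothetical, with `order_service` standing in for the ERP's service interface:

```python
def order_service(order):
    """Stand-in for the ERP service interface (hypothetical)."""
    if order["quantity"] <= 0:
        return {"status": "error"}
    return {"status": "ok", "total": order["quantity"] * order["unit_price"]}

# One recorded base scenario exercising the main business process.
base_scenario = {"quantity": 1, "unit_price": 10.0}

# Variations derived from business rules, injected without re-recording:
# each run overlays different attribute values on the base scenario.
variations = [{"quantity": q} for q in (0, 5, 100)]
results = [order_service({**base_scenario, **v}) for v in variations]

assert results[0]["status"] == "error"
assert results[1] == {"status": "ok", "total": 50.0}
```

The cost of adding a new scenario drops to defining a new attribute overlay, which is the resource saving the approach claims over re-recording each GUI flow.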
DevOps for the Mainframe aims to leverage continuous integration, cloud technologies, and beyond to deliver z/OS applications. The document discusses how DevOps principles can help enable rapid evolution of deployed z/OS services by reducing risk, decreasing costs, and improving quality. It provides examples of how tools from IBM can help implement a continuous delivery pipeline for mainframe development and testing that incorporates automated testing, configuration, and deployment.
This document discusses complex system-level design integration and provides methods to address bottlenecks. It introduces the Reliable Integral SoC Methodology which uses IP abstraction models and specification sheets to capture system designs. This allows automatic integration and verification environment generation, reducing design time from 12-16 months to 6 months. FPGA emulation platforms and dual-ware technology are also discussed to help with system verification and software development.
The document describes creating a software development lifecycle (SDLC) using the waterfall model and data flow diagram principles, with the goal of optimizing the SDLC for measurement and analysis. It instructs taking the initial SDLC and adding phases/stages to reach a second level of productivity in analysis. Requirements include creating a workflow, adding assumptions, and structuring phases to optimize the SDLC.
Ui Modeling In Action With PMF, e4(XWT) And EGF (BENOIT_LANGLOIS)
The document discusses UI modeling using PMF and EGF. It introduces PMF as a platform-independent UI modeling tool and EGF as a model-to-text generation framework. It then summarizes how PMF models can be transformed to XWT user interfaces using patterns and factories in EGF. Key benefits highlighted include separating UI development roles, integrating with Eclipse tools, and enabling customizable UI generation.
R&M Technologies provides reliability, maintainability and logistics support analysis services. It developed RamLog software in 1992 to manage lifecycle logistics data. RamLog includes capabilities like FMECA, RCM analysis, maintenance task analysis, technical manual authoring, and a simulation edition to model system operations and support over the lifecycle. RamLog integrates with RAMLOG.NET for transactional database support.
Architecture Analysis of Systems based on Publish-Subscribe Systems (Dharmalingam Ganesan)
This document discusses analyzing systems based on the publisher-subscriber architectural style. It describes analyzing a NASA system called GMSEC that uses this style. Static analysis is used to understand how components connect and communicate. Dynamic analysis involves monitoring a running system to understand behavior, such as which subscribers receive messages from which publishers. This analysis uncovered a high-priority bug and provided valuable insights into GMSEC.
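A toy publish-subscribe bus with a built-in runtime trace illustrates the dynamic-analysis idea: the monitor records which subscriber received which message, mirroring the observation of a running system described above. All names are invented for illustration and are not GMSEC APIs:

```python
class Bus:
    def __init__(self):
        self.subscribers = {}
        self.trace = []  # dynamic analysis: observed (topic, subscriber) pairs

    def subscribe(self, topic, name, callback):
        self.subscribers.setdefault(topic, []).append((name, callback))

    def publish(self, topic, message):
        for name, callback in self.subscribers.get(topic, []):
            self.trace.append((topic, name))  # record delivery for later analysis
            callback(message)

bus = Bus()
received = []
bus.subscribe("telemetry", "logger", received.append)
bus.publish("telemetry", {"temp": 21})
assert received == [{"temp": 21}]
assert bus.trace == [("telemetry", "logger")]
```

Inspecting `trace` after a run answers exactly the question posed in the summary: which subscribers actually received messages from which publishers.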
The document discusses AF3, an open source tool framework for seamless model-based development. It provides specification languages, analyses like simulation and verification, and code generators. Several topics are covered, including model-based requirements analysis, automatic testcase generation using constraint logic programming, integrating model checking using temporal logic patterns, pervasive deployment and code synthesis from models, and multi-criteria synthesis for optimizing deployment to meet timing, energy efficiency and memory constraints. FPGA code generation from models is also mentioned.
This document summarizes an approach to automated testing of large, multi-language software systems using cloud computing. It discusses applying a lightweight, model-based test generation and execution approach to several NASA projects, including GMSEC, Core Flight Software, Space Network, and Mars Science Laboratory. Models of system APIs and interfaces are developed and used to automatically generate test cases, finding bugs. The approach has been successfully transferred to other teams and is being applied to additional complex NASA systems.
The document discusses automated test generation for flight software using model-based testing. It describes problems with current manual testing approaches and how model-based testing can generate test cases from models of the system behavior. The Operating System Abstraction Layer (OSAL) used in NASA flight software is presented as a case study. Models of OSAL file system APIs were created and test cases in C were automatically generated from the models to test OSAL functionality.
This document discusses the key success factors for automating the migration of large business applications from COBOL to Java. It describes Eranea's process for automatically transcoding COBOL code into semantically equivalent Java code using their NeaTranscoder tool. Testing ensures the legacy and new systems perform identically. A progressive migration allows switching components incrementally while maintaining full functionality. Automation, iso-functionality testing, and progressive migration are identified as the key success factors for large-scale automated migration projects.
The document describes the Singularity project, which aims to redesign operating system architectures and software stacks to improve dependability and trustworthiness. The key architectural features of Singularity systems are software-isolated processes (SIPs) for isolation, contract-based channels for communication between SIPs, and manifest-based programs (MBPs) for verification of system properties. SIPs provide lightweight process isolation through type safety instead of hardware protection. Communication between SIPs occurs via channels defined by message contracts. MBPs specify the code and behavior of processes.
Model-based Testing of a Software Bus - Applied on Core Flight Executive, by Dharmalingam Ganesan
This document discusses model-based testing of a software bus applied to the Core Flight Executive system. It describes traditional automated testing methods and their limitations. Model-based testing uses a model of the system under test to automatically generate test cases. The authors developed a model of the Core Flight Executive software bus in Spec Explorer to generate test cases covering behaviors like message creation, subscription, and sending. This allowed rigorous testing of the multi-tasking architecture.
Shweta Bijay has over 15 years of experience in software testing and development. She has worked on projects at F5 Networks, Microsoft, and Reliance Energy Ltd. Some of her roles included performing functional testing, writing test automation, designing and developing testing tools, and measuring impact of technologies like harmonics. She has expertise in languages like Python, C#, and tools like Perforce, Eclipse, SQL, and Agile methodologies like Scrum.
The document summarizes Dan Petrisko's summer internship validating chips at Intel. It discusses his work scripting tests to automate validation of System on Chips (SoCs), emulating processors to test for logical correctness, and testing chips post-silicon to ensure features work as expected and identify hardware bugs. The lessons learned are that flexibility in testing is important, documentation and code organization help testing, emulation is useful for testing on silicon and RTL, and post-silicon bugs are expensive and difficult to find.
PrimeSoft Solutions was contracted to develop a UMA Handset Simulator for a client. An 18-member team at PrimeSoft's offshore development center was created to work on the project. The team developed the simulator through requirements analysis, specifications, design, coding, testing, and delivering documentation. The UMA Simulator allows automated testing of UMA functionality and supports complex handover scenarios. PrimeSoft also supports long-term testing of the client's Multi Access Gateway product through manual and automation test plans.
The document proposes a host simulation approach to developing embedded applications on desktop computers. This allows leveraging high-performance processors and host resources to validate applications earlier without waiting for IP specifications or simulated platforms. Specifically, it involves virtualizing an entire system-on-chip on the host debugger by identifying a common layer above the target hardware and porting the software development environment, enabling execution of a full application using host peripherals. This reduces time to market by facilitating faster software development and validation on more powerful hardware before silicon samples are available.
Data driven automation testing of web applications using selenium, by anandseelan
This document discusses data driven automation testing of web applications using Selenium. It provides an overview of Selenium and some key considerations for choosing an automation testing tool. It then describes the typical components of a Selenium-based automation testing framework, including test scripts, reusable libraries, test suites, reports, and more. It discusses the advantages and limitations of the Selenium IDE and RC tools.
Tivoli Development Cloud Pennock Final Web, by Kennisportal
The document discusses IBM's Tivoli Development Cloud Initiative. It describes how IBM used cloud computing to improve its software development and testing environment. Previously, setting up test environments was slow and inefficient. IBM created a private cloud using virtualization that allows developers to quickly and easily provision virtual test environments on demand. This has significantly reduced costs and improved productivity by reducing environment setup times from weeks to hours. Key benefits included optimized resource utilization, reduced administration costs, and an ability to rapidly scale test capabilities up or down as needed.
Ming Liu has over 10 years of experience in embedded software development and testing using languages like C/C++, Python, and Shell scripts. He has worked as a Validation Engineer at Marvell Semiconductor testing WiFi chip features and bringing up new products. Prior to that, he was a Verification Engineer at Wind River testing their VxWorks operating system and a Software Designer at Nortel Networks developing features for their UMTS wireless system.
The document provides an overview of the Security Content Automation Protocol (SCAP), which is a set of standards for automating security compliance and vulnerability management. SCAP includes six specifications for representing system state, vulnerabilities, and security configurations. It allows organizations to standardize how they express security information and ensure interoperability of security tools. The specifications include languages like OVAL and XCCDF, enumerations like CVE and CCE, and scoring systems like CVSS. SCAP content combines elements from the specifications to assess security configuration compliance.
Vayavya Labs is a company that develops system-level design tools and provides embedded design services. It has created DDGEN, the world's first automated device driver generator, which can significantly reduce the cost and effort required for device driver development. DDGEN takes hardware specification files as input and generates fully functional device drivers and test code. It supports a range of device complexities and operating systems. Pilot results found that DDGEN cut the time and effort of driver development by a factor of roughly two to three.
This document provides an overview of how to use Team Foundation Server (TFS) to manage the development lifecycle of SharePoint solutions. It describes how developers can use TFS for source control, work item tracking, building and deploying solutions, running tests, and releasing to staging and production environments. Key aspects covered include integrating Visual Studio projects with TFS, running daily builds, testing using virtual machines, and deploying solutions using WSP packages.
Cynthia Everett is a software engineer with over 15 years of experience developing safety-critical software for NASA and Halliburton. She has extensive skills in languages like Java, C++, and Fortran and has experience in all phases of the software development lifecycle. At her previous role at United Space Alliance, she received awards for her work developing a Java-based spacecraft trajectory simulation and providing quality assurance support. She is currently a senior software developer at Halliburton where she maintains production enhancement software.
This document is a resume for Manu Vamadevan seeking an IT position dealing with cutting edge technologies. It summarizes his 6 years of experience in areas like automation, system integration, administration, security and monitoring. He has extensive experience designing and developing middleware applications in Java and customizing various open source tools. He also has strong skills in operations, automation, testing and process management.
A Software Factory Integrating Rational Team Concert and WebSphere tools, by Prolifics
Speakers: Greg Hodgkinson, Prolifics; Andre Tost, IBM
Description: Getting any software development team to effectively scale to meet the needs of a large integration project is harder than it sounds. For a large Automotive Retailer based in Florida, this is exactly what they needed to do. They needed a large amount of integration to be built between their brand-new Point of Sale system and their new SAP back-end. In this session, you will hear how tools such as Rational Software Architect and WebSphere Message Broker Toolkit were integrated with a Rational Team Concert-based development environment to set up a highly efficient software factory employing techniques such as Model-Driven Development and Continuous Integration to help this retailer keep their customers' wheels on the road.
Similar to Analysis of Testability of a Flight Software Product Line (20)
The document discusses serialization and deserialization security vulnerabilities. It provides an overview of serialization and deserialization, how attackers can exploit them, and some best practices to prevent exploits. Specifically, it demonstrates how the .NET BinaryFormatter can be insecure by allowing arbitrary code execution through deserialization of untrusted data streams containing unexpected types or callbacks. The presentation recommends avoiding BinaryFormatter and validating serialized data to prevent attacks.
This document discusses reverse architecting software by extracting relationships from source code using relation algebra. It describes extracting relations from code without compiling or linking, storing them in a database, and applying relation algebra operations like join and inverse to abstract the relations. The abstracted relations can then be visualized as graphs or tables to understand aspects of the software architecture like inter-task communication and message queue usage. Reverse architecting is challenging but relation algebra can help reformulate many analysis questions and filter irrelevant data to meet analysis goals.
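The core idea (treating extracted code relations as sets of pairs and abstracting them with operations such as inverse and composition) can be sketched in a few lines of Python; the relation names and code entities below are hypothetical, not taken from the document:

```python
def inverse(r):
    """Swap source and target of every pair: 'defined-in' becomes 'defines'."""
    return {(b, a) for (a, b) in r}

def join(r, s):
    """Relational composition: (a, c) whenever some b links a->b in r and b->c in s."""
    return {(a, c) for (a, b) in r for (b2, c) in s if b == b2}

# Hypothetical relations extracted from source code without compiling it.
defined_in = {("send_msg", "task_A"), ("recv_msg", "task_B")}   # function -> task
uses_queue = {("send_msg", "q1"), ("recv_msg", "q1")}           # function -> queue

# Lift "function uses queue" to the architectural level "task uses queue".
task_uses_queue = join(inverse(defined_in), uses_queue)
print(task_uses_queue)  # {('task_A', 'q1'), ('task_B', 'q1')}
```

The same two operators, applied repeatedly, answer questions like "which tasks communicate over which message queues" directly from the extracted relation database.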
The document summarizes how predictable random number generators like rand() can be exploited to identify cryptographic keys. It shows that rand() has a predictable behavior based on its seed value. An attacker who knows the time of key generation can initialize rand() with seeds from that time interval and generate a small list of potential keys that need to be tried. As a solution, it recommends using the more secure random number generator from /dev/urandom which is less predictable.
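The brute-force idea can be sketched with a toy generator; glibc's real rand() differs in detail, and the constants, timestamp, and search window below are purely illustrative:

```python
def lcg_keystream(seed, nbytes):
    """Tiny linear congruential generator; constants are illustrative only."""
    state = seed
    out = []
    for _ in range(nbytes):
        state = (1103515245 * state + 12345) % (2 ** 31)
        out.append((state >> 16) & 0xFF)
    return bytes(out)

# Victim generates a "key" seeded with the current Unix time (hypothetical value).
true_seed = 1_700_000_123
key = lcg_keystream(true_seed, 16)

# Attacker knows the key was generated within roughly a one-hour window,
# so only a few thousand candidate seeds need to be tried.
recovered = None
for seed in range(true_seed - 1800, true_seed + 1800):
    if lcg_keystream(seed, 16) == key:
        recovered = seed
        break
print(recovered == true_seed)  # True
```

A generator seeded from /dev/urandom has no such small candidate space, which is why the document recommends it.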
We study the behavior of the RSA trapdoor function by repeatedly encrypting the ciphertext sent over the public channel. We discuss the problem of finding a cycle in order to reverse the plaintext from the given ciphertext. Simple demos and algorithms/python programs are also presented. While the attack is not necessarily practical, it is educational to learn how the RSA trapdoor function behaves.
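A minimal sketch of the cycling attack, using deliberately tiny toy parameters rather than anything from the slides: because x -> x^e mod n is a bijection when gcd(e, phi) = 1, iterating encryption starting from the ciphertext must eventually return to it, and the value seen one step earlier is the plaintext.

```python
n = 187            # 11 * 17, far too small for real use
e = 7              # public exponent; gcd(7, phi) = gcd(7, 160) = 1
m = 42             # secret plaintext
c = pow(m, e, n)   # ciphertext observed on the public channel

# Repeatedly encrypt the ciphertext until the cycle closes back at c;
# the predecessor of c in the cycle is the unique x with x^e = c, i.e. m.
prev, cur = c, pow(c, e, n)
while cur != c:
    prev, cur = cur, pow(cur, e, n)
print(prev)  # 42, the recovered plaintext
```

For real key sizes the cycle length is astronomically large, which is why the attack is educational rather than practical.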
We look into the nitty-gritty details of the RSA key generation algorithm. We study how RSA can be exploited when the public exponent e is not chosen carefully. We examine why many digital certificates use e=65537. We also experiment with Hastad's broadcast attack for short RSA exponents in particular.
We study the internal structure of the SRP key exchange protocol and experiment with it. SRP establishes a shared encryption key between communicating parties using passwords that were shared out-of-band. We perform basic cryptanalysis of SRP using open-source implementations. We present a demo of how SRP was compromised due to an implementation bug, allowing the attacker to login without the password. The author of the Go-SRP library promptly fixed the issue on the very same day we reported the vulnerability.
We allow Eve to modify DH parameters as well as public keys of Alice and Bob. This allows Eve to derive the secret key and break the DH crypto system. We demonstrate that the DH key exchange algorithm should not be used without digital signatures.
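A minimal sketch of one such tampering with toy numbers (several variants exist): if Eve forwards p itself in place of each party's public key, both sides derive a shared secret of 0, which Eve trivially knows.

```python
p, g = 37, 5                      # tiny illustrative group parameters
a, b = 7, 11                      # Alice's and Bob's private exponents

A = pow(g, a, p)                  # Alice's genuine public key
B = pow(g, b, p)                  # Bob's genuine public key

# Eve intercepts the unauthenticated exchange and forwards p instead of A and B.
alice_secret = pow(p, a, p)       # Alice computes p^a mod p = 0
bob_secret = pow(p, b, p)         # Bob computes p^b mod p = 0
print(alice_secret, bob_secret)   # 0 0: Eve knows the "shared" key
```

Signing the exchanged values prevents exactly this substitution, which is the document's conclusion.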
This was an invited talk at the Central Middle School, Maryland. Without going into a lot of math, I try to explain the fundamental key exchange problem. It was a blast. 8th graders enjoyed it as much as I enjoyed it.
Can we reveal the RSA private exponent d from its public key <e, n>? We study this question for two specific cases: e = 3 and e = 65537. Using demos, we verify that RSA reveals the most significant half of the private exponent d when the public exponent e is small. For example, for 2048-bit RSA, the most significant 1024 bits are revealed!
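The effect can be checked with toy primes (the slides use 2048-bit keys; the numbers below are purely illustrative). Since e*d = 1 + k*phi for some k < e, and phi is within about p + q of n, the public candidate (1 + k*n)//e lands within about p + q (roughly 2*sqrt(n)) of d, so its high-order half agrees with d's at realistic sizes. The comparison against d below is only to show how close the candidate gets; an attacker would simply try all e - 1 candidates.

```python
p, q, e = 59, 71, 3
n, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)                      # real private exponent (2707 here)

# Candidates (1 + k*n)//e for k = 1 .. e-1; one of them is within ~p+q of d.
best = min(((1 + k * n) // e for k in range(1, e)),
           key=lambda cand: abs(cand - d))
print(d, best, abs(best - d))            # 2707 2793 86: gap far below sqrt-scale of n
```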
Computing the Square Roots of Unity to break RSA using Quantum Algorithms, by Dharmalingam Ganesan
We study the problem of finding the square roots of unity in a finite group in order to factor composite numbers used in RSA. We implemented Peter Shor's algorithm to find the square roots of unity. Experimental results showed that finding the square roots of unity in a finite multiplicative group is "hard".
We experiment with Wiener's attack to break RSA when the secret exponent is short, meaning it is smaller than one quarter of the public modulus size. We discuss cryptanalysis details and present demos of the attack. Our very minor extension of Wiener's attack is also discussed.
With an RSA-2048 configuration whose private exponent d is only about 512 bits, the above attack breaks RSA in a few seconds.
This work uses continued fractions to derive the private key from the given public key. It turns out that one can recover the private exponent d because k/d is well approximated by the ratio e/n, both of which are public values.
In the default settings of standard RSA libraries, this attack and my minor extension are not relevant (to the best of our knowledge). However, if we configure our library to choose a very large public encryption exponent e, then our private decryption exponent d could be short enough to mount an attack.
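A compact sketch of the attack, following the standard textbook formulation rather than the authors' exact code: expand e/n as a continued fraction and test each convergent k/d as the candidate key. The instance below is a classic small example with a deliberately short d.

```python
from math import isqrt

def wiener_attack(e, n):
    """Search the continued-fraction convergents of e/n for the secret d."""
    # Continued-fraction expansion of e/n.
    cf, a, b = [], e, n
    while b:
        cf.append(a // b)
        a, b = b, a % b
    # Convergents h/k via the standard recurrence; h plays k and k plays d.
    h0, h1, k0, k1 = 0, 1, 1, 0
    for quot in cf:
        h0, h1 = h1, quot * h1 + h0
        k0, k1 = k1, quot * k1 + k0
        k_cand, d_cand = h1, k1          # candidate k, d with e*d = 1 + k*phi
        if k_cand == 0 or (e * d_cand - 1) % k_cand:
            continue
        phi = (e * d_cand - 1) // k_cand
        s = n - phi + 1                  # equals p + q when phi is correct
        disc = s * s - 4 * n
        if disc >= 0 and isqrt(disc) ** 2 == disc:
            return d_cand                # p, q = (s +/- sqrt(disc)) / 2
    return None

print(wiener_attack(17993, 90581))       # 5 (n = 239 * 379, d = 5 < n^(1/4)/3)
```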
An RSA private key is made of a few private variables. We analyze how these private variables are chained together. Further, we study if one of the private variables is leaked, can we derive the other private variables? Demos of the algorithms are also provided.
This document analyzes the security implications of sharing the same RSA modulus n between two users. It presents three algorithms that an attacker could use to break RSA encryption if the public keys for two users share the same n value. Algorithm 1 works if the public exponents are relatively prime. Algorithm 2 works for small public exponents by factoring n. Algorithm 3 directly factors n from the private exponent. The conclusion is that RSA is breakable if n is not unique per user.
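Algorithm 1 (the relatively-prime-exponents case) can be sketched with toy numbers: Bezout coefficients x, y with x*e1 + y*e2 = 1 let the attacker combine the two ciphertexts into c1^x * c2^y = m mod n without any private key.

```python
n = 3233                      # 61 * 53, illustrative only
e1, e2 = 17, 7                # gcd(e1, e2) = 1
m = 65                        # the secret message, coprime to n
c1, c2 = pow(m, e1, n), pow(m, e2, n)

def egcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y = g."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

g, x, y = egcd(e1, e2)        # here x = -2, y = 5: -2*17 + 5*7 = 1
assert g == 1
# A negative exponent means using the modular inverse of the ciphertext
# (three-argument pow with exponent -1 needs Python 3.8+).
t1 = pow(c1, x, n) if x >= 0 else pow(pow(c1, -1, n), -x, n)
t2 = pow(c2, y, n) if y >= 0 else pow(pow(c2, -1, n), -y, n)
print((t1 * t2) % n)          # 65: m recovered from public data alone
```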
The slides demonstrate how to reverse the plaintext from the RSA encrypted ciphertext using an oracle that answers the question: is the last bit of the message 0 or 1?
This document describes an RSA two-person game designed to demonstrate how an adversary could exploit the homomorphic property of raw RSA encryption to break the system. It involves a challenger generating an RSA public/private key pair and encrypting a secret message. The adversary is able to obtain encryptions of arbitrary messages and uses the homomorphic property that the product of ciphertexts corresponds to the product of plaintexts to deduce the secret. Through a series of chosen plaintext/ciphertext queries, the adversary is able to compute the secret plaintext and win the game. The goal is to understand the vulnerabilities in raw RSA and how padding can strengthen the system.
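A minimal sketch of the game with toy numbers (the key pair below is a common textbook example, not the document's): the adversary blinds the challenge ciphertext with s^e, has the blinded value decrypted since it differs from the challenge, then strips the blinding factor.

```python
n, e, d = 3233, 17, 2753      # tiny illustrative key pair (n = 61 * 53)
secret = 1234
c = pow(secret, e, n)         # challenge ciphertext

# Adversary picks a blinding factor s and submits c * s^e, which the
# challenger decrypts because it is not the challenge ciphertext itself.
s = 7
blinded = (c * pow(s, e, n)) % n
answer = pow(blinded, d, n)   # challenger returns secret * s mod n
recovered = (answer * pow(s, -1, n)) % n
print(recovered)              # 1234: the homomorphic property leaks the secret
```

Randomized padding such as OAEP destroys this multiplicative structure, which is the strengthening the document alludes to.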
The slides demonstrate how to break RSA when used incorrectly without integrity checks. The man-in-the-middle is allowed to edit the RSA public exponent e in such a way that the Extended Euclidean Algorithm can be employed to reconstruct the plaintexts from the given ciphertexts.
Slides demonstrate how to break RSA when no padding is applied. I replicated the meet-in-the-middle attack discussed in the existing Crypto literature.
zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application..., by Alex Pruden
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure. In this work we present LatticeFold, the first lattice-based folding protocol based on the Module SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme. The difficulty is ensuring that extracted witnesses are low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low norm no matter how many rounds of folding are used. Our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing post-quantum security.
Paper Link: https://eprint.iacr.org/2024/257
LF Energy Webinar: Carbon Data Specifications: Mechanisms to Improve Data Acc..., by DanBrown980551
This LF Energy webinar took place June 20, 2024. It featured:
-Alex Thornton, LF Energy
-Hallie Cramer, Google
-Daniel Roesler, UtilityAPI
-Henry Richardson, WattTime
In response to the urgency and scale required to effectively address climate change, open source solutions offer significant potential for driving innovation and progress. Currently, there is a growing demand for standardization and interoperability in energy data and modeling. Open source standards and specifications within the energy sector can also alleviate challenges associated with data fragmentation, transparency, and accessibility. At the same time, it is crucial to consider privacy and security concerns throughout the development of open source platforms.
This webinar will delve into the motivations behind establishing LF Energy’s Carbon Data Specification Consortium. It will provide an overview of the draft specifications and the ongoing progress made by the respective working groups.
Three primary specifications will be discussed:
-Discovery and client registration, emphasizing transparent processes and secure and private access
-Customer data, centering around customer tariffs, bills, energy usage, and full consumption disclosure
-Power systems data, focusing on grid data, inclusive of transmission and distribution networks, generation, intergrid power flows, and market settlement data
What is an RPA CoE? Session 2 – CoE Roles, by DianaGray10
In this session, we will review the players involved in the CoE and how each role impacts opportunities.
Topics covered:
• What roles are essential?
• What place in the automation journey does each role play?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect Anika Systems
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/temporal-event-neural-networks-a-more-efficient-alternative-to-the-transformer-a-presentation-from-brainchip/
Chris Jones, Director of Product Management at BrainChip , presents the “Temporal Event Neural Networks: A More Efficient Alternative to the Transformer” tutorial at the May 2024 Embedded Vision Summit.
The expansion of AI services necessitates enhanced computational capabilities on edge devices. Temporal Event Neural Networks (TENNs), developed by BrainChip, represent a novel and highly efficient state-space network. TENNs demonstrate exceptional proficiency in handling multi-dimensional streaming data, facilitating advancements in object detection, action recognition, speech enhancement and language model/sequence generation. Through the utilization of polynomial-based continuous convolutions, TENNs streamline models, expedite training processes and significantly diminish memory requirements, achieving notable reductions of up to 50x in parameters and 5,000x in energy consumption compared to prevailing methodologies like transformers.
Integration with BrainChip’s Akida neuromorphic hardware IP further enhances TENNs’ capabilities, enabling the realization of highly capable, portable and passively cooled edge devices. This presentation delves into the technical innovations underlying TENNs, presents real-world benchmarks, and elucidates how this cutting-edge approach is positioned to revolutionize edge AI across diverse applications.
"What does it really mean for your system to be available, or how to define w...", by Fwdays
We will talk about system monitoring from a few different angles. We will start by covering the basics, then discuss SLOs, how to define them, and why understanding the business well is crucial for success in this exercise.
High performance Serverless Java on AWS - GoTo Amsterdam 2024, by Vadym Kazulkin
Java has been one of the most popular programming languages for many years, but it has had a hard time in the Serverless community. Java is known for its high cold-start times and high memory footprint compared to other programming languages like Node.js and Python. In this talk I'll look at the general best practices and techniques we can use to decrease memory consumption and cold-start times for Java Serverless development on AWS, including GraalVM (Native Image) and AWS's own offering SnapStart, based on Firecracker microVM snapshot-and-restore and CRaC (Coordinated Restore at Checkpoint) runtime hooks. I'll also provide a lot of benchmarking on Lambda functions, trying out various deployment package sizes, Lambda memory settings, Java compilation options, and HTTP (a)synchronous clients, and measure their impact on cold and warm start times.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdf, by Chart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
Discover top-tier mobile app development services, offering innovative solutions for iOS and Android. Enhance your business with custom, user-friendly mobile applications.
Dandelion Hashtable: beyond billion requests per second on a commodity server, by Antonios Katsarakis
This slide deck presents DLHT, a concurrent in-memory hashtable. Despite efforts to optimize hashtables, that go as far as sacrificing core functionality, state-of-the-art designs still incur multiple memory accesses per request and block request processing in three cases. First, most hashtables block while waiting for data to be retrieved from memory. Second, open-addressing designs, which represent the current state-of-the-art, either cannot free index slots on deletes or must block all requests to do so. Third, index resizes block every request until all objects are copied to the new index. Defying folklore wisdom, DLHT forgoes open-addressing and adopts a fully-featured and memory-aware closed-addressing design based on bounded cache-line-chaining. This design (1) offers lock-free index operations and deletes that free slots instantly, (2) completes most requests with a single memory access, (3) utilizes software prefetching to hide memory latencies, and (4) employs a novel non-blocking and parallel resizing. In a commodity server and a memory-resident workload, DLHT surpasses 1.6B requests per second and provides 3.5x (12x) the throughput of the state-of-the-art closed-addressing (open-addressing) resizable hashtable on Gets (Deletes).
Conversational agents, or chatbots, are increasingly used to access all sorts of services using natural language. While open-domain chatbots - like ChatGPT - can converse on any topic, task-oriented chatbots - the focus of this paper - are designed for specific tasks, like booking a flight, obtaining customer support, or setting an appointment. Like any other software, task-oriented chatbots need to be properly tested, usually by defining and executing test scenarios (i.e., sequences of user-chatbot interactions). However, there is currently a lack of methods to quantify the completeness and strength of such test scenarios, which can lead to low-quality tests, and hence to buggy chatbots.
To fill this gap, we propose adapting mutation testing (MuT) for task-oriented chatbots. To this end, we introduce a set of mutation operators that emulate faults in chatbot designs, an architecture that enables MuT on chatbots built using heterogeneous technologies, and a practical realisation as an Eclipse plugin. Moreover, we evaluate the applicability, effectiveness and efficiency of our approach on open-source chatbots, with promising results.
"$10 thousand per minute of downtime: architecture, queues, streaming and fin...", by Fwdays
Direct losses from one minute of downtime are $5-$10 thousand. Reputation is priceless.
As part of the talk, we will consider the architectural strategies necessary for the development of highly loaded fintech solutions. We will focus on using queues and streaming to efficiently work and manage large amounts of data in real-time and to minimize latency.
We will focus special attention on the architectural patterns used in the design of the fintech system, microservices and event-driven architecture, which ensure scalability, fault tolerance, and consistency of the entire system.
Introduction of Cybersecurity with OSS at Code Europe 2024, by Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
Northern Engraving | Modern Metal Trim, Nameplates and Appliance Panels, by Northern Engraving
What began over 115 years ago as a supplier of precision gauges to the automotive industry has evolved into being an industry leader in the manufacture of product branding, automotive cockpit trim and decorative appliance trim. Value-added services include in-house Design, Engineering, Program Management, Test Lab and Tool Shops.
What is an RPA CoE? Session 1 – CoE Vision, by DianaGray10
In the first session, we will review the organization's vision and how this has an impact on the COE Structure.
Topics covered:
• The role of a steering committee
• How do the organization’s priorities determine CoE Structure?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect Anika Systems
"Choosing proper type of scaling", Olena Syrota, by Fwdays
Imagine an IoT processing system that is already quite mature and production-ready and for which client coverage is growing and scaling and performance aspects are life and death questions. The system has Redis, MongoDB, and stream processing based on ksqldb. In this talk, firstly, we will analyze scaling approaches and then select the proper ones for our system.
inQuba Webinar: Mastering Customer Journey Management with Dr Graham Hill, by LizaNolte
This is the recording of the webinar 'Mastering Customer Journey Management with Dr. Graham Hill'. We hope you find it both insightful and enjoyable.
In this webinar, we explored essential aspects of Customer Journey Management and personalization. Here’s a summary of the key insights and topics discussed:
Key Takeaways:
Understanding the Customer Journey: Dr. Hill emphasized the importance of mapping and understanding the complete customer journey to identify touchpoints and opportunities for improvement.
Personalization Strategies: We discussed how to leverage data and insights to create personalized experiences that resonate with customers.
Technology Integration: Insights were shared on how inQuba’s advanced technology can streamline customer interactions and drive operational efficiency.