Testify AS provides model-based and machine-learning-based testing services. They presented a concept for smart test optimization using historical test execution data from the ecFeed platform. Their approach applies search-based algorithms such as NSGA-II to prioritize test cases by cost and effectiveness measures derived from that data, searching for an optimal test set. They implemented a prototype and plan to evaluate it on real customer data from ecFeed, and to explore additional machine learning techniques going forward.
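The cost/effectiveness trade-off behind such test optimization can be sketched as follows. This is an illustrative simplification, not Testify's actual implementation: a production approach would use a multi-objective search such as NSGA-II, while this greedy ratio heuristic (on hypothetical historical data) just shows the underlying idea.

```python
# Illustrative sketch (not Testify's actual algorithm): prioritize test cases
# by historical effectiveness per unit cost, then select within a budget.

def prioritize(tests):
    """Order tests by historical failure-detection count per unit cost."""
    return sorted(tests, key=lambda t: t["failures_found"] / t["cost"], reverse=True)

def select_within_budget(tests, budget):
    """Greedily pick prioritized tests until the cost budget is exhausted."""
    chosen, spent = [], 0.0
    for t in prioritize(tests):
        if spent + t["cost"] <= budget:
            chosen.append(t["name"])
            spent += t["cost"]
    return chosen

history = [  # hypothetical execution data
    {"name": "login_smoke", "cost": 1.0, "failures_found": 4},
    {"name": "full_checkout", "cost": 10.0, "failures_found": 6},
    {"name": "profile_edit", "cost": 2.0, "failures_found": 1},
]

print(select_within_budget(history, budget=5.0))  # → ['login_smoke', 'profile_edit']
```

A real multi-objective search would instead keep a Pareto front of (cost, effectiveness) trade-offs rather than collapsing them into one ratio.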
Effective Software Test Case Design Approach highlights typical wrong approaches to software test case design and presents an effective, collaborative methodology for designing test cases.
Using an example requirement/user story, this presentation highlights the interactions between the stakeholders (Product Owner, Developer, and Test Engineer) in developing the user story's acceptance criteria, details, test scope, and effective, consistent, and valid test cases.
In this article, we will discuss test cases and test scenarios: their definitions and the differences between the two. Both are core artifacts of software testing.
Defect prediction models help software quality assurance teams to effectively allocate their limited resources to the most defect-prone software modules. Model validation techniques, such as k-fold cross-validation, use historical defect data to estimate how well a model will perform in the future. However, little is known about how accurate the performance estimates of these model validation techniques tend to be. In this paper, we set out to investigate the bias and variance of model validation techniques in the domain of defect prediction. A preliminary analysis of 101 publicly available defect prediction datasets suggests that 77% of them are highly susceptible to producing unstable results. Hence, selecting an appropriate model validation technique is a critical experimental design choice. Based on an analysis of 256 studies in the defect prediction literature, we select the 12 most commonly adopted model validation techniques for evaluation. Through a case study of data from 18 systems that span both open-source and proprietary domains, we derive the following practical guidelines for future defect prediction studies: (1) the single holdout validation techniques should be avoided; and (2) researchers should use the out-of-sample bootstrap validation technique instead of holdout or the commonly-used cross-validation techniques.
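The out-of-sample bootstrap the paper recommends can be sketched in a few lines: train on a bootstrap sample drawn with replacement, evaluate on the rows that were not drawn, and average over many repetitions. The sketch below uses a toy majority-class predictor on hypothetical (features, label) rows purely to keep it self-contained; it is not the paper's experimental setup.

```python
import random

# Minimal sketch of out-of-sample bootstrap validation. The "model" is a toy
# majority-class predictor; any classifier could be slotted in.

def majority_class(train):
    """Predict the most frequent label in the training sample."""
    labels = [y for _, y in train]
    return max(set(labels), key=labels.count)

def out_of_sample_bootstrap(data, n_boot=100, seed=42):
    """Mean accuracy over n_boot bootstrap train/held-out splits."""
    rng = random.Random(seed)
    scores = []
    for _ in range(n_boot):
        idx = [rng.randrange(len(data)) for _ in range(len(data))]
        drawn = set(idx)
        train = [data[i] for i in idx]
        test = [row for i, row in enumerate(data) if i not in drawn]
        if not test:
            continue  # rare: every row was drawn into the bootstrap sample
        pred = majority_class(train)
        scores.append(sum(1 for _, y in test if y == pred) / len(test))
    return sum(scores) / len(scores)

# Hypothetical dataset: 8 "clean" modules, 4 "buggy" ones.
data = [((i,), "clean") for i in range(8)] + [((i,), "buggy") for i in range(4)]
print(round(out_of_sample_bootstrap(data), 3))
```

Because each evaluation uses rows the model never saw during training, the averaged estimate tends to have lower variance than a single holdout split, which is the paper's core recommendation.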
Customer Testing & Quality In Outsourced Development - A Story From An Insur... (TEST Huddle)
The insurance company decided to outsource most of its IT development and technical maintenance to suppliers. This imposed new requirements on testing and quality assurance in the company and raised many questions:
- How do we ensure that suppliers perform testing that delivers a solution which is not filled with defects?
- How are responsibilities for test activities divided between supplier and customer?
- How do we ensure effective testing without delays due to misunderstandings between supplier and tester?
- What are the test criteria for the supplier, and how should they report against them?
- How do we ensure that test material used by one supplier for development can be re-used by another supplier for maintenance testing in the future?
- How are defect handling, test reporting, etc. best organized between supplier and customer?
From this, the company created a new test model and test policy that include test and quality requirements for suppliers. The model has a defined test contract appendix which sets the requirements for suppliers, including that suppliers must in future use the company's own templates and uphold the company's test policy. This ensures that all suppliers follow the same guidelines, as many projects involve more than one supplier for application and technical development. The model places a high focus on test quality assurance, test reporting, and approval in each test phase according to the defined acceptance criteria.
In-house, the company focused on communicating with and educating anyone working as a tester within acceptance testing or as a test manager. This ensured they were adequately trained to perform high-quality test activities, and had the competencies to verify test quality from suppliers and to confirm that supplier deliveries met requirements. During implementation of the new model there was a specific focus on communication with, and approval by, management to ensure success.
With the rise of software systems ranging from personal assistants to the nation's critical facilities, software defects are an increasingly critical concern, as they can cost millions of dollars and impact human lives. Yet at the breakneck pace of rapid software development settings (such as the DevOps paradigm), today's Quality Assurance (QA) practices are still time-consuming. Continuous Analytics for Software Quality (i.e., defect prediction models) can help development teams prioritize their QA resources and chart better quality improvement plans that avoid the past pitfalls which lead to future software defects. Because specialists are needed to design and tune a large number of configurations (e.g., data quality, data preprocessing, classification techniques, interpretation techniques), a set of practical guidelines for developing accurate and interpretable defect models has not yet been well developed.
My research ultimately aims to (1) provide practical guidelines on how to develop accurate and interpretable defect models for non-specialists; (2) develop an intelligible defect model that offers suggestions on how to improve both software quality and processes; and (3) integrate defect models into real-world rapid development cycles such as CI/CD settings. My research project is expected to provide significant benefits, including reduced software defects and operating costs, while accelerating development productivity for building software systems in many of Australia's critical domains such as Smart Cities and e-Health.
Tim Koomen - Testing Package Solutions: Business as usual? - EuroSTAR 2010 (TEST Huddle)
EuroSTAR Software Testing Conference 2010 presentation on Testing Package Solutions: Business as usual? by Tim Koomen. See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
In today's increasingly digitalised world, software defects are enormously expensive. In 2018, the Consortium for IT Software Quality reported that software defects cost the global economy $2.84 trillion and affected more than 4 billion people. Software defects cost Australian businesses an average of A$29 billion per year. Worse, failure to eliminate defects in safety-critical systems can result in serious injury, threats to life, death, and disasters. Traditionally, software quality assurance activities like testing and code review are widely adopted to discover software defects in a software product. However, ultra-large-scale systems such as Google can consist of more than two billion lines of code, so exhaustively reviewing and testing every single line of code is not feasible with limited time and resources. This project aims to create technologies that enable software engineers to produce the highest quality software systems at the lowest operational cost. To achieve this, the project will build an end-to-end explainable AI platform to (1) understand the nature of critical defects; (2) predict and locate defects; (3) explain and visualise the characteristics of defects; (4) suggest potential patches to automatically fix defects; and (5) integrate the platform as a GitHub bot plugin.
Challenges in Assessing Technical Debt based on Dynamic Runtime Data (QAware GmbH)
SEAA/SEaTeD 2018, Prague (Czech Republic): Talk by Marcus Ciolkowski (Principal IT Consultant at QAware), Liliana Guzmán, Adam Trendowicz, Anna Maria Vollmer
Abstract:
Existing definitions and metrics of technical debt (TD) tend to focus on static properties of software artifacts, in particular on code measurement. Our experience from software renovation projects is that dynamic aspects (runtime indicators of TD) often play a major role. This talk summarizes a position paper for SEAA 2018, in which we present insights and solution ideas gained from numerous software renovation projects at QAware and from a series of interviews held as part of the ProDebt research project. We interviewed ten practitioners from two German software companies in order to understand current requirements and potential solutions to current problems regarding TD. Based on the interview results, we motivate the need for measuring dynamic indicators of TD from the practitioners' perspective, including current practical challenges. We found that the main challenges include a lack of production-ready measurement tools for runtime indicators, the definition of proper metrics and their thresholds, and the interpretation of these metrics in order to understand the actual debts and derive countermeasures. Measuring and interpreting dynamic indicators of TD is especially difficult for companies to implement because the related metrics are highly dependent on runtime context and thus difficult to generalize. We also sketch initial solution ideas by presenting examples of dynamic indicators for TD and outline directions for future work.
The Automation Firehose: Be Strategic & Tactical With Your Mobile & Web Testing (Perfecto by Perforce)
The widespread adoption of test automation has created many challenges — for everything from development lifecycle integration to scripting strategy.
One pitfall of automation is that teams often rush to automate everything they can. This is the automation firehose.
However, just because a scenario CAN be automated does not mean it SHOULD be automated. For scenarios that should be automated, teams must adopt implementation plans to ensure tests are reliable and deliver value.
Join this webinar led by Perfecto’s Chief Evangelist, Eran Kinsbruner, along with Thomas Haver, Manager of Automation & Delivery. In this session, the audience will:
-Understand which test scenarios to automate.
-Learn how to maximize the benefits of automation.
-Receive a checklist to determine automation feasibility and ROI.
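One item on such a checklist, automation ROI, can be estimated with a simple back-of-the-envelope model. The function and all numbers below are hypothetical illustrations, not Perfecto's actual checklist or formula.

```python
# Hypothetical back-of-the-envelope ROI model for deciding whether a test
# scenario is worth automating. All figures are illustrative.

def automation_roi(build_hours, maintain_hours_per_run, manual_hours_per_run, runs):
    """Hours saved relative to hours invested; > 1.0 suggests automating."""
    invested = build_hours + maintain_hours_per_run * runs
    saved = manual_hours_per_run * runs
    return saved / invested

# A stable regression scenario run with every nightly build for a year:
print(round(automation_roi(40, 0.1, 0.5, runs=250), 2))  # → 1.92 (worth automating)
# The same scenario run only a handful of times:
print(round(automation_roi(40, 0.1, 0.5, runs=3), 2))    # → 0.04 (keep it manual)
```

The point of the model matches the webinar's thesis: run frequency and maintenance cost, not automatability, decide whether a scenario belongs in the automation suite.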
Automated Software Testing Framework Training by Quontra Solutions
Learn through Experience -- We differentiate our training and development program by delivering Role-Based training instead of Product-based training. Ultimately, our goal is to deliver the best IT Training to our clients.
In this training, attendees learn:
Introduction to Automation
• What is automation
• Advantages & disadvantages of automation
• Different types of automation tools
• What to automate in projects
• When to start automation; scope for automation testing in projects
• About open-source automation tools
Introduction to Selenium
• What is Selenium
• Why Selenium
• Advantages and disadvantages of Selenium
Selenium components
• Selenium IDE
• Selenium RC
• Selenium WebDriver
• Selenium Grid
Selenium IDE
• Introduction to IDE
• IDE Installation
• Installation and uses of Firepath, Firebug & Debug bar
• Property & value of elements
• Selenium commands
• Assertions & Verification
• Running, pausing and debugging script
• Disadvantages of Selenium IDE
• How to convert Selenium IDE scripts into other languages
Locators
• Tools to identify elements/objects
• Firebug
• IE Developer tools
• Google Chrome Developer tools
• Locating elements by ID
• Locating elements by name
• Locating elements by link text
• Locating elements by XPath
• Locating elements by CSS
• Summary
Selenium RC
• What is Selenium RC
• Advantages of RC; architecture
• What is Eclipse/IntelliJ; configuring Selenium RC with Eclipse/IntelliJ
• Creating, running & debugging RC scripts
Java Concepts
• Introduction to OOPs concepts and Java
• Installation: Java, Eclipse/IntelliJ, Selenium, TestNG/JUnit
• Operators in Java
• Data types in Java
• Conditional statements in Java
• Looping statements in Java
• Output statements in Java
• Classes & Objects
• Collection Framework
• Regular Expressions
• Exception Handling
• Packages, Access Specifiers /Modifiers
• String handling
• Log4J for logging
Selenium Web Driver with Java
• Introduction to WebDriver
• Advantages
• Differences between RC and WebDriver
• Selenium WebDriver commands
• Generating scripts in Eclipse/IntelliJ; running test scripts
• Debugging Test Script
• Database Connections
• Assertions, validations
• Working with Excel
• Pass the data from Excel
• Working with multiple browsers
• Window Handling, Alert/confirm & Popup Handling
• Mouse events
• Wait mechanism
• Rich Web Handling: Calendar handling, auto-suggest, Ajax, browser forward/back navigation, keyboard events, certificate handling, event listeners
TestNG/JUnit Framework
• What is TestNG/JUnit
• Integrating Selenium scripts and running them from TestNG/JUnit
• Reporting results and analysis
• Running scripts from multiple programs
• Parallel runs using TestNG/JUnit
Automation Framework development in Agile testing
• Introduction to Frame W
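The course above is Java/TestNG-based; as a language-neutral sketch, the data-driven pattern it teaches ("pass the data from Excel", TestNG's data providers) looks like this in Python, with the spreadsheet replaced by an in-memory table and a hypothetical login function standing in for the system under test.

```python
import unittest

# Sketch of the data-driven testing pattern: each row of a data table drives
# one run of the same test logic. The Excel sheet is replaced by an in-memory
# list; login() is a hypothetical stand-in for the system under test.

LOGIN_CASES = [
    # (username, password, expected_ok)
    ("alice", "correct-horse", True),
    ("alice", "wrong", False),
    ("", "anything", False),
]

def login(username, password):
    """Hypothetical system under test."""
    return username == "alice" and password == "correct-horse"

class LoginDataDrivenTest(unittest.TestCase):
    def test_login_cases(self):
        # subTest plays the role of TestNG's @DataProvider: each row is
        # reported individually, and one failing row does not stop the rest.
        for username, password, expected in LOGIN_CASES:
            with self.subTest(username=username, password=password):
                self.assertEqual(login(username, password), expected)

runner = unittest.TextTestRunner(verbosity=2)
runner.run(unittest.defaultTestLoader.loadTestsFromTestCase(LoginDataDrivenTest))
```

In the Java/TestNG world the same shape is a `@DataProvider` method returning an `Object[][]` of rows, typically read from Excel via a library such as Apache POI.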
Best Way to Prepare for the ISTQB Technical Test Analyst (CTAL-TTA) Certifica... (Meghna Arora)
Start Here---> http://bit.ly/2WtdgTj <--- Get complete details on the CTAL-TTA exam guide to crack the Technical Test Analyst certification. You can find information on the CTAL-TTA tutorial, practice tests, books, study material, exam questions, and syllabus. Firm up your knowledge of the Technical Test Analyst role and get ready to crack the CTAL-TTA certification. Explore all CTAL-TTA exam details, including the number of questions, passing percentage, and time allotted to complete the test.
Matthias Ratert - Automated Test Case Prioritization - EuroSTAR 2012 (TEST Huddle)
EuroSTAR Software Testing Conference 2012 presentation on Automated Test Case Prioritization by Matthias Ratert. See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
Paper presentation for my MSc @ UOM. The paper was "Model-Driven Testing with UML 2.0" by Zhen Ru Dai, Fraunhofer FOKUS, Kaiserin-Augusta-Allee 31, 10589 Berlin, Germany (dai@fokus.fraunhofer.de).
Experiences from Sparebank 1 using synthetic test data from Tenor for API test automation.
Presented at a seminar arranged by the Norwegian Tax Administration (Skatteetaten), 27 April 2023.
Gamify Your Mind; The Secret Sauce to Delivering Success, Continuously Improv... (Shahin Sheidaei)
Games are powerful teaching tools, fostering hands-on engagement and fun. But they require careful consideration to succeed. Join me to explore factors in running and selecting games, ensuring they serve as effective teaching tools. Learn to maintain focus on learning objectives while playing, and how to measure the ROI of gaming in education. Discover strategies for pitching gaming to leadership. This session offers insights, tips, and examples for coaches, team leads, and enterprise leaders seeking to teach from simple to complex concepts.
Software Engineering, Software Consulting, Tech Lead. Spring Boot, Spring Cloud, Spring Core, Spring JDBC, Spring Security, Spring Transaction, Spring MVC, Log4j, REST/SOAP web services.
Understanding Globus Data Transfers with NetSage (Globus)
NetSage is an open, privacy-aware network measurement, analysis, and visualization service designed to help end-users visualize and reason about large data transfers. NetSage traditionally has used a combination of passive measurements, including SNMP and flow data, as well as active measurements, mainly perfSONAR, to provide longitudinal network performance data visualization. It has been deployed by dozens of networks worldwide, and is supported domestically by the Engagement and Performance Operations Center (EPOC), NSF #2328479. We have recently expanded the NetSage data sources to include logs for Globus data transfers, following the same privacy-preserving approach as for flow data. Using the logs for the Texas Advanced Computing Center (TACC) as an example, this talk will walk through several different example use cases that NetSage can answer, including: Who is using Globus to share data with my institution, and what kind of performance are they able to achieve? How many transfers has Globus supported for us? Which sites are we sharing the most data with, and how is that changing over time? How is my site using Globus to move data internally, and what kind of performance do we see for those transfers? What percentage of data transfers at my institution used Globus, and how did the overall data transfer performance compare to the Globus users?
Quarkus Hidden and Forbidden Extensions (Max Andersen)
Quarkus has a vast extension ecosystem and is known for its subsonic and subatomic feature set. Some of these features are not as well known, and some extensions are less talked about, but that does not make them less interesting; quite the opposite.
Join this talk for tips and tricks on using Quarkus and some of its lesser-known features, extensions, and development techniques.
Globus Connect Server Deep Dive - GlobusWorld 2024 (Globus)
We explore the Globus Connect Server (GCS) architecture and experiment with advanced configuration options and use cases. This content is targeted at system administrators who are familiar with GCS and currently operate—or are planning to operate—broader deployments at their institution.
SOCRadar Research Team: Latest Activities of IntelBroker (SOCRadar)
The European Union Agency for Law Enforcement Cooperation (Europol) has suffered an alleged data breach after a notorious threat actor claimed to have exfiltrated data from its systems. Infamous data leaker IntelBroker posted on the even more infamous BreachForums hacking forum, saying that Europol suffered a data breach this month.
The alleged breach affected Europol agencies CCSE, EC3, Europol Platform for Experts, Law Enforcement Forum, and SIRIUS. Infiltration of these entities can disrupt ongoing investigations and compromise sensitive intelligence shared among international law enforcement agencies.
However, this is neither the first nor the last activity of IntelBroker. We have compiled what has happened over the last few days. To track such hacker activity on dark web sources like hacker forums, private Telegram channels, and other hidden platforms where cyber threats often originate, you can check SOCRadar's Dark Web News.
Stay Informed on Threat Actors’ Activity on the Dark Web with SOCRadar!
Custom Healthcare Software for Managing Chronic Conditions and Remote Patient... (Mind IT Systems)
Healthcare providers often struggle with the complexities of chronic conditions and remote patient monitoring, as each patient requires personalized care and ongoing monitoring. Off-the-shelf solutions may not meet these diverse needs, leading to inefficiencies and gaps in care. It’s here, custom healthcare software offers a tailored solution, ensuring improved care and effectiveness.
Cyaniclab: Software Development Agency Portfolio (Cyanic Lab)
CyanicLab, an offshore custom software development company based in Sweden, India, and Finland, is your go-to partner for startup development and innovative web design solutions. Our expert team specializes in crafting cutting-edge software tailored to meet the unique needs of startups and established enterprises alike. From conceptualization to execution, we offer comprehensive services including web and mobile app development, UI/UX design, and ongoing software maintenance. Ready to elevate your business? Contact CyanicLab today and let us propel your vision to success with our top-notch IT solutions.
Enhancing Research Orchestration Capabilities at ORNL (Globus)
Cross-facility research orchestration comes with ever-changing constraints regarding the availability and suitability of various compute and data resources. In short, a flexible data and processing fabric is needed to enable the dynamic redirection of data and compute tasks throughout the lifecycle of an experiment. In this talk, we illustrate how we easily leveraged Globus services to instrument the ACE research testbed at the Oak Ridge Leadership Computing Facility with flexible data and task orchestration capabilities.
May Marketo Masterclass, London MUG, May 22 2024 (Adele Miller)
Can't make Adobe Summit in Vegas? No sweat because the EMEA Marketo Engage Champions are coming to London to share their Summit sessions, insights and more!
This is a MUG with a twist you don't want to miss.
Top Nidhi Software Solution Free Download (vrstrong314)
This presentation emphasizes the importance of data security and legal compliance for Nidhi companies in India. It highlights how online Nidhi software solutions, like Vector Nidhi Software, offer advanced features tailored to these needs. Key aspects include encryption, access controls, and audit trails to ensure data security. The software complies with regulatory guidelines from the MCA and RBI and adheres to Nidhi Rules, 2014. With customizable, user-friendly interfaces and real-time features, these Nidhi software solutions enhance efficiency, support growth, and provide exceptional member services. The presentation concludes with contact information for further inquiries.
Exploring Innovations in Data Repository Solutions - Insights from the U.S. G... (Globus)
The U.S. Geological Survey (USGS) has made substantial investments in meeting evolving scientific, technical, and policy driven demands on storing, managing, and delivering data. As these demands continue to grow in complexity and scale, the USGS must continue to explore innovative solutions to improve its management, curation, sharing, delivering, and preservation approaches for large-scale research data. Supporting these needs, the USGS has partnered with the University of Chicago-Globus to research and develop advanced repository components and workflows leveraging its current investment in Globus. The primary outcome of this partnership includes the development of a prototype enterprise repository, driven by USGS Data Release requirements, through exploration and implementation of the entire suite of the Globus platform offerings, including Globus Flow, Globus Auth, Globus Transfer, and Globus Search. This presentation will provide insights into this research partnership, introduce the unique requirements and challenges being addressed and provide relevant project progress.
Paketo Buildpacks: the best way to build OCI images? DevopsDa... (Anthony Dahanne)
Buildpacks have existed for more than 10 years! At first, they were used to detect and build an application before deploying it to certain PaaS platforms. With their latest generation, Cloud Native Buildpacks (a CNCF incubating project), we can now create Docker (OCI) images. Are they a good alternative to the Dockerfile? What are the Paketo buildpacks? Which communities support them, and how?
Come find out in this ignite session.
Developing Distributed High-performance Computing Capabilities of an Open Sci...Globus
COVID-19 had an unprecedented impact on scientific collaboration. The pandemic and its broad response from the scientific community has forged new relationships among public health practitioners, mathematical modelers, and scientific computing specialists, while revealing critical gaps in exploiting advanced computing systems to support urgent decision making. Informed by our team’s work in applying high-performance computing in support of public health decision makers during the COVID-19 pandemic, we present how Globus technologies are enabling the development of an open science platform for robust epidemic analysis, with the goal of collaborative, secure, distributed, on-demand, and fast time-to-solution analyses to support public health.
OpenFOAM solver for Helmholtz equation, helmholtzFoam / helmholtzBubbleFoamtakuyayamamoto1800
In this slide, we show the simulation example and the way to compile this solver.
In this solver, the Helmholtz equation can be solved by helmholtzFoam. Also, the Helmholtz equation with uniformly dispersed bubbles can be simulated by helmholtzBubbleFoam.
In software engineering, the right architecture is essential for robust, scalable platforms. Wix has undergone a pivotal shift from event sourcing to a CRUD-based model for its microservices. This talk will chart the course of this pivotal journey.
Event sourcing, which records state changes as immutable events, provided robust auditing and "time travel" debugging for Wix Stores' microservices. Despite its benefits, the complexity it introduced in state management slowed development. Wix responded by adopting a simpler, unified CRUD model. This talk will explore the challenges of event sourcing and the advantages of Wix's new "CRUD on steroids" approach, which streamlines API integration and domain event management while preserving data integrity and system resilience.
Participants will gain valuable insights into Wix's strategies for ensuring atomicity in database updates and event production, as well as caching, materialization, and performance optimization techniques within a distributed system.
Join us to discover how Wix has mastered the art of balancing simplicity and extensibility, and learn how the re-adoption of the modest CRUD has turbocharged their development velocity, resilience, and scalability in a high-growth environment.
Globus Compute wth IRI Workflows - GlobusWorld 2024Globus
As part of the DOE Integrated Research Infrastructure (IRI) program, NERSC at Lawrence Berkeley National Lab and ALCF at Argonne National Lab are working closely with General Atomics on accelerating the computing requirements of the DIII-D experiment. As part of the work the team is investigating ways to speedup the time to solution for many different parts of the DIII-D workflow including how they run jobs on HPC systems. One of these routes is looking at Globus Compute as a way to replace the current method for managing tasks and we describe a brief proof of concept showing how Globus Compute could help to schedule jobs and be a tool to connect compute at different facilities.
Globus Compute wth IRI Workflows - GlobusWorld 2024
Testify smart test optimization - ecFeed
1. Testify AS - 03.12.2021

Testify - ML enthusiasts

Shuai Wang - Senior Test Engineer
PhD from Simula Research Lab & UiO
Search-based software testing; model-based testing; machine learning-based testing.

Minh Nguyen - Principal Test Engineer
PhD from NTNU
Test automation; model-based testing; machine learning-based testing.
2. Model-based testing (MBT)

Workflow (diagram): Requirements are turned into a Model and a Test Oracle by manual specification. From the model, a Test Specification (abstract test cases) is derived automatically; from that, a Test Script (executable test cases) is generated automatically, using notations and formats such as UML, Java, and XML. The script is executed automatically against the SUT, and the results are evaluated automatically against the oracle.

(+) Automatic and systematic generation of test cases
(+) Adjustable test coverage level
(+) Traceability from requirement to test case
(+) Low test maintenance cost
(-) Complex modeling notations
(-) High license cost - tools are tightly integrated with comprehensive tool sets
(-) Often supports test generation only
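The MBT pipeline above can be sketched in code. Below is a minimal, hypothetical illustration (not Testify's tooling): an abstract, model-level test case is mapped to executable calls against a fake SUT, and the result is evaluated against the oracle. All names and the SUT API are made up for illustration.

```python
# Hypothetical illustration of MBT's generation/execution/evaluation steps.

abstract_tc = {
    "name": "login_then_logout",
    "steps": [("login", {"user": "alice"}), ("logout", {})],
    "oracle": {"final_state": "logged_out"},
}

class FakeSUT:
    """Stand-in system under test with a trivial state machine."""
    def __init__(self):
        self.state = "logged_out"
    def login(self, user):
        self.state = "logged_in"
    def logout(self):
        self.state = "logged_out"

def generate_and_run(tc, sut):
    """Automated generation + execution: map abstract steps to SUT calls,
    then automated evaluation against the oracle."""
    for step, args in tc["steps"]:
        getattr(sut, step)(**args)
    return sut.state == tc["oracle"]["final_state"]

print(generate_and_run(abstract_tc, FakeSUT()))  # True
```

The same abstract test case could be re-generated for a different target format (e.g., Java or XML), which is where the low-maintenance benefit of MBT comes from.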
4. Agenda

- Problem definition
- Concept and implementation
- Further work
5. Problem definition

Test case optimization:
- Input:
  - A set of test cases to be executed
  - Historical test case execution data
- Output: an optimal set of test cases, based on pre-defined cost and effectiveness measures for a given context.

That includes:
- Test case selection
- Test suite minimization
- Test case prioritization

Example (diagram): after a set of changes, the full suite TC1-TC15 is reduced to the optimal set {TC1, TC5, TC7, TC10, TC13, TC12, TC15, TC2}, and further optimization yields the optimal and ordered set TC5, TC1, TC7, TC12, TC2.
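As a toy illustration of the prioritization output, here is a simple baseline (not the search-based technique this deck presents): order an already-selected set by historical fail rate, breaking ties by execution time. The fail rates and times below are invented, chosen so the result matches the slide's ordered set.

```python
# Illustrative baseline prioritization from historical execution data.
# All numbers are hypothetical.

def prioritize(test_cases, history):
    """Order test cases by historical fail rate (highest first),
    breaking ties by shorter execution time."""
    return sorted(
        test_cases,
        key=lambda tc: (-history[tc]["fail_rate"], history[tc]["exec_time"]),
    )

history = {
    "TC1":  {"fail_rate": 0.40, "exec_time": 12.0},
    "TC2":  {"fail_rate": 0.05, "exec_time": 3.0},
    "TC5":  {"fail_rate": 0.60, "exec_time": 8.0},
    "TC7":  {"fail_rate": 0.20, "exec_time": 5.0},
    "TC12": {"fail_rate": 0.10, "exec_time": 9.0},
}

print(prioritize(["TC1", "TC2", "TC5", "TC7", "TC12"], history))
# ['TC5', 'TC1', 'TC7', 'TC12', 'TC2']
```

A single-key greedy ordering like this cannot balance multiple conflicting objectives, which is why the deck turns to multi-objective search algorithms.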
6. ecFeed Platform - http://ecfeed.com/

- Modeling: intuitive, powerful and expressive model
- Test generation: intelligent algorithms; optimal or scalable test coverage
- Test execution: test runners
- Data export: standard or customized formats
- Collection & analysis: standard or customized test execution data points
7. Concept and Implementation

Workflow (diagram):
- Step 1: execute test cases against customer SUTs, producing test execution data points: execution time, execution verdict (pass/fail), detailed fault info, coverage (e.g., code), configuration info, ...
- Step 2: store and analyze the execution data in a data repository: cost and effectiveness measures, data characteristics, data statistics, ...
- Step 3: feed the historical data into test optimization applications - test case prioritization, test suite minimization, test selection, ... - to support smart testing.
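The data points flowing from step 1 into the repository can be modeled as simple records. The field names below are assumptions for illustration only, not ecFeed's actual export schema.

```python
# Sketch of a test execution data point as listed on this slide.
# Field names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ExecutionDataPoint:
    test_case: str          # e.g. "TC5"
    exec_time: float        # execution time in seconds
    verdict: str            # "pass" or "fail"
    fault_info: str = ""    # detailed fault info, if any
    coverage: float = 0.0   # e.g. statement coverage ratio
    config: dict = field(default_factory=dict)  # configuration info

# Step 2 stores such points; step 3 derives measures from them:
repo = [
    ExecutionDataPoint("TC5", 8.0, "fail", fault_info="NPE in checkout"),
    ExecutionDataPoint("TC2", 3.0, "pass"),
]
fail_rate = sum(p.verdict == "fail" for p in repo) / len(repo)
print(fail_rate)  # 0.5
```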
8. Search-based test case prioritization

Fitness function (see figure)
https://www.researchgate.net/publication/228671024_Search_Based_Software_Engineering_A_Comprehensive_Analysis_and_Review_of_Trends_Techniques_and_Applications
9. Search-based test case prioritization

- Cost measure:
  - TET: total execution time of the prioritized test cases, TET = Σ_{i=1}^{n_ts} ET(tc_i)
- Effectiveness measures:
  - PD: prioritization density, measuring how many test cases have been prioritized, PD = n_ts / n_t
  - FDC: fault detection capability, FDC = (Σ_{i=1}^{n_ts} FailR(tc_i)) / n_ts, where FailR(tc_i) is the rate of failing executions within a given period/context (e.g., a week, cycle, or sprint)
- The three objectives are integrated into various search algorithms such as the Non-dominated Sorting Genetic Algorithm II (NSGA-II)
- The technique was developed based on the open-source multi-objective optimization framework jMetal (http://jmetal.sourceforge.net/algorithms.html)
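The three objectives translate directly into code. This is a minimal sketch with hypothetical names and toy data, computing TET, PD and FDC for one candidate prioritization, the quantities a fitness function for NSGA-II would evaluate.

```python
# Sketch of the three objectives from the slide (toy data, assumed names).
# prioritized: the n_ts chosen test cases; n_total: n_t, the full suite size.

def objectives(prioritized, exec_time, fail_rate, n_total):
    n_ts = len(prioritized)
    tet = sum(exec_time[tc] for tc in prioritized)          # TET: total exec time
    pd = n_ts / n_total                                     # PD: prioritization density
    fdc = sum(fail_rate[tc] for tc in prioritized) / n_ts   # FDC: fault detection capability
    return tet, pd, fdc

exec_time = {"TC5": 8.0, "TC1": 12.0, "TC7": 5.0}
fail_rate = {"TC5": 0.6, "TC1": 0.4, "TC7": 0.2}

tet, pd, fdc = objectives(["TC5", "TC1", "TC7"], exec_time, fail_rate, n_total=15)
print(tet, pd)        # 25.0 0.2
print(round(fdc, 2))  # 0.4
```

In the multi-objective setting, NSGA-II searches over orderings/subsets, minimizing TET while maximizing PD and FDC, and returns a Pareto front of trade-offs rather than a single score.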
10. Wrap-up

Conclusion:
- Applied intelligent search algorithms to solve the test optimization problem (test case prioritization), based on constructed historical test execution data.

Future work:
- Ongoing: extract real historical test execution data from ecFeed.
- Near future:
  - Apply other machine learning techniques (e.g., reinforcement learning).
  - Seek more industrial customers for comprehensive case studies.