Companion slides for the pipeline reproducibility meetup at Adsmurai. See links for code.
https://github.com/Adsmurai/dvc-meetup
https://www.meetup.com/BCN-DL-School/events/268262173/
StarWest 2019 - End to end testing: Stupid or Legit? (mabl)
This document discusses end-to-end testing and how traditional testing tools are not keeping up with modern development practices like continuous integration and delivery (CI/CD). It introduces the concept of using a testing platform called mabl that enables easy creation of reusable modular end-to-end tests. Mabl provides features like root cause analysis, visual testing, performance monitoring and data-driven parameterized testing to help scale testing in a DevTestOps environment. A live demo is shown of creating data-driven tests using mabl.
Slides by Anton Hristov, Product Manager of mabl.
Watch the accompanying webinar: https://www.mabl.com/blog/end-to-end-automation-at-scale
Testing end-to-end user scenarios is challenging, yet more important than ever due to increased complexity, variety and importance of user interfaces. Delivering a quality user experience requires taking a holistic view of the end-to-end user journey, which can span across applications, browsers, devices and different modes of interaction such as touch and voice.
In this webinar, we will explore different ways mabl can help you create intelligent end-to-end tests that focus on the user journey and run at scale across browsers. No scripting necessary.
Learning outcomes:
Why we need to shift from quality assurance to quality intelligence
How to create intelligent tests quickly to increase coverage
What diagnostics information is available for root-cause analysis
When and how to reuse a set of steps across multiple tests
When and how to apply data-driven (parameterized) testing approach
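Mabl itself is a no-code platform, but the data-driven (parameterized) testing idea in the last outcome can be sketched in plain Python: one test body driven by many rows of data. The `login_allowed` function and its rules are hypothetical stand-ins for a system under test.

```python
# A minimal sketch of data-driven testing: one test body, many data rows.
# login_allowed is a hypothetical system under test with invented rules.
def login_allowed(username: str, password: str) -> bool:
    """Accept only a non-empty username and a password of 8+ characters."""
    return bool(username) and len(password) >= 8

# Each row is one test case: (username, password, expected outcome).
CASES = [
    ("alice", "correct-horse", True),   # valid credentials
    ("alice", "short", False),          # password too short
    ("", "correct-horse", False),       # missing username
]

def run_data_driven_tests():
    """Run every data row through the same test body; return failing rows."""
    failures = []
    for username, password, expected in CASES:
        if login_allowed(username, password) is not expected:
            failures.append((username, password))
    return failures

print(run_data_driven_tests())  # an empty list means every row passed
```

In a real test framework the loop disappears behind a parameterization feature (for example `@pytest.mark.parametrize`), but the payoff is the same: adding coverage means adding a data row, not a new test.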
Scaling Ride-Hailing with Machine Learning on MLflow (Databricks)
"GOJEK, the Southeast Asian super-app, has seen an explosive growth in both users and data over the past three years. Today the technology startup uses big data powered machine learning to inform decision-making in its ride-hailing, lifestyle, logistics, food delivery, and payment products. From selecting the right driver to dispatch, to dynamically setting prices, to serving food recommendations, to forecasting real-world events. Hundreds of millions of orders per month, across 18 products, are all driven by machine learning.
Building production grade machine learning systems at GOJEK wasn't always easy. Data processing and machine learning pipelines were brittle, long running, and had low reproducibility. Models and experiments were difficult to track, which led to downstream problems in production during serving and model evaluation. In this talk we will cover these and other challenges that we faced while trying to scale end-to-end machine learning systems at GOJEK. We will then introduce MLflow and explore the key features that make it useful as part of an ML platform. Finally, we will show how introducing MLflow into the ML life cycle has helped to solve many of the problems we faced while scaling machine learning at GOJEK.
"
Start with version control and experiments management in machine learning (Mikhail Rozhkov)
How do you manage the complexity and reproducibility of machine learning projects? What requirements and tools are involved? How can you apply them in your company and projects? Let's start with data and model version control, and review Data Version Control (DVC), MLflow, and other tools.
QA Meetup at Signavio (Berlin, 06.06.19) (Anesthezia)
The document discusses establishing the architecture for an end-to-end testing project. It outlines key components like the core test structure following the Arrange-Act-Assert pattern, test data preparation, reporting with Allure, managing properties with Typesafe Config, dependency injection with Guice, executing tests on CI with Jenkins, and deploying test environments with Docker. The presenter will demonstrate establishing backend testing first before expanding to UI testing.
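The Arrange-Act-Assert pattern named above as the core test structure can be sketched in a few lines; `Cart` is a hypothetical system under test, not part of the presenter's project:

```python
# A minimal Arrange-Act-Assert test. Cart is a hypothetical class
# standing in for the system under test.
class Cart:
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

def test_cart_total():
    # Arrange: set up the object and test data
    cart = Cart()
    cart.add("book", 12.50)
    cart.add("pen", 2.50)
    # Act: perform the behaviour under test
    total = cart.total()
    # Assert: verify the outcome
    assert total == 15.0

test_cart_total()
print("ok")
```

Keeping the three phases visually separated, as the pattern prescribes, makes it obvious what each test sets up, exercises, and checks.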
The document discusses strategies for migrating from SharePoint 2007 to SharePoint 2010. It covers common migration problems, technical changes in SharePoint 2010, governance best practices, available migration options including database attach and hybrid approaches, tools for migration, and recommendations for the migration process including planning, testing, and production deployment.
The PAC aims to promote engagement between various experts from around the world, to create relevant, value-added content sharing between members. For Neotys, to strengthen our position as a thought leader in load & performance testing.
Since its beginning, the PAC is designed to connect performance experts during a single event. In June, during 24 hours, 20 participants convened exploring several topics on the minds of today’s performance tester such as DevOps, Shift Left/Right, Test Automation, Blockchain and Artificial Intelligence.
QA Fest 2015. Vladimir Primakov. The load testing process and its plan... (QAFest)
In this talk I want to share an approach to planning, organizing, and conducting load testing, developed and systematized from my experience delivering load testing services for more than a dozen projects and systems of various scales. The main focus will be on a number of subtle but important points and nuances that are essential for conducting load testing in a complete and adequate way.
Building machine learning service in your business — Eric Chen (Uber) @PAPIs ... (PAPIs.io)
When building machine learning applications at Uber, we identified a sequence of common practices and painful procedures, and thus built a machine learning platform as a service. Here we present the key components needed to build such a scalable and reliable machine learning service, one that serves both our online and offline data processing needs.
The document provides wireframes and workflows for a CCS DDS UI. It includes screens and flows for makers to create views from data sources, add metadata, upload Python scripts, validate data, and send views to checkers. It also includes screens and flows for checkers to get view data, promote views between environments, and schedule view deployments. It discusses challenges with real-time/near real-time data and notes that manual tasks include uploading new source/attribute metadata and validating view data. Validation and maintenance tasks would require SQL, Python, Git, and BigTable skills from resources.
The document outlines the testing strategy and best practices for the Product Array project. It discusses using Selenium and HTMLUnit for functional testing, with HTMLUnit favored for backend testing and Selenium for richer UI. It recommends building a long-term regression test base in continuous integration. Challenges around test maintainability and coverage are discussed. Test design patterns like page object and data-driven testing are recommended. Behavior-driven development is introduced to close the gap between specifications and tests. Code examples show how tests can move from verifying functions to illustrating user stories.
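The page object pattern recommended above can be sketched as follows; `FakeDriver` stands in for a real Selenium WebDriver so the example is self-contained, and the page, locators, and credentials are invented:

```python
# A sketch of the page object pattern: the test talks to a page object,
# which hides locators and driver calls behind an intention-revealing API.
class FakeDriver:
    """Stands in for a Selenium WebDriver; records actions instead of
    driving a browser, so the sketch runs anywhere."""
    def __init__(self):
        self.fields = {}
        self.clicked = []

    def type(self, locator, text):
        self.fields[locator] = text

    def click(self, locator):
        self.clicked.append(locator)

class LoginPage:
    # Locators live in one place; if the UI changes, only this class changes.
    USERNAME = "#username"
    PASSWORD = "#password"
    SUBMIT = "#submit"

    def __init__(self, driver):
        self.driver = driver

    def login(self, username, password):
        self.driver.type(self.USERNAME, username)
        self.driver.type(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)

driver = FakeDriver()
LoginPage(driver).login("alice", "s3cret")
print(driver.clicked)  # → ['#submit']
```

The maintainability win the document mentions comes from this separation: tests read as user intentions ("log in"), while brittle locator details are confined to the page objects.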
Reproducibility and experiments management in Machine Learning (Mikhail Rozhkov)
Machine learning is becoming common practice in many companies. ML teams are growing, and collaboration extends beyond the office and personal laptops. The complexity of ML projects leads teams to adopt distributed collaboration, cloud-based infrastructure, and distributed machine learning. A well-defined, manageable process for ML experiments becomes a central issue. Applying automated pipelines and versioning models and datasets helps establish a manageable process and provides reproducible results.
This talk helps you get started with model and dataset versioning using open source tools: DVC, MLflow, Luigi, etc.
A00-440: Useful Questions for SAS ModelOps Specialist Certification Success (PalakMazumdar1)
See https://bit.ly/3oX5ZLF for complete details on the A00-440 study guide for the SAS Certified ModelOps Specialist certification. It covers A00-440 tutorials, practice tests, books, study material, exam questions, and the syllabus, along with the number of questions, passing percentage, and time allowed for the test.
Presentation to client management on the whys and whats of the AppDNA product: why this product is important and what its capabilities are for your business.
This document provides an overview of BDD test automation. It discusses the benefits of BDD test automation such as faster script development and improved collaboration. It then compares BDD to TDD, highlighting differences in parameters like test focus, participants, and ease of adoption. Two BDD frameworks, Cucumber-JVM and J-Behave, are compared based on available features. The document also covers test automation suites, data management, test automation processes, and service virtualization solutions.
The acute software testing process, the tools we use, and the tools we've developed. We test with both open source and license-based products, such as Selenium and Mercury.
A survey on Machine Learning In Production (July 2018) (Arnab Biswas)
What does Machine Learning In Production mean? What are the challenges? How organizations like Uber, Amazon, Google have built their Machine Learning Pipeline? A survey of the Machine Learning In Production Landscape as of July 2018
Michael will present an overview of Elastic's machine learning capabilities.
As we know, data science work can be messy, fractured, and challenging as data volumes increase. This session will explore how the Elastic stack can offer a single destination for data ingestion and exploration, time series modeling, and communication of results through data visualizations by focusing on a few sample data sources.
We will also explore new functionality offered by Elastic machine learning, in particular an integration with our APM solution.
Trained as a mathematician, Michael Hirsch started his career with no development experience. His first task: "model the world in a relational database." Over the last 7 years Michael has established himself as a data scientist, with a focus on building end-to-end systems. In his career, he has built machine-learning-powered platforms for clients including Nike, Samsung, and Marvel, and approaches his work with the idea that machine learning is only as useful as the interfaces that users interact with.
Currently, Michael is a Product Engineer for Machine Learning at Elastic. He focuses on tailoring Elastic's ML offering to customer use cases, as well as integrating machine learning capabilities across the entire Elastic Stack.
Loading a lot of data into a graph database is not a trivial exercise. TypeDB Loader (formerly known as GraMi) was developed to allow large-scale data import into TypeDB, a strongly-typed database. Recent improvements have immensely simplified the configuration interface to allow for easier data importing, while maintaining features and the promise of loading huge amounts of data into TypeDB as fast as possible.
Do compilers look anything like a data pipeline? How do you do data testing to ensure end-to-end provenance and enforce engineering guarantees for your data products? What baby steps should you consider when assembling your team?
Strategy-driven Test Generation with Open Source Frameworks (Dimitry Polivaev)
Test suites for complex software systems contain thousands of test cases. Keeping track of test coverage and changing the test suite as the system requirements evolve can consume significant effort. The tutorial introduces and demonstrates an effort-saving technique for developing, controlling, and modifying test suites in an agile, efficient, scalable, and flexible way. The technique allows complete and explicit control over test amount, test depth, and test coverage. It also makes it possible to avoid code duplication in the non-generated test artifacts.
This technique allows generation of complete test suites given a specification describing test categories, test flow variations, test input data variations and requirement coverage criteria. All these kinds of data are commonly referred to as test properties. Their dependencies and variations are defined in test strategies.
The test strategies are expressed in a test strategy DSL, which makes it possible to express complex dependencies in a concise and easily understandable way. Behind the scenes, a rule engine generates test property value combinations from the test strategy definitions. The test suites, containing independently executable test cases, can be generated in any programming or scripting language or in textual form. The generator uses a generic algorithm for mapping test properties to the test scripts based on property naming conventions. For automatic test case execution, a separate test driver component containing definitions of the single test steps referenced by the strategy should be written in the chosen test script language.
All tools used for strategy-driven test generation are freely available under open source licenses.
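As a toy illustration of generating test cases from test property variations: the real technique uses a rule engine and a strategy DSL to prune and combine values, while here `itertools.product` simply enumerates every combination, and the property names and values are invented.

```python
# Generate test cases as combinations of test property variations.
# The rule-engine/DSL machinery of the tutorial is replaced by a plain
# Cartesian product over invented example properties.
import itertools

properties = {
    "browser": ["chrome", "firefox"],
    "user_role": ["admin", "guest"],
    "payment": ["card", "invoice"],
}

def generate_test_cases(props):
    """Yield one dict per combination of property values."""
    names = sorted(props)  # fixed order so output is deterministic
    for values in itertools.product(*(props[n] for n in names)):
        yield dict(zip(names, values))

cases = list(generate_test_cases(properties))
print(len(cases))  # 2 * 2 * 2 = 8 combinations
```

A strategy DSL earns its keep beyond this sketch by expressing dependencies between properties (for example "invoice payment only applies to admin users"), so that invalid combinations are never generated.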
Performance testing in scope of migration to cloud, by Serghei Radov (Valeriia Maliarenko)
This document discusses performance testing considerations for migrating an application to the cloud. It covers cloud computing principles like multi-tenancy and horizontal scalability. Challenges like over-provisioning and network issues are addressed. Effective provisioning using predictive auto-scaling is recommended. Tools for monitoring, load testing, and analyzing results are presented, including New Relic, DataDog, Flood.io, and JMeter. The document emphasizes defining acceptance criteria, workload characterization, and iterating on tests to analyze and scale resources. Costs of various performance testing tools on cloud providers are compared.
Wix's internal ML platform, whose mission is to allow data scientists and analysts at Wix to build, deploy, maintain, and monitor machine learning models in production with minimal engineering effort.
Tooling for Machine Learning: AWS Products, Open Source Tools, and DevOps Pra... (SQUADEX)
This document provides an overview of machine learning tooling on AWS, including data pipelines, modeling and training, and deployment. It discusses AWS products for streaming and batch data ingestion, machine learning services like Amazon Machine Learning, Amazon SageMaker, and AWS Deep Learning AMIs. It also provides best practices for notebooks, model maintenance, and ML lifecycle management using tools like MLFlow and KubeFlow. The document concludes that while AWS provides a strong foundation, operations require additional layers for successful and reproducible machine learning.
Have you ever wondered what the best way would be to test emails? Or how you would go about testing a messaging queue?
Making sure your components are correctly interacting with each other is both a tester and developer’s concern. Join us to get a better understanding of what you should test and how, both manually and automated.
This session is the first ever in which we will have two units working together to give you a nuanced insight into all aspects of integration testing. We'll start off exploring the world of integration testing, defining the terminology, and creating a general understanding of what phases and kinds of testing exist. Later on we'll delve into integration test automation, ranging from database integration testing to Selenium UI testing and even as far as LDAP integration testing.
We have a wide variety of demos prepared where we will show you how easy it is to test various components of your infrastructure. Some examples:
- Database testing (JPA)
- Arquillian, exploring container testing, EJB testing and more
- Email testing
- SOAP testing using SoapUI
- LDAP testing
- JMS testing
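To give a flavor of the database-testing demo, here is a self-contained sketch of an integration test against a real (in-memory) database rather than a mock; `sqlite3`, the schema, and the helper functions stand in for whatever the session actually uses.

```python
# Integration-test sketch: exercise real SQL against a real (in-memory)
# database instead of mocking the persistence layer. sqlite3 stands in
# for the production database; schema and queries are illustrative.
import sqlite3

def create_user(conn, name, email):
    conn.execute("INSERT INTO users (name, email) VALUES (?, ?)", (name, email))
    conn.commit()

def find_user(conn, email):
    row = conn.execute(
        "SELECT name FROM users WHERE email = ?", (email,)
    ).fetchone()
    return row[0] if row else None

def test_create_and_find_user():
    conn = sqlite3.connect(":memory:")  # fresh, isolated database per test
    conn.execute("CREATE TABLE users (name TEXT, email TEXT UNIQUE)")
    create_user(conn, "Alice", "alice@example.com")
    assert find_user(conn, "alice@example.com") == "Alice"
    assert find_user(conn, "bob@example.com") is None
    conn.close()

test_create_and_find_user()
print("ok")
```

The isolated, throwaway database per test is the key move: the test verifies real SQL and real constraints while staying fast and repeatable.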
Immersive Learning That Works: Research Grounding and Paths Forward (Leonel Morgado)
We will metaverse into the essence of immersive learning, into its three dimensions and conceptual models. This approach encompasses elements from teaching methodologies to social involvement, through organizational concerns and technologies. Challenging the perception of learning as knowledge transfer, we introduce a 'Uses, Practices & Strategies' model operationalized by the 'Immersive Learning Brain' and 'Immersion Cube' frameworks. This approach offers a comprehensive guide through the intricacies of immersive educational experiences, spotlighting research frontiers along the immersion dimensions of system, narrative, and agency. Our discourse extends to stakeholders beyond the academic sphere, addressing the interests of technologists, instructional designers, and policymakers. We span various contexts, from formal education to organizational transformation to the new horizon of an AI-pervasive society. This keynote aims to unite the iLRN community in a collaborative journey towards a future where immersive learning research and practice coalesce, paving the way for innovative educational research and practice landscapes.
Building machine learning service in your business — Eric Chen (Uber) @PAPIs ...PAPIs.io
When making machine learning applications in Uber, we identified a sequence of common practices and painful procedures, and thus built a machine learning platform as a service. We here present the key components to build such a scalable and reliable machine learning service which serves both our online and offline data processing needs.
The document provides wireframes and workflows for a CCS DDS UI. It includes screens and flows for makers to create views from data sources, add metadata, upload Python scripts, validate data, and send views to checkers. It also includes screens and flows for checkers to get view data, promote views between environments, and schedule view deployments. It discusses challenges with real-time/near real-time data and notes that manual tasks include uploading new source/attribute metadata and validating view data. Validation and maintenance tasks would require SQL, Python, Git, and BigTable skills from resources.
The document outlines the testing strategy and best practices for the Product Array project. It discusses using Selenium and HTMLUnit for functional testing, with HTMLUnit favored for backend testing and Selenium for richer UI. It recommends building a long-term regression test base in continuous integration. Challenges around test maintainability and coverage are discussed. Test design patterns like page object and data-driven testing are recommended. Behavior-driven development is introduced to close the gap between specifications and tests. Code examples show how tests can move from verifying functions to illustrating user stories.
Reproducibility and experiments management in Machine Learning Mikhail Rozhkov
Machine Learning becomes more and more common practice in many companies. ML teams size is growing and collaboration goes out of office and personal laptops. The complexity of ML projects leads to adopting distributed team collaboration, cloud based infrastructure and distributed machine learning. Well defined and manageable process for ML experiments becomes a central issue. Practices to apply automated pipelines, models and data set versioning helps to establish a good manageable process in project and provide reproducible results.
This speech helps to start with handling models and datasets versioning using open source tools: DVC, mlflow, Luigi, etc.
A00-440: Useful Questions for SAS ModelOps Specialist Certification SuccessPalakMazumdar1
Click Here---> https://bit.ly/3oX5ZLF <---Get complete detail on A00-440 study guide to crack SAS Certified ModelOps Specialist. You can collect all information on A00-440 tutorial, practice test, books, study material, exam questions, and syllabus. Enhance your knowledge on SAS ModelOps Specialist and get ready to crack A00-440 certification in no time. Explore all information on A00-440 exam with number of questions, passing percentage and time duration to complete test.
Presentation to client management on the Why's and What's around the AppDNA product. Why is this product important and What are its capabilities for your business.
This document provides an overview of BDD test automation. It discusses the benefits of BDD test automation such as faster script development and improved collaboration. It then compares BDD to TDD, highlighting differences in parameters like test focus, participants, and ease of adoption. Two BDD frameworks, Cucumber-JVM and J-Behave, are compared based on available features. The document also covers test automation suites, data management, test automation processes, and service virtualization solutions.
The acute software testing process, tools we use and tools we\'ve developed. We test with both open source and licensed-based products, such as Selenium and Mercury.
The PAC aims to promote engagement between various experts from around the world, to create relevant, value-added content sharing between members. For Neotys, to strengthen our position as a thought leader in load & performance testing.
Since its beginning, the PAC is designed to connect performance experts during a single event. In June, during 24 hours, 20 participants convened exploring several topics on the minds of today’s performance tester such as DevOps, Shift Left/Right, Test Automation, Blockchain and Artificial Intelligence.
A survey on Machine Learning In Production (July 2018)Arnab Biswas
What does Machine Learning In Production mean? What are the challenges? How organizations like Uber, Amazon, Google have built their Machine Learning Pipeline? A survey of the Machine Learning In Production Landscape as of July 2018
Michael will present an overview of Elastic's machine learning capabilities.
As we know, data science work can be messy, fractured, and challenging as data volumes increase. This session will explore how the Elastic stack can offer a single destination for data ingestion and exploration, time series modeling, and communication of results through data visualizations by focusing on a few sample data sources.
We will also explore new functionality offered by Elastic machine learning, in particular an integration with our APM solution.
Trained as a mathematician, Michael Hirsch started his career with no development experience. His first task - "model the world in a relational database." Over the last 7 years Michael has established himself a data scientist, with a focus on building end-to-end systems. In his career, he has built machine learning powered platforms for clients including Nike, Samsung, and Marvel, and approaches his work with the idea that machine learning is only as useful as the interfaces that users interact with.
Currently, Michael is a Product Engineer for Machine Learning at Elastic. He focuses on tailoring Elastic's ML offering to customer use cases, as well as integrating machine learning capabilities across the entire Elastic Stack.
Loading a lot of data into a graph database is not a trivial exercise. TypeDB Loader (formerly known as GraMi) was developed to allow large-scale data import into TypeDB, a strongly-typed database. Recent improvements have immensely simplified the configuration interface to allow for easier data importing, while maintaining features and the promise of loading huge amounts of data into TypeDB as fast as possible.
Do compilers look anything like a data pipeline? How do you do data testing to ensure end to end provenance and enforce engineering guarantees for your data products? What babysteps should you consider when assembling your team?
Strategy-driven Test Generation with Open Source FrameworksDimitry Polivaev
Test suites for complex software systems contain thousands of test cases. Keeping track on the test coverage and changing the test suite as the system requirements evolve can consume significant efforts. The tutorial introduces and demonstrates an effort saving technique for developing, controlling and modifying test suites in agile, efficient, scalable and flexible way. The technique allows complete and explicit control over test amount, test depth and test coverage. It also makes possible to avoid code duplication in the non-generated test artifacts.
This technique allows generation of complete test suites given a specification describing test categories, test flow variations, test input data variations and requirement coverage criteria. All these kinds of data are commonly referred to as test properties. Their dependencies and variations are defined in test strategies.
The test strategies are expressed in a test strategy DSL which allows to express complex dependencies in a concise and easily understandable way. Behind the scene there is a rule engine generating test property value combinations from the test strategy definitions. The test suites containing independently executable test cases can be generated in any programming or scripting language or in a textual form. The generator uses a generic and an algorithm for mapping of test properties to the test scripts based on property naming conventions. For automatic test case execution a separate test driver component containing definition of single test steps referenced by the strategy should be written specifically in the chosen test script language.
All tools used for strategy-driven test generation are freely available under open source licenses.
Performance testing in scope of migration to cloud by Serghei RadovValeriia Maliarenko
This document discusses performance testing considerations for migrating an application to the cloud. It covers cloud computing principles like multi-tenancy and horizontal scalability. Challenges like over-provisioning and network issues are addressed. Effective provisioning using predictive auto-scaling is recommended. Tools for monitoring, load testing, and analyzing results are presented, including New Relic, DataDog, Flood.io, and JMeter. The document emphasizes defining acceptance criteria, workload characterization, and iterating on tests to analyze and scale resources. Costs of various performance testing tools on cloud providers are compared.
Wix's internal ML platform, whose mission is to allow data scientists and analysts at Wix to build, deploy, maintain, and monitor machine learning models in production with minimal engineering effort.
Tooling for Machine Learning: AWS Products, Open Source Tools, and DevOps Pra...SQUADEX
This document provides an overview of machine learning tooling on AWS, including data pipelines, modeling and training, and deployment. It discusses AWS products for streaming and batch data ingestion, machine learning services like Amazon Machine Learning, Amazon SageMaker, and AWS Deep Learning AMIs. It also provides best practices for notebooks, model maintenance, and ML lifecycle management using tools like MLFlow and KubeFlow. The document concludes that while AWS provides a strong foundation, operations require additional layers for successful and reproducible machine learning.
Have you ever wondered what the best way would be to test emails? Or how you would go about testing a messaging queue?
Making sure your components interact correctly with each other is a concern for both testers and developers. Join us to get a better understanding of what you should test and how, both manually and through automation.
This session is the first ever in which we will have two units working together to give you a nuanced insight on all aspects of integration testing. We’ll start off exploring the world of integration testing, defining the terminology, and creating a general understanding of what phases and kinds of testing exist. Later on we’ll delve into integration test automation, ranging from database integration testing to selenium UI testing and even as far as LDAP integration testing.
We have a wide variety of demos prepared where we will show you how easy it is to test various components of your infrastructure. Some examples:
- Database testing (JPA)
- Arquillian, exploring container testing, EJB testing and more
- Email testing
- SOAP testing using SoapUI
- LDAP testing
- JMS testing
Immersive Learning That Works: Research Grounding and Paths ForwardLeonel Morgado
We will metaverse into the essence of immersive learning, into its three dimensions and conceptual models. This approach encompasses elements from teaching methodologies to social involvement, through organizational concerns and technologies. Challenging the perception of learning as knowledge transfer, we introduce a 'Uses, Practices & Strategies' model operationalized by the 'Immersive Learning Brain' and 'Immersion Cube' frameworks. This approach offers a comprehensive guide through the intricacies of immersive educational experiences, spotlighting research frontiers along the immersion dimensions of system, narrative, and agency. Our discourse extends to stakeholders beyond the academic sphere, addressing the interests of technologists, instructional designers, and policymakers. We span various contexts, from formal education to organizational transformation to the new horizon of an AI-pervasive society. This keynote aims to unite the iLRN community in a collaborative journey towards a future where immersive learning research and practice coalesce, paving the way for innovative educational research and practice landscapes.
(June 12, 2024) Webinar: Development of PET theranostics targeting the molecu...Scintica Instrumentation
Targeting Hsp90 and its pathogen Orthologs with Tethered Inhibitors as a Diagnostic and Therapeutic Strategy for cancer and infectious diseases with Dr. Timothy Haystead.
The debris of the ‘last major merger’ is dynamically youngSérgio Sacani
The Milky Way’s (MW) inner stellar halo contains an [Fe/H]-rich component with highly eccentric orbits, often referred to as the
‘last major merger.’ Hypotheses for the origin of this component include Gaia-Sausage/Enceladus (GSE), where the progenitor
collided with the MW proto-disc 8–11 Gyr ago, and the Virgo Radial Merger (VRM), where the progenitor collided with the
MW disc within the last 3 Gyr. These two scenarios make different predictions about observable structure in local phase space,
because the morphology of debris depends on how long it has had to phase mix. The recently identified phase-space folds in Gaia
DR3 have positive caustic velocities, making them fundamentally different than the phase-mixed chevrons found in simulations
at late times. Roughly 20 per cent of the stars in the prograde local stellar halo are associated with the observed caustics. Based
on a simple phase-mixing model, the observed number of caustics is consistent with a merger that occurred 1–2 Gyr ago.
We also compare the observed phase-space distribution to FIRE-2 Latte simulations of GSE-like mergers, using a quantitative
measurement of phase mixing (2D causticality). The observed local phase-space distribution best matches the simulated data
1–2 Gyr after collision, and certainly not later than 3 Gyr. This is further evidence that the progenitor of the ‘last major merger’
did not collide with the MW proto-disc at early times, as is thought for the GSE, but instead collided with the MW disc within
the last few Gyr, consistent with the body of work surrounding the VRM.
When I was asked to give a companion lecture in support of ‘The Philosophy of Science’ (https://shorturl.at/4pUXz) I decided not to walk through the detail of the many methodologies in order of use. Instead, I chose to employ a long standing, and ongoing, scientific development as an exemplar. And so, I chose the ever evolving story of Thermodynamics as a scientific investigation at its best.
Conducted over a period of >200 years, Thermodynamics R&D, and application, benefitted from the highest levels of professionalism, collaboration, and technical thoroughness. New layers of application, methodology, and practice were made possible by the progressive advance of technology. In turn, this has seen measurement and modelling accuracy continually improved at a micro and macro level.
Perhaps most importantly, Thermodynamics rapidly became a primary tool in the advance of applied science/engineering/technology, spanning micro-tech, to aerospace and cosmology. I can think of no better story to illustrate the breadth of scientific methodologies and applications at their best.
The cost of acquiring information by natural selectionCarl Bergstrom
This is a short talk that I gave at the Banff International Research Station workshop on Modeling and Theory in Population Biology. The idea is to try to understand how the burden of natural selection relates to the amount of information that selection puts into the genome.
It's based on the first part of this research paper:
The cost of information acquisition by natural selection
Ryan Seamus McGee, Olivia Kosterlitz, Artem Kaznatcheev, Benjamin Kerr, Carl T. Bergstrom
bioRxiv 2022.07.02.498577; doi: https://doi.org/10.1101/2022.07.02.498577
The binding of cosmological structures by massless topological defectsSérgio Sacani
Assuming spherical symmetry and weak field, it is shown that if one solves the Poisson equation or the Einstein field
equations sourced by a topological defect, i.e. a singularity of a very specific form, the result is a localized gravitational
field capable of driving flat rotation (i.e. Keplerian circular orbits at a constant speed for all radii) of test masses on a thin
spherical shell without any underlying mass. Moreover, a large-scale structure which exploits this solution by assembling
concentrically a number of such topological defects can establish a flat stellar or galactic rotation curve, and can also deflect
light in the same manner as an equipotential (isothermal) sphere. Thus, the need for dark matter or modified gravity theory is
mitigated, at least in part.
Authoring a personal GPT for your research and practice: How we created the Q...Leonel Morgado
Thematic analysis in qualitative research is a time-consuming and systematic task, typically done using teams. Team members must ground their activities on common understandings of the major concepts underlying the thematic analysis, and define criteria for its development. However, conceptual misunderstandings, equivocations, and lack of adherence to criteria are challenges to the quality and speed of this process. Given the distributed and uncertain nature of this process, we wondered if the tasks in thematic analysis could be supported by readily available artificial intelligence chatbots. Our early efforts point to potential benefits: not just saving time in the coding process but better adherence to criteria and grounding, by increasing triangulation between humans and artificial intelligence. This tutorial will provide a description and demonstration of the process we followed, as two academic researchers, to develop a custom ChatGPT to assist with qualitative coding in the thematic data analysis process of immersive learning accounts in a survey of the academic literature: QUAL-E Immersive Learning Thematic Analysis Helper. In the hands-on time, participants will try out QUAL-E and develop their ideas for their own qualitative coding ChatGPT. Participants that have the paid ChatGPT Plus subscription can create a draft of their assistants. The organizers will provide course materials and slide deck that participants will be able to utilize to continue development of their custom GPT. The paid subscription to ChatGPT Plus is not required to participate in this workshop, just for trying out personal GPTs during it.
7. Index
● Explicit data and process dependencies
● Data and model caching
● Visualize metrics across model and data versions
● “One click” pipeline reproducibility
● 🍻 🍕
8. Explicit data and process dependencies
[Pipeline DAG: raw → Prepare data → {prepared train, prepared test} → Extract features → {features train, features test} → Select model → model → Test model → metrics]
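In the DVC version current at the time of the meetup, each box in this DAG is wired up with `dvc run -d <dependency> -o <output> -f <stage>.dvc <command>`, which records the stage in a small YAML file. A sketch of what such a stage file might contain (field names follow the DVC 0.x stage-file schema; the command, paths, and checksum placeholders are illustrative, not generated output):

```yaml
# select_model.dvc -- illustrative sketch, not real generated output
cmd: python select_model.py features_train features_test model.pkl
deps:
- path: features_train
  md5: '<checksum of features_train>'
- path: features_test
  md5: '<checksum of features_test>'
outs:
- path: model.pkl
  md5: '<checksum of model.pkl>'
  cache: true
```

The recorded checksums are what later lets DVC decide which stages actually need to re-run.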
19. Explicit data and process dependencies
$ dvc pipeline show --ascii select_model.dvc
$ dvc pipeline show --ascii --outs select_model.dvc
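Conceptually, `dvc pipeline show` walks the stage files' declared dependencies and prints the DAG in execution order. That traversal can be sketched with the standard library's `graphlib` (stage names taken from the slides; the edges are my reading of the diagram):

```python
from graphlib import TopologicalSorter

# Each stage maps to the set of stages it depends on
# (a linear chain, as in the slide DAG).
stages = {
    "prepare_data": set(),
    "extract_features": {"prepare_data"},
    "select_model": {"extract_features"},
    "test_model": {"select_model"},
}

order = list(TopologicalSorter(stages).static_order())
print(" -> ".join(order))
```

For this chain the only valid order is prepare_data, extract_features, select_model, test_model, which is exactly the order a reproduction run must follow.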
20. Data and model caching
[Pipeline DAG repeated: raw → Prepare data → {prepared train, prepared test} → Extract features → {features train, features test} → Select model → model → Test model → metrics]
21. Data and model caching
[Same pipeline DAG, with one node flagged "CHANGE HERE"]
22. Data and model caching
[Same pipeline DAG, with one node flagged "CHANGE HERE"]
$ dvc repro test_model.dvc
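`dvc repro` re-executes only the stages whose declared dependencies changed since the last run, which it detects by comparing file checksums against those recorded in the stage files. A minimal sketch of that caching idea (the file name and the in-memory cache are invented for illustration; DVC's real cache is content-addressed on disk):

```python
import hashlib

def md5_of(path):
    # Hash the file contents, as DVC does for its recorded checksums.
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()

def needs_rerun(deps, cache):
    """A stage must re-run if any dependency's checksum changed."""
    return any(cache.get(p) != md5_of(p) for p in deps)

# Usage sketch with a toy dependency file:
with open("raw.csv", "w") as f:
    f.write("a,b\n1,2\n")

cache = {}
print(needs_rerun(["raw.csv"], cache))   # True: no checksum recorded yet
cache["raw.csv"] = md5_of("raw.csv")
print(needs_rerun(["raw.csv"], cache))   # False: unchanged, stage is skipped
```

Editing `raw.csv` would change its checksum, so the stage and everything downstream of it would be re-run, which is what the "CHANGE HERE" slides illustrate.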