This document discusses challenges with using test data and how a digitalization platform can help overcome those challenges. It summarizes that traditional quality management approaches are not well-suited for highly dynamic electronics manufacturing data. The platform allows for automated collection of integrated global test and repair data to provide real-time dashboards and trigger automation. This provides insights into true yield, top failures, retests, and limits to enable high-impact continuous improvements with a good cost-benefit ratio.
- Traditional quality management approaches like statistical process control (SPC) are ineffective for analyzing highly dynamic test data from electronics manufacturing due to assumptions of stable processes and limited dimensions in the data.
- Collecting test data in multiple formats from globally distributed sources makes it difficult to compare, correlate, and discover problems in real-time. Existing homegrown solutions have poor performance and high maintenance costs.
- A high-level approach using automated uniform data collection, integrated repair data, real-time dashboards, and trigger-based automation can provide insights into true first pass yield, top failures, retests, process capability, and drive high-impact improvements.
9. Challenges Using Test Data
• Multiple Data Formats -> Uniform Data
  • Difficult to compare and correlate
  • No automation of data collection
  • Problems not discovered when they happen
  • Early intervention becomes impossible
• Globally Distributed Data Sources
  • Networking topologies, router config, firewalls
  • Limited or no data from subcontractors
  • Providing customers with access to data
• Analysis Paralysis
  • Analysis addressing the wrong problems
  • Do you know your True First Pass Yield?
• Homegrown solutions
  • Poor performance, high maintenance cost
  • Non-core activities
  • Diverging impact on stress levels
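The slide argues for turning multiple tester formats into uniform records before any analysis is attempted. As a rough sketch of what such a normalization step could look like, the Python below maps a CSV export and a JSON event onto one shared record layout; the field names, source formats, and schema are invented for illustration and are not the platform’s actual interface.

```python
import csv, io, json
from datetime import datetime

def normalize(record: dict) -> dict:
    """Map one tester-specific result onto a single uniform schema."""
    return {
        "serial_number": record["serial_number"],
        "test_step": record["test_step"],
        "passed": record["passed"],
        "timestamp": record["timestamp"].isoformat(),
    }

def from_csv_row(row: dict) -> dict:
    # e.g. a legacy in-circuit tester exporting CSV with its own column names
    return normalize({
        "serial_number": row["SN"],
        "test_step": row["Step"],
        "passed": row["Result"].strip().upper() == "PASS",
        "timestamp": datetime.strptime(row["Time"], "%Y-%m-%d %H:%M:%S"),
    })

def from_json_event(event: dict) -> dict:
    # e.g. a newer functional tester posting JSON events
    return normalize({
        "serial_number": event["unit"]["id"],
        "test_step": event["sequence"],
        "passed": event["status"] == "ok",
        "timestamp": datetime.fromtimestamp(event["ts"]),
    })

csv_data = "SN,Step,Result,Time\nSN001,ICT,PASS,2024-05-01 08:30:00\n"
print(from_csv_row(next(csv.DictReader(io.StringIO(csv_data)))))
print(from_json_event(json.loads('{"unit": {"id": "SN002"}, "sequence": "SYS", "status": "ok", "ts": 1714550400}')))
```

Once every tester writes records in this shape, comparing and correlating results across lines and factories becomes a straightforward query rather than a format-conversion project.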
10. Cost of Analytics
What data scientists spend the most time doing: cleaning and organizing data 60%, collecting data sets 19%, with the rest split between mining data for patterns, building training sets, refining algorithms, and other tasks.
Least enjoyable part of their work: cleaning and organizing data 57%, collecting data sets 21%.
Source: forbes.com
11. Statistical Process Control
SPC assumption: elimination of common cause variation in measurements
Highly dynamic data! Components, 3rd-party vendors, number of steps, operators, test machines, instrumentation, fixtures, processes, etc.
100s of test sequences, 1000s of test steps
Aidon Oy example (batch size 10,000 units):
• 357 total components
• 137 different types of components
• 37 component changes -> every 280th unit
• Processes, operators, instrumentation changes…
A change every 10th product, or less
12. WECO - Western Electric Rules
Traditional SPC analysis
• Proactive detection of ‘out of control’ processes
Source: Wikipedia
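The deck cites the rules rather than an implementation, but as a minimal sketch of how the first two Western Electric rules could be checked against a series of measurements from one test step, something like the following would do; the readings, centre line, and sigma below are invented.

```python
import statistics

def weco_violations(samples, mean=None, sigma=None):
    """Flag points violating Western Electric rules 1 and 2.

    Rule 1: a single point more than 3 sigma from the centre line.
    Rule 2: two out of three consecutive points more than 2 sigma out,
    on the same side of the centre line.
    """
    mean = statistics.fmean(samples) if mean is None else mean
    sigma = statistics.stdev(samples) if sigma is None else sigma
    hits = []
    for i, x in enumerate(samples):
        if abs(x - mean) > 3 * sigma:
            hits.append((i, "rule 1"))
        window = samples[max(0, i - 2): i + 1]
        if len(window) == 3:
            above = sum(v - mean > 2 * sigma for v in window)
            below = sum(mean - v > 2 * sigma for v in window)
            if above >= 2 or below >= 2:
                hits.append((i, "rule 2"))
    return hits

# Invented readings from one test step, with a drift starting mid-series
readings = [5.01, 4.98, 5.02, 5.00, 5.03, 5.21, 5.24, 5.26, 5.02, 4.99]
print(weco_violations(readings, mean=5.00, sigma=0.05))
```

The point of the slide stands either way: these rules assume a stable centre line and sigma, which is exactly what highly dynamic electronics test data rarely provides.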
13. Origin of Fault? KPI Monitoring
PCB Test -> In-Circuit Test -> Module Test -> System Test -> Deployment
Traditional methods: selecting KPIs
14. 10X Cost Rule
Profit loss from failures found: PCB Test 1x -> In-Circuit Test 10x -> Module Test 100x -> System Test 1,000x -> Deployment >10,000x
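A quick, hedged illustration of the rule: the relative cost of catching one fault at each stage can be generated directly from the multiplier. The factor of exactly ten is the slide’s rule of thumb, not a measured value.

```python
# Each later stage multiplies the cost of finding the same fault by roughly
# ten; the exact factor is a rule of thumb, not measured data.
STAGES = ["PCB Test", "In-Circuit Test", "Module Test", "System Test", "Deployment"]

def relative_failure_cost(stage, factor=10.0):
    """Relative cost of catching one fault at the given stage."""
    return factor ** STAGES.index(stage)

for stage in STAGES:
    print(f"{stage:>15}: {relative_failure_cost(stage):>7,.0f}x")
```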
15. Improvement Constraints
• Limited dimensions/metadata, due to capabilities and cost
• Assumptions of root cause; only known-unknowns
• Limited by data available
• Analysis paralysis from a low-level approach
• Lack of real-time visibility
• Lack of data availability
• Wrong improvements
• Long cycle time
• Not lean - significant waste
16. Following Best-Practices, 5M&E
• Automated Collection
• Integrated Repair Data
• Initiatives based on insight (don’t actively search in data)
• True 1st Pass Yield
• Yield, Pareto, Cpk
• Numerical Analysis
• 8D Root-Cause Module
• Global Data Acquisition
• Real-Time Dashboards
• Trigger-Automation Alarms
• High-Impact Improvement with good cost/benefit ratio
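The deck names true first pass yield and Cpk as headline KPIs. Below is a minimal sketch of how both might be computed from per-unit test records; the record layout, specification limits, and values are assumptions for illustration, not the platform’s actual schema.

```python
import statistics

# Each record: (serial_number, test_step, passed). Retests show up as repeated
# serial numbers; true first pass yield counts a unit as good only if the
# first run of every step passed. All values below are invented.
records = [
    ("SN001", "ICT", True), ("SN001", "SYS", True),
    ("SN002", "ICT", False), ("SN002", "ICT", True), ("SN002", "SYS", True),
    ("SN003", "ICT", True), ("SN003", "SYS", False),
]

first_attempt = {}
for sn, step, passed in records:
    first_attempt.setdefault((sn, step), passed)      # keep only the first attempt

units = {sn for sn, _, _ in records}
good = sum(
    all(ok for (u, _), ok in first_attempt.items() if u == sn) for sn in units
)
print(f"True first pass yield: {good / len(units):.0%}")  # 1 of 3 units -> 33%

# Cpk for one measured parameter, assuming an approximately normal process
measurements = [3.29, 3.31, 3.30, 3.33, 3.28, 3.32]
lsl, usl = 3.20, 3.40                                     # assumed spec limits
mu, sigma = statistics.fmean(measurements), statistics.stdev(measurements)
cpk = min(usl - mu, mu - lsl) / (3 * sigma)
print(f"Cpk: {cpk:.2f}")
```

Counting retested units as first-pass failures is what separates true first pass yield from the flattering numbers a tester reports once retests are allowed to overwrite earlier results.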
27–28. Repair/RMA Module
Why?
• A failed test is a symptom of a problem
• Repair data adds important context
• Historical knowledge base to guide the repair operator
29. Gage R&R
Why?
• How much of the variation comes from the measurement system?
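As a deliberately simplified, single-operator sketch of the idea (a full crossed Gage R&R study would also separate operator-to-operator reproducibility), the share of variance attributable to the measurement system can be estimated from repeated measurements of the same parts; the readings below are invented.

```python
import statistics

# Repeated measurements of the same parts on the same tester (invented data).
repeats = {
    "P1": [10.02, 10.04, 10.03],
    "P2": [10.11, 10.09, 10.10],
    "P3": [9.95, 9.97, 9.96],
}

# Repeatability (equipment variation): pooled within-part variance.
within = statistics.fmean(statistics.variance(v) for v in repeats.values())

# Part-to-part variation: variance of the part means, less the share explained by noise.
part_means = [statistics.fmean(v) for v in repeats.values()]
n_repeats = len(next(iter(repeats.values())))
between = max(statistics.variance(part_means) - within / n_repeats, 0.0)

total = within + between
print(f"Share of variance from the measurement system: {100 * within / total:.1f}%")
```

If that share is large, the 'failures' a tester reports say more about the fixture and instrumentation than about the product.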
30. Other Capabilities
• Manual Inspection Sequences
• Overall Equipment Effectiveness
• Connection and Execution Time
• Execution Time Pareto
• Rolled Throughput
• Unit Verification
• Box Build Configuration
• Software/Firmware Distribution
• MAC Address Distribution
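Two of these capabilities, Overall Equipment Effectiveness and rolled throughput, reduce to simple products of ratios; a quick illustration with invented figures:

```python
# OEE = availability x performance x quality; rolled throughput yield (RTY) is
# the product of first pass yields across consecutive test stages.
# All figures below are invented placeholders.
availability, performance, quality = 0.92, 0.88, 0.97
oee = availability * performance * quality
print(f"OEE: {oee:.1%}")                       # ~78.5%

stage_fpy = {"PCB": 0.995, "ICT": 0.97, "Module": 0.98, "System": 0.96}
rty = 1.0
for fpy in stage_fpy.values():
    rty *= fpy
print(f"Rolled throughput yield: {rty:.1%}")   # ~90.8%
```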
In this video we’ll look at a practical and specific approach that OEMs and contract manufacturers can take to proactively make use of test data to drive improved business results.
We’ll start by looking at some of the broader issues around Digitalization.
We’ll then discuss how traditional approaches to electronics manufacturing quality management fall short of meeting these objectives, and what a proven high-level approach to test data management looks like. We’ll also show some specific examples of relevant key performance indicators.
If we consider the manufacturing industry, the purpose of trends such as Digitalization is to enable companies to act on data. The aim is to improve their profitability by doing things better than they did before.
<click> If your company excels at responding to insight from data, you can expect increased profit margins in the short run.
<click> This is largely due to the operational improvements you make, improvements that add to your competitive advantage.
<click> In the longer run, however, more and more competitors will go through a similar journey of improvements. Market pressure will then reduce your profit margins, as more competitors become able to supply products at a price and cost that compete with yours.
<click> If we assume that the time after today is where your opportunity for improvement and profit is found,
<click> we also have to be mindful of the investment needed to get you there.
<click> Equally relevant is the fact that there is a delay between this investment and the return.
<click> This initial investment of time and/or money can occur at any point in the future. If it happens too late, the opportunity for improved profits will have passed, and you are potentially scrambling to avoid bankruptcy.
There are several dimensions that affect unit cost and profitability. In a broader sense, the continuum where the optimum is found is determined by the quality of your products and the efficiency and cost of your operations. If the metrics associated with these are hidden, or invisible to your organization, the outlook for short-term profit gain and long-term competitive advantage is not good, and you will need to look at how these barriers can be removed.
At the end of the day, what you would like to achieve using your insight is to initiate the improvements that have the highest possible return relative to the effort needed to complete them.
There are always going to be improvements available, and it is critical that you are able to distinguish the high value improvements from the trivial ones.
When we consider using the data from automated manufacturing testers, there are several challenges potentially keeping you from succeeding:
<click> First, the data you have is likely stored in multiple different formats, making aggregated analytics and correlation very difficult.
<click> Data collection is not fully automated,
<click> and there may be IT challenges in getting the data transferred from all test stations.
Getting data from your subcontractors is often restricted by IT and bureaucracy,
<click> issues your customers would also experience if attempting to get this data from you.
Another significant problem is analysis paralysis, caused by not being able to assess the relevance of trends in the data.
<click> Since problems are always going to be present, this is likely to make you focus on issues that should have been given low priority, or that might not even be the actual problem facing you.
These challenges are often battled with home-grown data management solutions; solutions that add little relative value, carry high internal cost, and take focus away from your core activities. In addition, the inability to be proactive often places a heavy stress burden on both managers and engineers, who might even have to travel across the world on short notice to engage in fire-fighting.
An article published on forbes.com
<click> has investigated how much time data scientists spend on different activities.
<click> As shown here, about 80% of the time is spent on collecting and organizing data. The problem is that this activity contributes nothing to your company’s competitive advantage.
<click> In addition, about 80% report that this is the least enjoyable part of their job - making it a Human Resource issue as well.
https://www.forbes.com/sites/gilpress/2016/03/23/data-preparation-most-time-consuming-least-enjoyable-data-science-task-survey-says/#17530a7c6f63
Many Quality Assurance solutions used in the electronics manufacturing industry, both off-the-shelf and homegrown, are based on Statistical Process Control, a technique dating back to the 1920s.
<click>
One of the fundamental assumptions for this to work is that you are able to remove, or at least distinguish, what is called common cause variation in the manufacturing process.
<click>
In a modern electronic product you have a massive number of variables compared to the 1920s: things such as firmware, test operations, component changes, revisions and so forth.
<click>
An example we have from Aidon, a designer and manufacturer of household smart power meters, shows that for one of their batches containing roughly 10,000 units they have a change in their process parameters
<click>
every 10th product or so. This represents a change in the common cause variation factors that SPC assumes are stable.
Going further, in SPC you have rules such as the Western Electric Rules for proactive detection of issues. Put simply, they define alarms for out-of-control processes, and originated back in the 1950s. One problem here is the dynamics of manufacturing itself, as we just looked at, and your ability to understand the relevance of the alarms you receive. Another is that on average you will get one false alarm per 91.75 observations.
<click>
If you manufacture 10,000 products per year, each tested over 5 different processes, with an average of 25 measurements per process, this will flood you with alarms, totally undermining the purpose of an alarm system.
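To put that false-alarm rate into perspective, here is a quick back-of-the-envelope calculation using the numbers above, assuming every measurement is monitored with the full set of Western Electric rules:

```latex
10{,}000 \times 5 \times 25 = 1{,}250{,}000 \ \text{observations per year}
\qquad
\frac{1{,}250{,}000}{91.75} \approx 13{,}600 \ \text{false alarms per year} \ (\approx 37 \text{ per day})
```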
So what you typically do is make assumptions about which measurements are extra important, and monitor a limited set of Key Performance Indicators.
<click>
Very often these are well downstream in your manufacturing process, like at system-level testing. The origin of a failure, however, could just as easily be located upstream of where the KPIs are recorded.
<click>
So a poor batch of PCBs that was allowed to pass all of a sudden means that you need to move the full system to repair and deconstruction, as opposed to just fixing the PCB.
The 10X cost rule tells us that for every step a defect is carried forward in the process, the cost of fixing it increases by a factor of 10.
In this example a failure from the PCB Process identified at “System Test” would cost 100 times what it could have, had the issue been detected where it originated.
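As a rough sketch of the 10X cost rule, if C_0 is the cost of fixing a defect at the stage where it originates and n is the number of process steps it is carried forward before being found, the cost of fixing it grows roughly as:

```latex
C_{\text{fix}} \approx C_0 \cdot 10^{\,n}
```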
Takata airbags are a good example from recent times, with the company having to file for bankruptcy because faulty products were shipped out to the market.
If we map these issues onto Continuous Improvement Initiatives - we’ve illustrated it here with the Six Sigma DMAIC cycle,
<click> but it can be anything intended to drive improvements
traditional approaches contain limitations in every step when it comes to understanding
<click> what is really going on, and why this is happening.
<click> Because of the inherent limitations of traditional approaches, as we have seen, you are forced to assume that your problems are found, and originate, where you are actively looking. At best these approaches can inform you only of your known unknowns, likely leaving blind spots with serious issues unaddressed.
The limitations from this further affect the entire improvement cycle. And at the end of it, you probably don’t even have enough data, or real-time access to it, keeping you from evaluating even the weak improvements you do make.
These constraints are what we remove with our skyWATS cloud solution and the WATS on-premise equivalent, and that is how our customers are able to continuously progress towards operational excellence.
<Click>
And you need the complete story to get there. Our philosophy is that actively looking for problems in your data is not sustainable; as a rule of thumb, you need to be presented with indicators of problems as they happen. You also absolutely need to be able to drill down from your relevant high-level KPIs to any of the influencing categories, and compare the performance of the associated elements against each other, such as individual test stations, individual product revisions or different test sequence versions.
This approach to quality management is well proven,
<click> something our customer database is a strong testament to. Here, skyWATS and WATS are key elements in their internal value chains; some of them reportedly can hardly understand how they functioned before adopting it.
One of our customers gaining significantly from using WATS as part of a Lean Six Sigma program is Eltek, part of Delta Group. Before adopting WATS they operated with a First Pass Yield of around 93%, and field failures within their customers’ warranty period of around 3%.
<click>
Eltek’s strategy moving forward became to reduce field failures through active management and optimization of their True First Pass Yield.
<click>
This again had the potential to impact their financial performance, both hard cash from reducing the cost of field failures, and soft cash from the necessary internal improvements.
<click>
By organizing themselves around ownership of the data, they were able to increase their True First Pass Yield to 97% over a 5-year period. They were also able to reduce field failures in warranty to less than one percent, and estimated the annual saving for year 5 to exceed one million dollars, with accumulated savings far exceeding that.
<click>
Today, Eltek operates with a First Pass Yield of around 98%, and a fail rate within warranty of less than 0.1%.
The skyWATS and WATS solution collects data by having clients installed where the test data is produced. This can either be on the test machine itself, or a local database.
<click> The client can be deployed to any location, making global data aggregation seamless. It supports buffering of test reports, avoiding loss of data due to network issues.
<Click>
We have both plug-and-play support and existing APIs that can pull data directly if you use TestStand, LabVIEW or .NET. The user can also modify their file output to a standard, well-documented XML or text format. Alternatively, a file converter can be made that reads your existing data formats. That way, no changes are needed to your software architecture. This option also allows you to import historical data.
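As an illustration of the file-converter idea, here is a minimal Python sketch that reads a hypothetical CSV test log and writes it out as a simple XML report. The column names and XML element names are assumptions made for this example only; the actual WATS report format is documented separately.

```python
import csv
import xml.etree.ElementTree as ET

def convert_log(csv_path, xml_path):
    """Convert a hypothetical CSV test log into a simple XML test report."""
    report = ET.Element("TestReport")
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            step = ET.SubElement(report, "Step")
            step.set("name", row["step_name"])      # assumed column name
            step.set("status", row["status"])       # e.g. "Passed" / "Failed"
            measurement = ET.SubElement(step, "Measurement")
            measurement.set("value", row["value"])
            measurement.set("lowLimit", row["low_limit"])
            measurement.set("highLimit", row["high_limit"])
    ET.ElementTree(report).write(xml_path, encoding="utf-8", xml_declaration=True)

# Example usage:
# convert_log("station_log.csv", "report.xml")
```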
<click>
The data is then moved in real time to a Microsoft Azure-hosted cloud or an on-premise server over TCP/IP,
<click> which the browser-based reporting and analytics application connects to. Since it runs in the browser, no local installation is required.
<click> There is even a REST API that you can use to pull data from the server, to connect to existing enterprise software or to read data back to the test stations if needed. We use this API ourselves in our mobile app, available for iOS and Android, giving you instant access to the status of manufacturing.
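Purely as an illustration of pulling data over a REST API into other enterprise software, a sketch like the following could be used. The server address, endpoint path and token header shown here are placeholders invented for this example, not the documented WATS API.

```python
import requests

# Placeholder values - replace with your actual server address, endpoint and API token.
BASE_URL = "https://your-wats-server.example.com"
API_TOKEN = "your-api-token"

def fetch_reports(endpoint="/api/example-reports", params=None):
    """Fetch test report data from a hypothetical REST endpoint as JSON."""
    response = requests.get(
        BASE_URL + endpoint,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        params=params or {},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()
```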
As mentioned, our process flow is a top-down one, and most often starts with real-time dashboards configured based on your needs. These will typically be either globally shared dashboards for specific functional units, or private ones that give you the specific views you need.
The more detailed views, what we call the reporting section, often start with yield statistics.
<click> You can slice these KPIs across multiple dimensions such as product, revision, factory, station and so forth, to look for categories performing worse than their peers and therefore negatively impacting the aggregated yield.
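As a sketch of what such slicing amounts to, the snippet below computes first-pass yield per station from a hypothetical table of first-run test results; the column names and data are assumptions for this example, not the WATS data model.

```python
import pandas as pd

# Hypothetical first-run results: one row per unit.
df = pd.DataFrame({
    "station": ["ST-1", "ST-1", "ST-2", "ST-2", "ST-2"],
    "passed":  [True,   False,  True,   True,   False],
})

# First-pass yield per station: share of units passing on their first run.
fpy_by_station = df.groupby("station")["passed"].mean().mul(100).round(1)
print(fpy_by_station)  # e.g. ST-1 50.0, ST-2 66.7 (percent)
```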
One level deeper, we would typically look for the most frequent failures. This doesn’t have to be for one specific product; it can also be the most frequent failures for a product family, or a factory for that matter. Applying the Pareto principle to these, you start to get the ability to weigh effort against impact, and to get improvement initiatives with higher returns than before.
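A failure Pareto over the same kind of data could look like the following sketch, ranking the most frequent failed steps and showing their cumulative share (again with assumed column names and made-up data):

```python
import pandas as pd

# Hypothetical failed-step records.
failures = pd.DataFrame({"failed_step": [
    "Voltage Test", "Voltage Test", "Current Test",
    "Voltage Test", "Boot Check", "Current Test",
]})

pareto = failures["failed_step"].value_counts().to_frame("count")
pareto["cumulative_%"] = pareto["count"].cumsum() / pareto["count"].sum() * 100
print(pareto)  # the top step dominates -> highest-impact candidate for improvement
```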
One of the reports also lets you see in which test run the products actually passed. In this example - using real data from an undisclosed contract manufacturer - we see that about 600 products passed on the second test run. The red flag, though, is that some products did not pass before the 21st attempt - not necessarily something you want to ship to your customers just because you have a passing test in the end.
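Conceptually, that report boils down to finding, for each unit, the run number in which it first passed. A minimal sketch with assumed column names and made-up data:

```python
import pandas as pd

# Hypothetical test runs: one row per unit per test attempt.
runs = pd.DataFrame({
    "serial": ["A1", "A1", "A2", "A3", "A3", "A3"],
    "run_no": [1,    2,    1,    1,    2,    3],
    "passed": [False, True, True, False, False, True],
})

# For each unit, the first run in which it passed; then count units per run number.
first_pass_run = runs[runs["passed"]].groupby("serial")["run_no"].min()
print(first_pass_run.value_counts().sort_index())  # units passing on run 1, 2, 3, ...
```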
The Process Capability Analysis reports help you understand how well your test limits are set, compared to the process variation in the individual test measurements.
<click> The easiest way to get a good yield number is to have extremely wide limits, although this is not likely to take your quality to where you want it to be. The values found here are therefore also useful in assessing how valid your yield results are, and can be fed back to make sure that you have good test coverage.
<click> In this example you clearly see outliers in the data that should have been detected during test, but the limits were too wide.
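For reference, the process capability indices commonly used in such reports relate the tolerance band to the process spread. With upper and lower test limits USL and LSL, process mean μ and standard deviation σ:

```latex
C_p = \frac{USL - LSL}{6\sigma},
\qquad
C_{pk} = \min\!\left(\frac{USL - \mu}{3\sigma},\; \frac{\mu - LSL}{3\sigma}\right)
```

Wide limits inflate these values just as they inflate yield, which is why they are useful as a cross-check on how meaningful your yield numbers really are.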
Digging further down, we can also do numerical analysis, like looking at correlations between different measurements, plotting the measurements in different view modes, and performing other more advanced analytics.
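As a minimal illustration of the kind of numerical analysis meant here, a correlation between two measurement series can be computed as follows; the arrays are made-up example data, not taken from the report shown.

```python
import numpy as np

# Made-up example data: two measurements recorded for the same units.
supply_voltage = np.array([3.29, 3.31, 3.30, 3.33, 3.28, 3.32])
output_power   = np.array([10.1, 10.4, 10.2, 10.6, 10.0, 10.5])

r = np.corrcoef(supply_voltage, output_power)[0, 1]
print(f"Pearson correlation: {r:.2f}")  # close to 1 -> strongly related measurements
```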
Part of continuous improvement is also how stakeholders collaborate to fix problems. skyWATS and WATS contain a root-cause feature that helps these people collaborate and solve issues in a structured manner.
We consider capturing repair data in the same system as test data instrumental to a good quality management solution. One reason is that this provides valuable context about why a test has failed. Most manufacturers capture some repair data, but it often lives only in their MES system or a stand-alone application. We can either capture it directly through our web application used by the repair technicians, or interface with the MES system to copy the data automatically from there.
One of the benefits of having repair data in WATS is that we can link the repair to a specific test report, and give you full traceability of the product history.
Our Gage R&R feature lets you further investigate how much of the variation in your measurements comes from the test system, fixture, or operator.
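Conceptually, a Gage R&R study decomposes the total observed variance into part-to-part variation and measurement-system variation (repeatability of the equipment plus reproducibility across operators):

```latex
\sigma^2_{\text{total}} = \sigma^2_{\text{part}} + \underbrace{\sigma^2_{\text{repeatability}} + \sigma^2_{\text{reproducibility}}}_{\sigma^2_{\text{GRR}}},
\qquad
\%GRR = 100 \cdot \frac{\sigma_{\text{GRR}}}{\sigma_{\text{total}}}
```

A low %GRR means most of the observed variation comes from the product itself rather than from the test system, fixture or operator.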
Other available features include manual inspection routines, analysis of Overall Equipment Effectiveness, connection and execution time, a time Pareto of individual test steps, rolled throughput, process throughput and more.
There is also functionality to distribute test software and unit firmware automatically, to ensure that the latest software is always being used, as well as the option to use skyWATS to keep track of the distribution of MAC addresses.
And in case you need to work with some of the data differently from what is available here, you can always export it from our Export Wizard,
<click> where we have multiple format options.
Unless you have already done so, you can visit skyWATS.com/register to sign up for a free trial. You can then upload your own test data to see what lies hidden underneath. Or you can request that we load it with dummy data that you can play around with.
For a product demonstration, you can visit our contact page to request a web-based, or an in person demonstration.
Make sure to also check out our other videos, where we go more into technical details on specific functionality.