Design for Reliability (DFR) is an industry-wide practice and philosophy of considering reliability early in product design and development, to achieve a highly reliable product at a sustainable cost. Physics of Failure (PoF) is recognized as a key approach to implementing DFR in the product design and development process. The author will present a case study illustrating how product failures can be predicted and identified early in the design phase with the help of a quantitative, PoF-model-based analysis tool.
Physics of Failure (also known as Reliability Physics) is a science-based approach to achieving Reliability by Design. The approach is based on research to identify and understand the processes that initiate and propagate the mechanisms that ultimately result in failure. When used in Computer-Aided Engineering (CAE) durability simulations and reliability assessments, this knowledge can evaluate whether a new design, under actual operating conditions, is susceptible to the root causes of failure such as fatigue, fracture, wear, and corrosion during the intended service life of the product.
The objective is to identify and eliminate potential failure mechanisms in order to prevent operational failures, using stress-strength analysis to produce a robust design and aid in the selection of capable manufacturing practices. This is accomplished by modeling the material strength and architecture of the components and technologies a product is based upon, to evaluate their ability to endure the life-cycle usage and environmental stress conditions the product is expected to encounter over its service life in the field or during durability or reliability qualification tests.
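The stress-strength analysis mentioned above has a simple closed form in the classic interference model, where both applied stress and material strength are treated as independent normal random variables. A minimal sketch (the means and standard deviations below are invented for illustration):

```python
from math import erf, sqrt

def stress_strength_reliability(mu_strength, sd_strength, mu_stress, sd_stress):
    """P(strength > stress) for independent, normally distributed
    strength and stress (the classic interference model)."""
    z = (mu_strength - mu_stress) / sqrt(sd_strength**2 + sd_stress**2)
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF at z

# Illustrative numbers: strength ~ N(50, 4), applied stress ~ N(35, 3)
print(f"R = {stress_strength_reliability(50, 4, 35, 3):.5f}")
```

Increasing the strength margin or reducing the variability of either distribution raises the reliability, which is exactly the lever a PoF-driven design revision pulls.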
The ability to identify and quantify the timeline of specific failure risks in a new product while it is still on the drawing board (or CAD screen) enables a product team to design reliability into a product by revising the design to eliminate or mitigate failure risks. This capability results in a form of Virtual Validation and Virtual Reliability Growth during a product’s design phase that can be implemented faster and at lower cost than the traditional Design-Build-Test-Fix approach to Reliability Growth during a product’s development and test phase.
This webinar compares classical reliability concepts and relates them to the PoF approach as applied to Electrical/Electronic (E/E) systems and technologies. This webinar is intended for E/E Product Engineers, Validation/Test Engineers, Quality, Reliability and Product Assurance Personnel, CAE Modeling Analysts, R&D Staff, and their supervisors.
This presentation is an introduction to Multiple Over Stress Testing, a method for designing robust and reliable products. It is a reliability method that requires deep insight into the Physics of Failure of the product under development.
Accelerated life testing plans are designed under multiple-objective considerations, with the resulting Pareto-optimal solutions classified and reduced using neural networks and data envelopment analysis, respectively.
John Day developed a proactive maintenance process in 1978 and managed maintenance and engineering at Alumax Mt. Holly, and later at Alcoa Mt. Holly, for over 20 years. These are the slides he presented at the 1997 SMRP Conference. Great slides with great information. If you would like the slides rather than the PDF, send me an email at rsmith@maintenancebestpractices.com. I worked for John Day back in the early 1980s, which started my journey in Proactive Maintenance.
Reliability growth planning (RGP) is emerging as a promising technique to address the reliability challenges arising from the distributed manufacturing environment. Unlike reliability growth testing (RGT), RGP drives the reliability growth of new products by spanning the product’s lifecycle from design, prototyping, and manufacturing to field use. It is a lifetime commitment to product reliability via systematic failure analysis, rigorous corrective actions, and cost-effective financial investment. RGP has been shown to be very effective, particularly in new product introductions under fast time-to-market requirements.
The RGP process will be introduced based on the three-phase product lifecycle: 1) design for reliability during early product development; 2) accelerated lifetime testing and corrective actions in the pilot line stage; and 3) continuous reliability improvement following volume shipment. Trade-offs among reliability investment, warranty cost reduction, and customer satisfaction will be investigated from the perspectives of the manufacturer and the customer. Reliability growth tools such as Crow/AMSAA, Pareto graphs, failure-mode run charts, FIT (failures-in-time), and FMECA will be reviewed, and their roles in the RGP process will be discussed and demonstrated. Case studies drawn from the electronics equipment industry will be used to demonstrate RGP applications and justify its benefits as well.
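The Crow/AMSAA tool mentioned above models failures during development as a non-homogeneous Poisson process with a power-law intensity, so the instantaneous MTBF can be tracked as test time accumulates. A minimal sketch (the parameter values are hypothetical, not from any case study here):

```python
def crow_amsaa_mtbf(lam, beta, t):
    """Instantaneous MTBF under the Crow/AMSAA (power-law NHPP) model.

    Expected cumulative failures: N(t) = lam * t**beta.
    The failure intensity is the derivative of N(t); MTBF is its reciprocal."""
    return 1.0 / (lam * beta * t ** (beta - 1.0))

# Hypothetical parameters; beta < 1 means reliability is growing,
# so the instantaneous MTBF rises as corrective actions take effect.
for t in (100.0, 1000.0):
    print(f"t = {t:6.0f} h  MTBF = {crow_amsaa_mtbf(0.5, 0.6, t):.1f} h")
```

Fitting lam and beta to observed cumulative failure data is what lets a program manager check progress against the planned growth curve.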
In parallel with RGP, efforts have been devoted to developing optimal preventive maintenance programs, using either time-based or usage-based strategies. Recently, condition-based maintenance (CBM) has shown great potential to achieve just-in-time maintenance and zero-downtime equipment. RGP and maintenance strategies share a common objective: achieving high system reliability and availability. In this presentation, optimal maintenance policies will be devised in the context of system reliability growth.
This is a presentation to top management on why reliability is important and on the difference between a maintenance engineer and a reliability engineer.
Accelerated Life Testing (ALT) is a lifetime prediction methodology that has been commonly used by industry over the past decades. This method, however, is reaching its limitations with the development of products in emerging technologies requiring long-term reliability. At TNO we work on technology development with long expected lifetimes, e.g. solar cells and LED lighting.
New methodologies are required to predict long-term reliability for these types of products. Methods to predict long-term reliability by extending ALT, such as HALT (Highly Accelerated Life Testing) and MEOST (Multiple Environment Over Stress Testing), will be discussed in the presentation.
A problem in applying these methods is the definition of adequate stress profiles. It is our experience that, to gain benefit from accelerated testing, insight into the Physics of Failure of a product is essential.
Test Plan Development using Physics of Failure: The DfR Solutions Approach, by Cheryl Tulkoff
• Product test plans are critical to the success of a new product or technology
• Stressful enough to identify defects
• Show correlation to a realistic environment
• PoF knowledge can be used to develop test plans and profiles that can be correlated to the field.
• Change control processes and testing should not be overlooked (the reliability engineer needs to stay involved in sustaining).
• On-going reliability testing can be a useful (but admittedly imperfect) tool.
• PoF modeling is an excellent tool to help tailor and optimize physical testing plans.
After you have data from life testing, what do you do with it? This session covers the basics, starting with "individual distribution identification" to ensure your data fits one of the reliability models, then showing how and when data can and should be analyzed with a parametric distribution (right censoring vs. arbitrary censoring), and finally going through the accelerated life testing function and how to interpret the results.
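As a minimal illustration of parametric analysis with right censoring, the exponential model has a closed-form maximum-likelihood estimate: failures divided by total time on test. A sketch (the failure and censoring times below are made up for illustration):

```python
def exp_mle_right_censored(failure_times, censor_times):
    """MLE of the exponential failure rate with right-censored units.

    lambda_hat = (number of failures) / (total time on test),
    the standard result for exponentially distributed lifetimes
    with right censoring."""
    total_time = sum(failure_times) + sum(censor_times)
    return len(failure_times) / total_time

# 4 observed failures, 3 units still running (censored) at 1000 h
failures = [420.0, 610.0, 775.0, 930.0]
censored = [1000.0, 1000.0, 1000.0]
lam = exp_mle_right_censored(failures, censored)
print(f"failure rate = {lam:.6f} per hour, MTTF = {1 / lam:.0f} h")
```

Note how the censored units contribute their running time to the denominator even though they never failed; ignoring them would badly overestimate the failure rate.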
This seminar session provides an overview of major aspects of reliability engineering, including a general introduction to reliability engineering (definition of reliability, function of reliability engineering, a brief history of reliability, etc.), reliability basics (metrics used in reliability, commonly used probability distributions in reliability, bathtub curve, reliability demonstration test planning, confidence intervals, Bayesian statistics applications in reliability, stress-strength interference theory, etc.), accelerated life testing (ALT) (types of ALT, Arrhenius model, inverse power law model, Eyring model, temperature-humidity model, etc.), reliability growth (reliability-based growth models, MTBF-based growth models, etc.), systems reliability and availability (reliability block diagrams, non-repairable and repairable systems, reliability modeling of series systems, parallel systems, standby systems, and complex systems, load-sharing reliability, reliability allocation, system availability, Monte Carlo simulation, etc.), and degradation-based reliability (introduction to degradation-based reliability, differences between traditional reliability and degradation-based reliability, etc.).
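Among the ALT models listed above, the Arrhenius model is the workhorse for temperature-driven failure mechanisms. A minimal sketch of the acceleration factor it implies (the activation energy and temperatures are illustrative assumptions, not values from the seminar):

```python
from math import exp

K_B = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_af(ea_ev, t_use_c, t_stress_c):
    """Arrhenius acceleration factor between use and stress temperatures.

    AF = exp[(Ea / k) * (1/T_use - 1/T_stress)], temperatures in kelvin."""
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return exp(ea_ev / K_B * (1.0 / t_use - 1.0 / t_stress))

# Example: Ea = 0.7 eV, use at 55 degC, stress at 125 degC (hypothetical)
print(f"AF = {arrhenius_af(0.7, 55.0, 125.0):.1f}")
```

With an acceleration factor in the tens, a few weeks at the stress temperature can stand in for years at the use temperature, which is the entire premise of ALT.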
You’ve heard about Weibull Analysis, and want to know what it can be used for, OR you’ve used Weibull Analysis in the past, but have forgotten some of the background and uses….
This webinar gives you the background of Weibull Analysis and its use in analyzing failure modes, starting from the basics and giving examples of its use in answering questions such as:
• How many do I test, for how long?
• Is our design system wrong?
• How many more failures will I have in the next month, year, 5 years?
Sit in and listen and ask your questions … not detailed “How to” but “When & Why to”!
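As a taste of the "when and why", the two-parameter Weibull model answers several of the questions above in closed form. A sketch with hypothetical parameters (the eta and beta values are invented; the success-run formula assumes a zero-failure test to one lifetime):

```python
from math import ceil, exp, log

def weibull_reliability(t, eta, beta):
    """R(t) for a two-parameter Weibull (eta = scale, beta = shape)."""
    return exp(-((t / eta) ** beta))

def weibull_b_life(p, eta, beta):
    """Time by which a fraction p of units has failed (p = 0.10 gives B10)."""
    return eta * (-log(1.0 - p)) ** (1.0 / beta)

def success_run_sample_size(reliability, confidence):
    """How many units to test failure-free (for one lifetime) to
    demonstrate the given reliability at the given confidence."""
    return ceil(log(1.0 - confidence) / log(reliability))

# Hypothetical wear-out mode: eta = 2000 h, beta = 2.5
print(f"R(500 h) = {weibull_reliability(500, 2000, 2.5):.4f}")
print(f"B10 life = {weibull_b_life(0.10, 2000, 2.5):.0f} h")
print(f"units for 90% reliability at 90% confidence: "
      f"{success_run_sample_size(0.90, 0.90)}")
```

The shape parameter beta also answers "is our design system wrong?": beta < 1 points at infant mortality, beta near 1 at random failures, and beta > 1 at wear-out.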
Application of Shainin techniques in Manufacturing Industry - Scientific Probl..., by Karthikeyan Kannappan
Elimination of waste is one of the main criteria for "all-inclusive growth". Among the seven kinds of waste in the Lean Toyota Production System, waste from rework and scrap is a vital one. To minimise this waste, the root causes of chronic issues must be identified and the manufacturing process made robust. When solving chronic issues, on many occasions the solution is found by modifying current product/process specifications. This is because the traditional practice of accepting conformance to specification as the gateway to quality has camouflaged the root cause. Hence, the use of a scientific tool like the Shainin Techniques, which is impervious to specifications, will enhance the finding of root causes in long-standing chronic issues.
The Shainin Techniques are problem-solving tools used to solve chronic problems in a simple manner.
Dorian Shainin, a famous American statistician, developed these techniques based on more than 50 years of his experience. These techniques are famously used at Motorola to attain "Six Sigma". The Shainin philosophy is: "Don't let the engineers do the guessing; let the parts do the talking."
As technology advances, so does the need for BGA (Ball Grid Array) components. Screaming Circuits is excited to offer a presentation on BGA layout. This topic will cover why to use BGAs and specific considerations to keep in mind while designing your PCB.
Introduction to X-rays and X-ray inspection, Safely Operating X-Ray Cabinet Systems, Size and Weight of X-Ray Inspection Systems, How Do We Image the X-Rays?, Magnification, Resolution, Field of View, X-Ray Inspection Area, Power of X-Ray Tube, X-Ray Sensor, Sample Positioning, X-Ray Applications, LED Packaging and Assembly, Semiconductor Failure Analysis, Component Counterfeit Detection, Electronic Component Manufacturing, PCB/PTH (Barrel Fill) Analysis, Smart Phone Design and Manufacturing, BGA Void and Head-in-Pillow Analysis, RF Components and Systems, Automotive Parts, Non-Destructive Testing and Evaluation, Parts Presence/Placement, Plastic/Aluminum Molding, Medical Device Design and Manufacturing, Small Animal Imaging, Seed and Agricultural Imaging, Identification of Defects in Soldered Components (excess voiding or excess solder), Quality Control of Medical Temperature Sensors. X-ray images taken with TruView X-Ray Inspection systems.
MTBF is a common metric among practitioners and users of reliability prediction, safety assurance, and maintenance planning. However, there are a number of significant flaws and limitations with this approach. This presentation goes through those limitations and uses that information to suggest alternatives that may provide much greater insight into product performance.
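One classic limitation of MTBF is that the mean alone says nothing about the failure-time distribution. A sketch of two hypothetical Weibull populations with the exact same mean life but wildly different early-life reliability (all parameter values are invented for illustration):

```python
from math import exp, gamma

def weibull_mean(eta, beta):
    """Mean life (MTTF) of a two-parameter Weibull."""
    return eta * gamma(1.0 + 1.0 / beta)

def weibull_reliability(t, eta, beta):
    """R(t) for a two-parameter Weibull (eta = scale, beta = shape)."""
    return exp(-((t / eta) ** beta))

# Two hypothetical designs, both with a mean life of exactly 1000 h:
# beta = 0.5 (infant-mortality-like) vs beta = 3.0 (wear-out-like).
for beta in (0.5, 3.0):
    eta = 1000.0 / gamma(1.0 + 1.0 / beta)  # pick eta so the mean is 1000 h
    print(f"beta={beta}: mean={weibull_mean(eta, beta):.0f} h, "
          f"R(100 h)={weibull_reliability(100.0, eta, beta):.3f}")
```

Both designs report the same MTBF, yet one loses over a third of its units in the first 100 hours while the other loses almost none, which is precisely why metrics like B10 life or R(t) at the mission time are more informative.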
The geographical spread of countries on the one hand, and the shortage of specialist personnel in various fields together with rising labor costs on the other, have left organizations and companies without access to all the resources they need.
Videoconferencing is a unique technology that makes live audio and video communication possible between people in different locations, regardless of distance.
The staggering costs of transporting professors, specialists, and managers to attend various meetings, as explicit costs, along with the loss of a considerable share of these individuals' time, energy, and working and intellectual productivity, as hidden costs, have created a strong need for modern communication technologies, especially videoconferencing.
Beyond its communication features, a videoconferencing system lets you be present in several places at the same time, a capability made possible only by this technology.
Today, video data projectors assist users in many situations and, by raising the effectiveness of training sessions, meetings, and seminars, play a significant role in improving the quality of such gatherings. Some uses of data projectors include:
1. Classrooms from primary school to university (a picture is always more expressive and effective than a thousand words; clearly, image-based teaching can be very effective even at the lower levels of the education system)
2. Private and semi-private training institutes
3. Executive meeting and conference rooms (where various demo and presentation sessions are held)
4. Exhibitions and showrooms of private and industrial companies (for playing promotional material at large scale)
5. Use of data projectors in auditoriums and amphitheaters
6. Cinemas
7. Managers and experts at consulting engineering firms
8. Use of video data projectors in home theaters
On Duty Cycle Concept in Reliability - Definitions, Pitfalls, and Clarifications
By Frank Sun, Ph.D.
Product Reliability Engineering
HGST, a Western Digital company
For ASQ Reliability Division Webinar
August 14, 2014
DISCUS DFM focuses on characteristic management at an earlier stage in the product lifecycle when a manufacturing engineer is analyzing the detailed design of the part. In fact, by helping to define the applicable specs and annotations to include on the design, DISCUS DFM can actually assist with the definition of the Technical Data Package (TDP).
DISCUS DFM picks up where today’s leading CAD tools leave off by empowering the product team to address the key considerations for manufacturing the part. An overview of the flow:
You start DISCUS by opening the native 3D CAD model in the model/drawing panel.
DISCUS will automatically review the model and its associated PMI and add the balloons to the model and the rows in the Bill of Characteristics.
You select the appropriate part family and likely list of manufacturing processes to consider for fabricating the part.
At this point, DISCUS DFM enables you to evaluate the part for DFM by applying rules associated with the part’s features and characteristics against the likely manufacturing processes.
The evaluation of the part against the integrated manufacturing knowledgebase results in a list of pertinent DFM constraints, recommended annotations/PMI for the part, and more.
When you’ve completed the analysis of the model, you can export the DFM data for review with the DFM engineer or the entire Integrated Product Team.
With DISCUS DFM, you consistently and correctly add the vital details to the design, giving you the ability to manufacture the new part right the first time. DISCUS DFM is the tool to improve the quality and productivity of your engineers.
This PPT is a preview of my recent DFM handbook, “Taoist Directions for Design & Development”, targeted at design engineering professionals, industries, and institutions. I am offering FREE online consultancy on my ‘Tao of DFM’. For online consultancy as well as detailed implementations, please email erramalingam.ks@gmail.com
Please visit www.dfmablog.com and www.dfmhandbook.com
Er Ramalingam DFM & Innovation Consultant
Chennai -90 INDIA
This is a three-part lecture series covering the basics and fundamentals of reliability engineering. Part 1 begins with an introduction to the definition of reliability and other reliability characteristics and measurements. It is followed in Part 2 by reliability calculations, estimation of failure rates, and an understanding of the implications of failure rates for system maintenance and replacements. Part 3 then covers the most important and practical failure-time distributions, how to obtain the parameters of these distributions, and how to interpret them. Hands-on computation of failure rates and estimation of failure-time distribution parameters will be conducted using standard Microsoft Excel.
Part 2. Reliability Calculations
1. Use of failure data
2. Density functions
3. Reliability function
4. Hazard and failure rates
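The density, reliability, and hazard functions listed above can be estimated directly from grouped failure data, which is the kind of hands-on computation the lecture performs in Excel. A minimal sketch (the unit count and failure counts are illustrative, not data from the lecture):

```python
def empirical_functions(n0, failures_per_interval, dt):
    """Piecewise estimates of reliability, density, and hazard rate
    from grouped failure counts (n0 units on test, interval width dt)."""
    survivors = n0
    rows = []
    for f in failures_per_interval:
        r = survivors / n0             # reliability at the interval start
        density = f / (n0 * dt)        # f(t) ~ failures / (initial units * dt)
        hazard = f / (survivors * dt)  # h(t) ~ failures / (survivors * dt)
        rows.append((r, density, hazard))
        survivors -= f
    return rows

# 100 units, failures counted in successive 100 h intervals (made-up data)
for r, f_t, h_t in empirical_functions(100, [20, 15, 12, 10], 100.0):
    print(f"R={r:.2f}  f={f_t:.5f}  h={h_t:.5f}")
```

The key distinction the lecture draws is visible here: the density is normalized by the original population, while the hazard rate is normalized by the units still surviving.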
Statistical Process Control for SMT Electronic Manufacturing, by Bill Cardoso
Statistical Process Control (SPC) is a statistical method to control and monitor the quality of a production line. In this presentation we cover the detailed development of an SPC program, from selecting the appropriate metrics for a manufacturing process, to collecting data, to analysing the data. Examples are used to show the power of SPC in diagnosing quality problems on SMT manufacturing lines. The early detection of problems is critical to the success of any manufacturing line.
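As a minimal illustration of the control limits such a program computes, here is an X-bar chart sketch using the range method. The solder-paste height numbers and subgroup size are invented for illustration; A2 = 0.577 is the standard tabulated constant for subgroups of five:

```python
def xbar_limits(subgroup_means, subgroup_ranges, a2):
    """Center line and 3-sigma control limits for an X-bar chart,
    estimated from subgroup ranges (A2 is the standard constant
    tabulated for the subgroup size)."""
    xbar = sum(subgroup_means) / len(subgroup_means)
    rbar = sum(subgroup_ranges) / len(subgroup_ranges)
    return xbar - a2 * rbar, xbar, xbar + a2 * rbar

# Hypothetical solder-paste height data (um), subgroups of n=5 (A2 = 0.577)
means = [152.0, 149.5, 151.0, 150.5, 150.0]
ranges = [4.0, 5.0, 3.5, 4.5, 4.0]
lcl, cl, ucl = xbar_limits(means, ranges, a2=0.577)
print(f"LCL={lcl:.2f}  CL={cl:.2f}  UCL={ucl:.2f}")
```

A subgroup mean falling outside these limits signals a special cause worth investigating, which is exactly the early detection the presentation emphasizes.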
LED, BGA, and QFN assembly and inspection case studies, by Bill Cardoso
In this tutorial we cover the manufacturing of the most challenging surface mount parts to assemble and inspect today: LEDs, BGAs, and QFNs. The tutorial focuses on the pitfalls of manufacturing and inspecting PCBs with these devices. Presentations will provide content to solve many of the technical challenges encountered by luminaire integrators and contract manufacturers. This tutorial is targeted at manufacturing, process, and quality personnel responsible for designing, implementing and/or controlling the surface mount device application and inspection process. Those personnel responsible for training operators and technicians to perform assembly inspection or control the manufacturing process would also benefit from this tutorial.
We will use a library of assemblies inspected at Creative Electron’s Advanced Solutions Lab to provide attendees with real life examples of assembly issues. Attendees are welcome to send their own assemblies to Creative Electron prior to the webtorial so that the material can be used during training.
Topics Covered:
How LED material handling and storage impact assembly performance
LED x-ray inspection: How voids cost you money
Case study: How lack of quality killed a successful LED company
Process design for BGA and QFN assembly and rework
BGA and QFN x-ray inspection: How to see what often goes wrong
X-Ray as a tool for quality process design and control
- All x-ray images taken with TruView X-Ray Inspection systems.
If you want to discover answers for the most often asked questions as below, glance through this presentation -
Questions often asked -
Do we get timely build with Quality?
Do we know/have capability matrix of the team?
Do we have resource/head count utilization charts?
Are we sure if features are validated on time?
Do we know if engineers understand what customers are expecting?
Do we have right channel of prioritization?
Do we have right change management control in place?
Do we know if we have tested enough?
For the full video of this presentation, please visit:
https://www.edge-ai-vision.com/2020/11/smarter-manufacturing-with-intels-deep-learning-based-machine-vision-a-presentation-from-intel/
For more information about edge AI and computer vision, please visit:
https://www.edge-ai-vision.com
Tara K. Thimmanaik, Solutions Architect at Intel, presents the “Smarter Manufacturing with Intel’s Deep Learning-Based Machine Vision” tutorial at the September 2020 Embedded Vision Summit.
As demand for smarter and more efficient manufacturing is growing, IoT technologies—including sensors, edge devices, gateways, servers and the cloud—are being used throughout the factory to compute deep learning analytics workloads at the appropriate location. Efficient data-driven manufacturing can help to reduce labor costs, increase quality and maximize profit. The biggest hindrance to achieving these outcomes is the difficulty in extracting data from vendor-locked and proprietary systems for analytics downstream.
In this presentation, Thimmanaik covers Intel’s approach to developing open, flexible and scalable solutions, including:
• Intel’s technologies such as OpenVINO, Movidius Vision Processing Units, Edge Insights Software (EIS) and deep learning algorithms
• How Intel’s offerings come together in the industrial marketplace with partnerships forged to address the constraints of manufacturing infrastructure
• Real-world examples highlighting defect detection in textile printing (where 90% accuracy at 50 fps was achieved) and smartphone screen production (where false negatives were only 0.6%)
This slide deck introduces Chef and its role in DevOps. The agenda of the deck is as follows:
- A Review of DevOps
- IBM's Continuous Delivery solution
- Introduction to Chef
- Chef and Continuous Delivery
Read more on DevOps: http://sdarchitect.wordpress.com/understanding-devops/
This is the presentation that I presented with Ruth Willenborg that provides a review of IBM's DevOps strategy as well as the roadmap for recently developed capabilities and future directions.
Run Your Oracle BI QA Cycles More EffectivelyKPI Partners
How does one QA an OBI system? Many project teams struggle to plan out the steps and types of tests they will need to efficiently drive an efficient QA cycle. Learn about the different facets of your BI system and how to properly QA each layer. Special attention will be paid to Data testing and OBI Ad-hoc testing.
Speaker: Jeff McQuigg, Solutions Architect, KPI Partners
Delivered at BIWA Summit 2013
This session focuses on IPv6 deployment options for the enterprise and commercial network manager, with in-depth information about IPv6 configuration and transition methods. IPv6 deployment considerations for specific areas of the network such as campus, WAN or branch, remote access, and data center are discussed. The session features best practices for deploying IPv6 with a variety of associated technologies and operating systems.
How CapitalOne Transformed DevTest or Continuous Delivery - AppSphere16AppDynamics
Making the leap to continuous delivery is precarious for any organization, but the concerns are greatly exacerbated when your organization services approximately 45 million bank accounts. Committed to maintaining flawless user experiences while accelerating release cadence, Capital One faced a daunting challenge as it transformed culture, processes, and technical infrastructure in its evolution to continuous delivery.
Join this session with Capital One's Michael Bonamassa and Parasoft's Wayne Airole and learn from their insights on what DevTest changes are critical for responding to extreme digital disruption.
Key takeaways:
o The changing responsibilities of DevTest in a "continuous everything" world
o What skill sets software testers need to ride the wave of digital transformation
o How service virtualization and continuous testing measure the risk of a release candidate
o How to evolve the culture and process to support continuous delivery
o What technical infrastructure is required for real-time test automation and continuous delivery maturation
For more information, go to: www.appdynamics.com
DBD 2414 - Iterative Web-Based Designer for Software Defined Environments (In...Michael Elder
Delivered at IBM Innovate 2014. Original abstract:
How can you improve your customer feedback loop using iterative, full stack application design for the cloud?
In this presentation, we’ll cover an innovative new way of designing and versioning your cloud applications through a web-based environment development toolkit. With support for OpenStack and other cloud providers, we’re able to capture all aspects of your cloud-based application from compute, storage, and virtual networking all the way up to the application managed in UrbanCode Deploy. In a single click, you can stand up a new environment complete with application components deployed and ready to run. With built in configuration management, you can see the changes made by your automation to configure each node. And with UrbanCode Deploy’s inventory management system, you’ll always know what version of which component is deployed where.
Come learn about our new take on cloud design and get involved to provide us with feedback to make this offering exactly what you need.
Objectives
To provide an introduction to the statistical analysis of
failure time data
To discuss the impact of data censoring on data analysis
To demonstrate software tools for reliability data analysis
Organization
Reliability definition
Characteristics of reliability data
Statistical analysis of censored reliability data
Objectives
To understand Weibull distribution
To be able to use Weibull plot for failure time analysis and
diagnosis
To be able to use software to do data analysis
Organization
Distribution model
Parameter estimation
Regression analysis
With the increase in global competition, more and more costumers consider reliability as one of their primary deciding factors, when purchasing new products. Several companies have invested in developing their own Design for Reliability (DFR) processes and roadmaps in order to be able to meet those requirements and compete in today’s market. This presentation will describe the DFR roadmap and how to effectively use it to ensure the success of the reliability program by focusing on the following DFR elements.
Improved QFN Reliability Process by John Ganjei. John will talk about the improvements in the reliability process in this webinar.
It is free to attend - see www.reliabilitycalendar.org/webinars/ to register for upcoming events.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Search and Society: Reimagining Information Access for Radical FuturesBhaskar Mitra
The field of Information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build inspired by diverse explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies needs to be explicitly articulated, and we need to develop theories of change in context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
2. ASQ Reliability Division
English Webinar Series
One of the monthly webinars on topics of interest to reliability engineers.
To view recorded webinars (available to ASQ Reliability Division members only) visit asq.org/reliability
To sign up for the free live webinars, open to anyone, visit reliabilitycalendar.org and select English Webinars to find links to register for upcoming events:
http://reliabilitycalendar.org/The_Reliability_Calendar/Webinars_-_English/Webinars_-_English.html
3. Design for Reliability (DFR)
- A Case Study Using a Physics of Failure (PoF) Reliability Modeling and Analysis Tool
RelEng Technologies, Inc. 1
4. Why Design for Reliability (DFR)
• DFR is an industry-wide practice, and a philosophy as well, of considering reliability at an early stage of product design and development, to achieve a highly reliable product at a sustainable cost.
• Physics of Failure (PoF) is recognized as a key approach to implementing DFR in a product design and development process.
• A quantitative PoF model-based analysis tool helps with:
  - Predicting and identifying product failures early in the design process, allowing reliability to be designed into the product.
  - Quantifying the test design process so that specified reliability goals can be achieved.
5. Background
• Mature simulation- and modeling-based approaches
• Increasing sophistication of electronic assemblies
• Inefficient methodology implementation
• Inaccurate simplified models
• Inability to model reliability
• Complicated multiple modeling levels
6. A Case Study to Conduct Assembly-Level Reliability Assessment and Risk Identification
7. Objectives
• To quantify the fatigue life of an assembly with over 1,000 parts, including over 150 BGA packages and over 30,000 BGA interconnects;
• To identify BGA interconnects that can potentially fail in the field, based on the product's life requirement.
8. Assessment Process
• Convert original design data into FEA model data
• Create a global FEA model and conduct assembly-level stress analysis
• Create a component-level FEA model for each BGA package and conduct component-level local stress analysis
• Conduct failure modeling and predict the life of interconnects
9. Assembly under Investigation
• PCB with 154 BGA packages
• 7 packages with interconnect counts ranging from 1,217 to 2,092
• Total I/O number of the assembly over 100,000
11. Modeling Process (Phase 1)
Note: Due to the confidential nature of the board, only half of the layout is illustrated in the figures.
• Import the PCB design file and create the 2D layout in Reliability Software
• Convert the 2D layout into a 3D FEA model in Reliability Software and then export it to a commercial FEA analyzer
12. Top Side of the Board
• Converted into a 3D FEM model
• Converted and then simplified in RelSIM™
13. Modeling Process (Phase 2)
• Import obtained FEA analysis results and determine local stresses and loading conditions
• Apply failure models (Nf = ½·D^(1/c)) to targeted parts and locations to generate a PCB analysis model
• Conduct life prediction and failure probability assessment
14. Product Usage Environment and Life Requirement
• Storage environment: Temp: -40/+70 °C; Humidity: 0-95 %RH
• Operating environment: Temp: 0/+45 °C; Humidity: 5-85 %RH
• Expected lifetime: 15 years
15. Thermal Loading Conditions
[Figure: daily thermal loading profile. Product power: 14 hrs ON, 10 hrs OFF per day; environmental temperature cycles between 0 °C and 45 °C.]
17. Estimated Board-level Temperature
To visually examine whether there are any internal hot spots, all FEA results were projected perpendicularly onto the two-dimensional board surface; where multiple results exist at one planar spot, only the highest temperature value is plotted.
18. Estimated Board-level Thermal Stress
To visually examine whether there are any internal stress-concentrated locations, all FEA results were projected perpendicularly onto the two-dimensional board surface; where multiple results exist at one planar spot, only the highest stress value is plotted.
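The max-value projection used on the two board-level plots above can be sketched in a few lines. This is an illustrative sketch only, not RelSIM code; the tuple layout of the nodal results is an assumption for the example.

```python
# Sketch of the projection described on the slides: collapse 3-D FEA nodal
# results onto the 2-D board surface, keeping only the highest value at
# each planar (x, y) location when multiple layers overlap.
from collections import defaultdict

def project_max(nodal_results):
    """nodal_results: iterable of (x, y, z, value) tuples (assumed format).
    Returns {(x, y): max value over all z at that planar spot}."""
    plane = defaultdict(lambda: float("-inf"))
    for x, y, _z, value in nodal_results:
        key = (x, y)
        if value > plane[key]:
            plane[key] = value
    return dict(plane)

# Example: two stacked layers at the same planar spot.
results = [(0, 0, 0.0, 41.0), (0, 0, 1.6, 55.5), (1, 0, 0.0, 38.2)]
print(project_max(results))  # {(0, 0): 55.5, (1, 0): 38.2}
```

The same helper works for either field (temperature or stress), since only the per-location maximum is kept.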
19. Multilevel Modeling
• Simplified FEA models include:
  - slice model
  - quarter-symmetry model
  - octant-symmetry model
• Complete FEA models face challenges with:
  - different magnitudes of geometrical dimensions
  - computer capability to analyze
• Alternative approach: Multilevel Modeling
  - board level
  - component level
  - the board-level results serve as input to the component-level models
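The multilevel data flow, board-level results feeding component-level models, can be sketched as a simple two-stage pipeline. Everything here (function names, the formulas, the I/O counts) is a hypothetical stand-in for the real FEA runs, purely to illustrate how the stages connect.

```python
# Illustrative sketch of the multilevel data flow only; the real board- and
# component-level analyses are FEA runs, not the toy formulas used here.

def board_level_analysis(components):
    # Stage 1 stand-in: one global run produces a boundary stress
    # estimate for every component location (hypothetical formula).
    return {c["name"]: 10.0 + c["io_count"] / 1000.0 for c in components}

def component_level_analysis(boundary_stress):
    # Stage 2 stand-in: a local model refines the board-level result
    # into a peak solder joint stress (hypothetical scaling).
    return boundary_stress * 1.5

components = [{"name": "U1", "io_count": 2092}, {"name": "U7", "io_count": 1217}]
global_results = board_level_analysis(components)        # board level
local_stresses = {name: component_level_analysis(s)      # component level
                  for name, s in global_results.items()}
print(local_stresses)
```

The key point of the structure is that the expensive global run executes once, and each component-level model reads from its results rather than re-solving the whole board.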
23. Thermal Stresses before and after Component-Level Stresses are Overlapped
The board thermal stress distribution is obtained by combining the analytical results from both the board-level and component-level models.
24. Estimated Interconnect Life Distribution
Assuming 1 thermal cycle per day (365 cycles per year):
• 5,475 cycles = 15 years
• 795 cycles ≈ 2.2 years
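The cycle-to-year arithmetic above (one thermal cycle per day) is easy to check with a short script; the helper names are just for illustration.

```python
# One thermal cycle per day, per the thermal loading profile slide.
CYCLES_PER_YEAR = 365

def cycles_to_years(cycles):
    return cycles / CYCLES_PER_YEAR

def meets_life_requirement(predicted_cycles, required_years=15):
    return cycles_to_years(predicted_cycles) >= required_years

print(cycles_to_years(5475))           # 15.0
print(round(cycles_to_years(795), 1))  # 2.2
print(meets_life_requirement(795))     # False
```

An interconnect predicted to survive only 795 cycles therefore falls well short of the 15-year (5,475-cycle) requirement.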
25. Summary
• The fatigue life of all the BGA interconnects of the assembly under investigation was analyzed and examined in this case study.
• Due to their increased geometric dimensions, the 7 BGA packages with over 1,000 I/Os each were the focus of the examination.
• The results indicate that the solder interconnects on multiple BGA packages are not able to meet the 15-year life requirement of the product.
26. Contact Us
RelEng Technologies, Inc.
12202 Braxfield Ct.
Rockville, MD 20852
Phone: 410-705-1830
Dr. Haiyu Qi
haiyuqi@relengtech.com
Dr. Jingsong Xie
jingsong.xie@relengtech.com
28. Thermal Fatigue Model - Engelmaier-Wild Model
Nf = (1/2) · D^(1/c)
c = -0.442 - 6×10⁻⁴ · T_SJ + 1.74×10⁻² · ln(1 + 360/t_D)
where:
Nf is the number of cycles to failure;
D is the potential cyclic fatigue damage at complete stress relaxation;
t_D is the half-cycle dwell time in minutes;
T_SJ is the mean cyclic solder joint temperature;
c is the fatigue ductility exponent.
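The Engelmaier-Wild relations on this slide can be evaluated directly. This is a minimal sketch; the sample inputs (mean joint temperature, dwell time, damage value) are assumptions for illustration, not values from the case study.

```python
import math

def ductility_exponent(t_sj, t_d):
    # c = -0.442 - 6e-4 * T_SJ + 1.74e-2 * ln(1 + 360 / t_D)
    # t_sj: mean cyclic solder joint temperature (deg C)
    # t_d:  half-cycle dwell time (minutes)
    return -0.442 - 6e-4 * t_sj + 1.74e-2 * math.log(1 + 360 / t_d)

def cycles_to_failure(damage, c):
    # Nf = (1/2) * D ** (1/c); D is the potential cyclic fatigue
    # damage at complete stress relaxation
    return 0.5 * damage ** (1 / c)

# Assumed example values, not from the case study:
c = ductility_exponent(t_sj=32.5, t_d=420)
nf = cycles_to_failure(damage=0.03, c=c)
print(c, nf)
```

Note that c is negative, so a smaller damage value D yields a larger number of cycles to failure, as expected.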
29. What is RelSIM™
RelSIM™ is a physics-of-failure (PoF) model-based reliability assessment software platform for the design and development of electronic products and systems.
The platform primarily carries a knowledge-based expert system and a failure-model-based quantitative analysis tool. It is built upon an internet-based data sharing and communication mechanism, which brings together users, software technical support staff, and behind-the-scenes reliability engineering personnel, all on the same platform across the internet, while users can still rely on their own local computing resources for analysis.
It includes 5 quantitative analysis modules:
• pofAN™: component-level modeling and analysis
• pofPWA™: assembly-level modeling and analysis
• pofESA™: environmental stress analysis
• pofSYS™: system/equipment-level modeling and analysis
• pofPHM™: real-time analysis and prediction
30. What Makes RelSIM™ Different from Other Products in the Market
The following summarizes some key capabilities and features that differentiate RelSIM™ from other products in the market:
• Integrated local computing and internet remote data access for model constants, material properties, commonly used loading conditions, and design parameters;
• Availability of remote technical support on both the software and the reliability data and models needed for analysis;
• Integration of a reliability expert system with PoF reliability modeling and analysis capability;
• Interfaces to Cadence®, ANSYS®, AutoCAD® and other commercial Electronic Design Automation (EDA), Finite Element Analysis (FEA), and Computer Aided Design (CAD) data files;
• A dynamically updated pool of models, including customer-specified empirical models.
31. Why Choose RelSIM™ in DFR Implementation
• RelSIM™ has interfaces to the Cadence® and Protel/Altium® data file formats, with Mentor Graphics® planned, giving it access to design data generated by commercial EDA tools and enabling the automation of the modeling process;
• RelSIM™ utilizes commercial tools, instead of built-in codes, to conduct FEA analysis, making it more compatible with customers' existing engineering design environments and eliminating the cost of duplication;
• RelSIM™ is a specialized assembly-level life assessment modeling and analysis tool, with a dynamic updating capability for failure models, including customer-specified empirical models;
• RelSIM™ conforms to Service Oriented Architecture (SOA) design principles, allowing remote technical support on both reliability data and models;
• RelSIM™ integrates a reliability expert system and a knowledge base to assist modeling and analysis.