This document discusses advanced techniques for forecasting part obsolescence to support strategic management of diminishing manufacturing sources and material shortages (DMSMS) problems. It describes existing ordinal scale and data mining approaches to obsolescence forecasting and their limitations. The objective is to develop a new data mining approach that mines part databases to identify trends and forecast obsolescence dates, even for parts without clear parametric drivers. Examples are provided of forecasts for linear regulators and other part types.
2. Why Forecast Obsolescence?
“It’s hard to make predictions - especially about the future.” - Yogi Berra
• The value of forecasting:
– To supplement initial part selection activities
– To support pro-active DMSMS management
– To enable strategic life cycle planning solutions
• Most existing commercial forecasting tools are good at articulating the current state of a part’s availability and identifying alternatives, but limited in their capability to forecast future obsolescence dates.
4. Existing: Ordinal scale approaches
• Ordinal scale approaches – weighted accumulation of “scores” assigned to a set of predetermined part type, technology, and supply chain attributes.
– Accuracy increases as you get closer to the obsolescence event
– Historical basis for the forecast is subjective
– Confidence levels and uncertainties are not generally evaluable
5. Existing: Data mining approaches
• Data mining approaches – mapping known part obsolescence dates to the life cycle curve of the part type to build vendor-specific (and vendor-independent) forecasting algorithms.
– Used for parts with clearly identifiable parametric drivers, e.g., memory
– Based on the historical record – produces accurate part-type and vendor-specific forecasts
– Forecasts include confidence levels
6. Obsolescence Forecasting Strategy
Part primary attribute driven forecasts
• Historical data driven
• Most accurate forecasts available for applicable parts
• Only forecasting approach that provides uncertainties or confidence levels
Procurement lifetime forecasts (THIS PAPER)
• Used if primary attributes can’t be identified
• Historical data driven
• Worst case, vendor specific, part type specific, obsolescence forecast
Short-term forecasts based on distributor inventory levels
• … source counting and other vendor provided information supersede the long-term forecasts near the end of a part’s procurement life
7. Objective of this Work
• The large electronic part databases are treasure troves of data for predicting obsolescence; the challenge is figuring out how to mine the data to find the significant trends.
• Previously developed data mining approaches work very well for parts with clear parametric evolutionary drivers (e.g., memory, microprocessors), but they do not work for part types that lack these drivers.
8. The “Age” Effect
• Several have postulated that the “age” of electronic parts is not a factor in determining what gets obsoleted.
– J. Carbone, “Where are the parts,” Purchasing, pp. 44-47, Dec. 11, 2003.
– S. Clay, “Material Risk Index (MRI) and Methods for Calculating MRI for Electronic Components,” to be published in IEEE Trans. on Components and Packaging Technologies, 2009.
• Not so fast! Age appears to play a role in the obsolescence of many (not all) part types …
12. Understanding the Graph
[Plot: procurement lifetime vs. introduction date]
A group of parts introduced on various dates, all discontinued on or about the same date – common practice – falls on a line of slope = -1. The line through the longest-life parts (discontinuance date 1) is the top boundary of the wedge; a later group falls on a parallel line at discontinuance date 2.
13. Understanding the Graph
[Plot: procurement lifetime vs. introduction date, analysis date = x]
A group of parts introduced on various dates all having identical procurement lifetimes, i.e., everything is procurable for exactly y years, falls on a horizontal line (slope = 0) at height y. There can be no data points after introduction date x - y.
14. Understanding the Graph
[Plot: procurement lifetime vs. introduction date]
If the introduction dates were wrong, e.g., they were all the same database record creation date d, where d is some point in time after the parts were introduced, then all parts would share the same introduction date and the points would stack on a vertical line (slope = ∞) at d.
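The three degenerate cases above can be made concrete with a short sketch; the numbers below are invented purely to show the geometry of the (introduction date, procurement lifetime) plot.

```python
# Made-up data (not from the paper) illustrating the three cases above.
# Procurement lifetime = discontinuance date - introduction date.

def procurement_lifetime(introduction, discontinuance):
    return discontinuance - introduction

# Parts introduced in different years, all discontinued in 2005:
# lifetime vs. introduction date has slope -1 (the wedge's top boundary).
intros = [1995, 1998, 2001, 2003]
lifetimes = [procurement_lifetime(i, 2005) for i in intros]
print(lifetimes)  # [10, 7, 4, 2]

# Every part procurable for exactly y = 6 years: a flat line (slope 0).
lifetimes_fixed = [6 for _ in intros]

# Wrong introduction dates (all set to a record creation date d = 2002):
# every point stacks at a single x value (infinite slope).
intros_bogus = [2002 for _ in intros]
```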
15. Understanding the Graph
[Plot: the known wedge, with the analysis year 2008 marked]
If the data set is complete up to 2008, nothing could ever fall in the area above the line through 2008. Below that line lie parts that were introduced in the past but are not obsolete yet. Note: the top of the historical record data need not correspond to the boundary of this area (it could be below it); the two correspond only if parts are discontinued in the analysis year, e.g., 2008. The lower boundary of the area moves up every year.
16. Understanding the Graph
[Plot: procurement lifetime vs. introduction date, annotated with the bottom boundary of the wedge and the approximate first part introduction]
The bottom of the wedge is where the critical information is (not the top). A flat bottom boundary means there is no “age” effect; a bottom boundary that is not flat means there is an “age” effect.
17. Age Effect Examples
Parts with primary parametric evolutionary drivers do not show the “age” effect. These parts include: memory, microprocessors.
[Plot: procurement lifetime (years, 0–36) vs. introduction year (1969–2004). Flash Memory – strong parametric evolutionary driver (memory size) – flat: no age effect. Op Amps – not flat: age effect.]
18. Example – Linear Regulators
Worst case forecast for linear regulators:
If Introduction Date < 1997.67:
Procurement Life > -2.095 × (Introduction Date) + 4188.5
If Introduction Date > 1997.67:
Procurement Life > -0.1014 × (Introduction Date) + 206.77
Obsolescence Date = Introduction Date + Procurement Life
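The piecewise bound above translates directly into code (coefficients exactly as given on the slide; introduction dates are decimal years, e.g. 1997.67):

```python
# Worst-case procurement lifetime bound for linear regulators,
# transcribed from the slide's piecewise formula.

def worst_case_procurement_life(intro_date):
    """Lower bound (years) on a linear regulator's procurement lifetime."""
    if intro_date < 1997.67:
        return -2.095 * intro_date + 4188.5
    return -0.1014 * intro_date + 206.77

def worst_case_obsolescence_date(intro_date):
    """Obsolescence date = introduction date + procurement life."""
    return intro_date + worst_case_procurement_life(intro_date)

print(round(worst_case_obsolescence_date(2000.0), 2))  # 2003.97
```

So a linear regulator introduced in 2000.0 is forecast not to go obsolete before roughly 2004, under this worst-case bound.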
23. What do we really have?
• Worst case, and median vendor specific, part type specific, obsolescence forecast
– Worst case = no known parts of this type or from this vendor have had smaller procurement lifetimes
– Vendor specific = the upper limit on the band is the vendor’s worst case; the lower limit is the part type’s worst case
– Part type specific = specific to the part type or group of part types used to create the forecast
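Read as code, the band works out to something like the following sketch, assuming worst-case lifetime functions for the vendor and the part type have already been mined from the history (the function names and sample numbers are hypothetical):

```python
# Hypothetical sketch of the forecast band: the upper limit of the band is
# the vendor's worst case, the lower limit is the part type's worst case.

def forecast_band(intro_date, vendor_worst_case_life, part_type_worst_case_life):
    """Return (lower, upper) bounds on the obsolescence date."""
    lower = intro_date + part_type_worst_case_life(intro_date)
    upper = intro_date + vendor_worst_case_life(intro_date)
    return lower, upper

# E.g., vendor worst-case lifetime 8 years, part-type worst case 5 years:
print(forecast_band(2000, lambda d: 8.0, lambda d: 5.0))  # (2005.0, 2008.0)
```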
24. What do we really have?
• Note, the above statement says “part type specific” NOT “part specific.”
– If you give me a specific Fairchild xxxxxx linear regulator, I can forecast the worst case obsolescence date based on Fairchild’s history of supporting linear regulators, but I cannot tell you anything about Fairchild’s specific plans for the xxxxxx linear regulator.
• This methodology is applicable to long-term forecasting (pro-active and strategic management value).
– Long-term means > 1 year from obsolescence
– Short-term (< 1 year from obsolescence), other factors kick in
28. References
• Ordinal Scale Based Obsolescence Forecasting:
– A.L. Henke and S. Lai, “Automated Parts Obsolescence Prediction,” Proceedings of the DMSMS Conference, 1997.
– C. Josias and J.P. Terpenny, “Component obsolescence risk assessment,” Proceedings of the 2004 Industrial Engineering Research Conference (IERC), 2004.
• Data Mining Based Obsolescence Forecasting:
– P. Sandborn, F. Mauro, and R. Knox, “A Data Mining Based Approach to Electronic Part Obsolescence Forecasting,” IEEE Trans. on Components and Packaging Technologies, Vol. 30, No. 3, pp. 397-401, September 2007. http://www.enme.umd.edu/ESCML/Papers/ObsForecastingSept07.pdf