Automatic data acquisition systems provide large amounts of streaming data generated by physical sensors. These data form the input to computational models (soft sensors) routinely used for monitoring and controlling industrial processes, traffic, the environment, natural hazards, and more. Most of these models assume that the data arrive cleaned and pre-processed, ready to be fed directly into a predictive model. In practice, ensuring adequate data quality means that most of the modelling effort is spent preparing raw sensor readings for use as model inputs. This study analyses the process of data preparation for predictive models built on streaming sensor data. We frame data preparation as a four-step process, identify the key challenges in each step, and provide recommendations for handling them. The discussion focuses on approaches that are less commonly used but which, in our experience, contribute particularly well to solving practical soft sensor tasks. Our arguments are illustrated with a case study from the chemical production industry.
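The kind of data preparation the abstract describes can be illustrated with a minimal sketch. The grouping below into forward-filling, robust outlier replacement and standardisation is our assumption for illustration; the paper's actual four steps are not named in this abstract.

```python
import math
from statistics import median

def prepare_window(readings, k=5.0):
    """Prepare one window of raw sensor readings for a soft sensor."""
    # Step 1: forward-fill missing readings, then drop any leading gaps.
    filled, last = [], None
    for r in readings:
        r = last if r is None else r
        filled.append(r)
        last = r
    filled = [r for r in filled if r is not None]

    # Step 2: robust outlier replacement using the median/MAD rule
    # (a plain z-score can be masked by a single gross spike in a short window).
    med = median(filled)
    mad = median(abs(r - med) for r in filled) or 1e-9
    cleaned = [med if abs(r - med) / mad > k else r for r in filled]

    # Step 3: standardise to zero mean, unit variance for the model input.
    m = sum(cleaned) / len(cleaned)
    s = math.sqrt(sum((r - m) ** 2 for r in cleaned) / len(cleaned)) or 1.0
    return [(r - m) / s for r in cleaned]

# A window with a missing value and a gross sensor spike:
features = prepare_window([None, 10.1, 9.9, 10.0, 250.0, 10.2, None, 10.1])
```

The median/MAD rule catches the 250.0 spike that a mean/standard-deviation rule would miss in so short a window, which is one reason robust statistics are popular in sensor-data cleaning.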
Episode 53 : Computer Aided Process Engineering
Lecture notes and reading material
* A lecture note covering all the lectures has been prepared (see course home-page)
* Supplementary text-books are listed
* A course home-page has been created
* All lecture and tutorial material can be downloaded from the home-page
http://www.capec.kt.dtu.dk/Courses/MSc-level-Courses/
SAJJAD KHUDHUR ABBAS
CEO, Founder & Head of SHacademy
Chemical Engineering, Al-Muthanna University, Iraq
Oil & Gas Safety and Health Professional – OSHACADEMY
Trainer of Trainers (TOT) – Canadian Center of Human Development
Design and Simulation of Continuous Distillation Columns – Gerard B. Hawkins
Design and Simulation of Continuous Distillation Columns
0 INTRODUCTION/PURPOSE
1 SCOPE
2 FIELD OF APPLICATION
3 DEFINITIONS
4 FRACTIONAL DISTILLATION
5 ROUGH METHOD OF COLUMN DESIGN
5.1 Sharp Separations
5.2 Sloppy Separations
6 DETAIL DESIGN USING THE CHEMCAD DISTILLATION PROGRAM
6.1 Sharp Separations
6.2 Sloppy Separations
7 COMPLEX COLUMNS
7.1 Multiple Feeds
7.2 Sidestream Take-Offs
8 DESIGN USING A LABORATORY COLUMN SIMULATION
9 DESIGN USING ACTUAL PLANT DATA
9.1 Uprating or Debottlenecking Exercises
10 REFERENCES
APPENDICES
A WORKED EXAMPLE
B SLOPPY SEPARATIONS
C SIMULATION USING PLANT DATA: CASE HISTORIES
TABLES
IIoT technology is making pumps smarter. Smart pumps are intelligent devices connected to each other through IoT technology. Read more here: https://www.qwentic.com/blog/sensors-for-centrifugal-pump
Easily Identify Sources of Supply Chain Gridlock – Neo4j
Join us for this 20-minute webinar to hear from Nick Johnson, Product Marketing Manager for Graph Data Science, as he explains the fundamentals of Neo4j Graph Data Science and its applications in optimizing supply chain management. Discover how leveraging graph analytics can help you identify bottlenecks, reduce costs, and streamline your supply chain operations more efficiently.
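The bottleneck idea behind such graph analytics can be sketched without a graph database. The toy below (plain Python, not Neo4j's Graph Data Science library) scores each node in a supply network by how many source-target connections break when that node is removed:

```python
from collections import deque

def reachable_pairs(adj, skip=None):
    """Count ordered (source, target) pairs joined by a path,
    optionally pretending one node has been removed."""
    nodes = [n for n in adj if n != skip]
    pairs = 0
    for src in nodes:
        seen, queue = {src}, deque([src])
        while queue:                      # breadth-first search from src
            u = queue.popleft()
            for v in adj.get(u, ()):
                if v != skip and v not in seen:
                    seen.add(v)
                    queue.append(v)
        pairs += len(seen) - 1
    return pairs

def bottleneck(adj):
    """The node whose removal disconnects the most source-target pairs."""
    base = reachable_pairs(adj)
    return max(adj, key=lambda n: base - reachable_pairs(adj, skip=n))

# Toy network: two suppliers reach two distributors only through one hub.
chain = {"s1": ["hub"], "s2": ["hub"], "hub": ["d1", "d2"], "d1": [], "d2": []}
```

Production systems use centrality measures such as betweenness for the same purpose; the removal-impact score here is just the simplest way to make the idea concrete.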
This is the first part of our "All about cleanrooms" series. The presentation takes you through the history, types and applications of cleanrooms.
Implementing and Managing Pre-use Post-sterilization Integrity Testing (PUPSIT) – Merck Life Sciences
This presentation explores best practices and case studies in aseptic processing, including how to implement and manage PUPSIT. You will learn:
• Integrity Testing – the background on IT itself, why it is important, and how it works
• Filtration setups and single-use technology
• The PUPSIT debate and how PUPSIT can be achieved with current technology, final filling, formulation, filtration
To learn more about this topic or collaborate with our technical experts, schedule a remote visit at our M Lab™ Collaboration Centers: www.merckmillipore.com/remotevisit
Neo4j GraphSummit London March 2023 Emil Eifrem Keynote.pptx – Neo4j
Neo4j Founder and CEO Emil Eifrem shares his story on the origins of Neo4j and how graph technology has the potential to answer the world's most important data questions.
Nano Filtration in Water Supply Systems – Aqeel Ahamad
Humans depend completely on water, and pure water is essential for many purposes. Although many filtration techniques have been introduced, nanotechnology can produce some of the purest water.
Nanofiltration is a relatively recent membrane filtration process, used most often with low total dissolved solids water such as surface water and fresh groundwater, for softening (polyvalent cation removal) and for removing disinfection by-product precursors such as natural and synthetic organic matter.
Although this paper concentrates on the function of nanofiltration, it also covers its applications, needs and disadvantages.
Turning up the Compen-DIAL: Rapid Test Methods for Cell & Gene Therapies – Merck Life Sciences
Watch the presentation of this webinar here: https://bit.ly/3aeCPNB
Find out how we turn up the dial on quality control testing for cell and gene therapies through rapid methods for sterility, mycoplasma, and replication competent virus. We will review the current regulatory expectations as well as the benefits and limitations that come with each method.
Two of the biggest challenges with applying traditional quality control (QC) test methods to cell and gene therapies, is time to results, due to short shelf-life, and availability of sufficient sample, due to small production volumes.
So how can these challenges be overcome while still meeting regulatory expectations?
In this webinar we will discuss and review suitable methods for rapid testing of short-life cell and gene therapies that may also help conserve limited production material. We will look at benefits, limitations, and regulatory expectations for various QC needs including current and future rapid methods for sterility, mycoplasma and replication competent virus.
In this webinar, you will learn:
• Why the shelf life of a cell or gene therapy product may impact your QC testing strategy
• Current regulatory expectations surrounding rapid methods for sterility, mycoplasma and replication competent virus
• Potential impacts of pursuing a non-optimal QC testing strategy
Episode 55 : Conceptual Process Synthesis-Design
Process Flowsheet Synthesis: a method to determine a process flowsheet that satisfies all product, operational and other requirements
EU GMP Annex 1 Draft: Implications on Sterilizing Grade Filter Validation – Merck Life Sciences
Watch the presentation of this webinar here: https://bit.ly/3kk0Qs1
In this webinar, you will learn:
- About the GMP Annex 1 draft regulatory overview
- How to incorporate the integrity testing & PUPSIT in the filtration systems validation
- How to design a bacterial retention test in terms of organism selection and single vs multiple use validation
Detailed description:
In this webinar we will discuss the implications of the EU GMP Annex 1 draft on the filtration of medicinal products and how this impacts the validation studies.
Bacterial Retention Testing is a critical part of the manufacturing validation process and is required by all regulatory bodies worldwide. Using case studies, our experts will explain how the Annex 1 draft is incorporated into the filtration systems validation exercise, specifically for integrity testing & PUPSIT (Pre-Use Post Sterilization Integrity Testing), the selection and justification of the appropriate test organism, and validation implications of single versus multiple use.
Within the METTLER TOLEDO Group, the Process Analytics division concentrates on analytical measurement solutions for industrial manufacturing processes. The division consists of two business units: Ingold and Thornton, both recognized leaders in their respective markets and technologies.
Ingold is a worldwide leader in pH, dissolved oxygen, CO2, conductivity and turbidity solutions for process analytical measurement systems in the chemical, food & beverage, biotechnology and pharmaceutical industries. Its core competence is high-quality in-line measurement of these parameters in demanding chemical process and hygienic and sterile applications. Thornton is the leader in pure and ultrapure water monitoring instrumentation used in semiconductor, microelectronics, power generation, pharmaceutical, and biotech applications. Its core competence is the in-line measurement of conductivity, resistivity, TOC, bioburden, dissolved oxygen and ozone for determining and controlling water purity.
The division recently expanded into Gas Analytics with a series of TDL analyzers offering unique in situ solutions.
Unit 6: Gyroscope, from the Dynamics of Machines VTU syllabus, prepared by Hareesha N Gowda, Asst. Prof., Dayananda Sagar College of Engineering, Bangalore. Please write to hareeshang@gmail.com with suggestions and criticisms.
A solar tree is a decorative means of producing solar energy and electricity. It uses multiple solar panels arranged in the shape of a tree on a tall tower or pole.
TREE stands for:
T = Tree generating
R = Renewable
E = Energy and
E = Electricity
The structure is like a tree, with the panels as its leaves, producing energy.
This is a presentation on two-stroke and four-stroke petrol engines. I made it with the help of Dhrumil Patel, who is in the chemical department of L.D. College of Engineering. I am very thankful to him for being a great partner. Thanks, Dhrumil!
5 Practical Steps to a Successful Deep Learning Research – Brodmann17
Deep Learning has gained huge popularity over the last several years, especially due to its remarkable progress in many domains.
Many resources are available, including open-source implementations of recent research advances. This vast availability is somewhat misleading: when one actually wants to create a Deep Learning based product, one soon realises that there is a large gap between these open-source implementations and a production-grade Deep Learning product. Closing this gap can take months of work and large costs, especially in manpower and compute power.
In this talk I will draw on my experience leading research at Brodmann17 to cover several aspects we have found to be important for building Deep Learning based computer vision products.
The use of machine intelligence in the manufacturing industry poses a special challenge due to the wide range of use cases, the inherent complexity of data collection, the availability of information, and the disconnect between information islands in different manufacturing steps.
Within our talk we present several machine intelligence projects we did in the manufacturing industry, which helped our customers in product quality improvement, reduction of cost and better asset management. We will talk about the used methodologies, the results achieved and the lessons learned from these projects. We will specifically focus on the importance of process and business knowledge for successful implementation of any industrial project.
Improving continuous process operation using data analytics delta v applicati... – Emerson Exchange
Quality parameters are available through lab measurements, and final product quality changes may go undetected until a lab sample is taken. A continuous data analytics tool provided on-line prediction of quality parameters and fault detection. Field trial results from a carbon dioxide absorption/stripping process at the UT Austin Separations Research Program will be presented in this workshop.
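The combination of on-line quality prediction and fault detection can be sketched as a simple residual monitor. This is a hypothetical single-input linear soft sensor for illustration, not the actual DeltaV analytics tool: it calibrates a regression on historical lab data and raises a fault when a new lab value disagrees with the prediction by more than three residual standard deviations.

```python
import math

def make_monitor(calib_x, calib_y, threshold=3.0):
    """Calibrate y ~ a*x + b on historical data and return an on-line monitor."""
    n = len(calib_x)
    mx, my = sum(calib_x) / n, sum(calib_y) / n
    sxx = sum((x - mx) ** 2 for x in calib_x)
    sxy = sum((x - mx) * (y - my) for x, y in zip(calib_x, calib_y))
    a = sxy / sxx
    b = my - a * mx
    resid = [y - (a * x + b) for x, y in zip(calib_x, calib_y)]
    sigma = math.sqrt(sum(r * r for r in resid) / n) or 1e-9

    def monitor(x, y_lab=None):
        """Predict quality from the process variable; flag a fault when a
        lab value arrives and disagrees with the prediction."""
        y_hat = a * x + b
        fault = y_lab is not None and abs(y_lab - y_hat) > threshold * sigma
        return y_hat, fault

    return monitor

# Hypothetical calibration data: process variable vs. lab-measured quality.
monitor = make_monitor([1.0, 2.0, 3.0, 4.0], [2.1, 3.9, 6.1, 7.9])
```

Between lab samples, the model supplies a continuous quality estimate; when a lab sample does arrive, the same residual serves as the fault-detection statistic.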
Measurement System Analysis is the first step of the Measure Phase of an improvement project. Before you can pass judgment on the process, you need to ensure that your measurement system is accurate, precise, capable and in control.
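As a rough illustration of the idea (a simplified average-based sketch, not a full ANOVA gauge R&R study), the measurement-system spread can be split into repeatability and reproducibility components and compared with the total observed variation:

```python
from statistics import mean, pvariance

def gauge_rr_percent(data):
    """data[part][operator] -> list of repeated trials.
    Returns %GRR: measurement-system spread relative to total spread.
    Simplified variance decomposition, not a full ANOVA gauge R&R."""
    cells = [trials for part in data for trials in part]
    repeatability = mean(pvariance(t) for t in cells)        # within-cell spread
    n_ops = len(data[0])
    op_means = [mean(mean(part[o]) for part in data) for o in range(n_ops)]
    reproducibility = pvariance(op_means)                    # between-operator spread
    part_means = [mean(mean(t) for t in part) for part in data]
    part_to_part = pvariance(part_means)                     # real process spread
    grr = repeatability + reproducibility
    return 100.0 * (grr / (grr + part_to_part)) ** 0.5

# Two parts, two operators, two trials each; operator 2 reads ~2 units high.
study = [
    [[10.0, 10.0], [12.0, 12.0]],   # part 1: op 1 trials, op 2 trials
    [[20.0, 20.0], [22.0, 22.0]],   # part 2
]
pct = gauge_rr_percent(study)
```

A low %GRR means the variation you see comes mostly from the parts themselves, so the measurement system is fit to judge the process.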
Artificial intelligence based pattern recognition is one of the most important tools in process control for identifying process problems. The objective of this study was to evaluate the relative performance of a feature-based recognizer compared with a raw data-based recognizer. The study focused on the recognition of seven commonly researched patterns plotted on the quality control chart. The artificial intelligence based pattern recognizer trained using the three selected statistical features performed significantly better than the raw data-based recognizer.
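The feature-based approach can be sketched as follows. This is a hypothetical rule-based toy, not the study's trained recognizer of seven patterns, and the three features used here (mean, standard deviation and least-squares slope) are an assumption about which statistics were selected:

```python
import math

def window_features(window):
    """Three statistical features of a control-chart window: mean, std, slope."""
    n = len(window)
    mu = sum(window) / n
    sd = math.sqrt(sum((x - mu) ** 2 for x in window) / n)
    cx = (n - 1) / 2  # centre of the index axis
    slope = (sum((i - cx) * (x - mu) for i, x in enumerate(window))
             / sum((i - cx) ** 2 for i in range(n)))
    return mu, sd, slope

def classify(window, slope_cut=0.6, sd_cut=1.0):
    """Crude rule-based stand-in for a trained pattern recognizer."""
    _, sd, slope = window_features(window)
    if abs(slope) > slope_cut:
        return "trend"
    if sd > sd_cut:
        return "shift/cycle"
    return "normal"
```

The point of the study carries over even to this toy: three numbers summarise the window far more compactly than the raw samples, which simplifies whatever classifier sits on top.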
More and more "smart" devices are finding their way into our homes. Whether light bulbs, plugs or temperature sensors, what almost all of them have in common is the need to be connected to a brand-specific central hub, plus a mobile app to configure them. Moreover, one of their most advertised features is integration with Alexa or Google Assistant. All of this generally means our devices are permanently connected to the internet, sending our data to one or multiple companies.
Are there alternatives for enjoying this technology at home without being locked into one company's devices, and while keeping control of our data? In this talk I will share my experience automating my home while preserving privacy.
Automating Data-Driven Learning (Automatizando el aprendizaje basado en datos) – Manuel Martín
In recent years there has been growing interest in extracting useful information from large amounts of data. This information can be used to make predictions or infer unknown values. A wide variety of predictive models exist for classification and regression problems. However, much research assumes the data are clean, and little attention is paid to data preprocessing. Although there are many methods for specific preprocessing tasks (for example, outlier detection or feature selection), preprocessing and cleaning the data can take between 60% and 80% of the total time spent on the data mining process. Automating all or part of this process is therefore highly desirable. This talk introduces the automation of the selection and optimisation of multiple preprocessing and prediction methods.
Modelling Multi-Component Predictive Systems as Petri Nets – Manuel Martín
Building reliable data-driven predictive systems requires a considerable amount of human effort, especially in the data preparation and cleaning phase. In many application domains, multiple preprocessing steps need to be applied in sequence, constituting a `workflow' and facilitating reproducibility. The concatenation of such workflow with a predictive model forms a Multi-Component Predictive System (MCPS). Automatic MCPS composition can speed up this process by taking the human out of the loop, at the cost of model transparency (i.e. not being comprehensible by human experts). In this paper, we adopt and suitably re-define the Well-handled with Regular Iterations Work Flow (WRI-WF) Petri nets to represent MCPSs. The use of such WRI-WF nets helps to increase the transparency of MCPSs required in industrial applications and make it possible to automatically verify the composed workflows. We also present our experience and results of applying this representation to model soft sensors in chemical production plants.
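Stripped of the Petri-net formalism, a linear MCPS can be sketched as a chain of fit/transform steps ending in a predictive model. This is a hypothetical minimal structure for illustration, not the paper's WRI-WF nets, and the two example components are invented:

```python
class Scale:
    """Preprocessing step: centre the data on the training mean."""
    def fit_transform(self, X):
        self.m = sum(X) / len(X)
        return [x - self.m for x in X]
    def transform(self, X):
        return [x - self.m for x in X]

class MeanModel:
    """Predictive step: always predict the mean training target."""
    def fit(self, X, y):
        self.c = sum(y) / len(y)
    def predict(self, X):
        return [self.c for _ in X]

class MCPS:
    """A linear workflow: preprocessing steps followed by one model."""
    def __init__(self, steps):
        self.steps = steps
    def fit(self, X, y):
        for step in self.steps[:-1]:
            X = step.fit_transform(X)   # each step feeds the next, in order
        self.steps[-1].fit(X, y)
        return self
    def predict(self, X):
        for step in self.steps[:-1]:
            X = step.transform(X)
        return self.steps[-1].predict(X)

mcps = MCPS([Scale(), MeanModel()]).fit([1.0, 2.0, 3.0], [10.0, 20.0, 30.0])
```

The Petri-net view adds what this chain lacks: an explicit, verifiable description of the allowed orderings, which is what makes automatically composed workflows auditable.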
Brand engagement with mobile gamification apps from a developer perspective – Manuel Martín
Excelling at what your company offers is often synonymous with success, but building a loyal customer base is not easy. Applying gamification elements to products or services can help brands keep customers engaged, but it is not without risks. This talk presents an introduction to gamification and shows success stories, focusing especially on apps that promote positive behaviour change. Manuel will also share lessons learned from app development and the opportunities gamification can bring to multiple disciplines.
Effects of change propagation resulting from adaptive preprocessing in multic... – Manuel Martín
Predictive modelling is a complex process that requires a number of steps to transform raw data into predictions. Preprocessing of the input data is a key step in such process, and the selection of proper preprocessing methods is often a labour intensive task. Such methods are usually trained offline and their parameters remain fixed during the whole model deployment lifetime. However, preprocessing of non-stationary data streams is more challenging since the lack of adaptation of such preprocessing methods may degrade system performance. In addition, dependencies between different predictive system components make the adaptation process more challenging. In this paper we discuss the effects of change propagation resulting from using adaptive preprocessing in a Multicomponent Predictive System (MCPS). To highlight various issues we present four scenarios with different levels of adaptation. A number of experiments have been performed with a range of datasets to compare the prediction error in all four scenarios. Results show that well managed adaptation considerably improves the prediction performance. However, the model can become inconsistent if adaptation in one component is not correctly propagated throughout the rest of system components. Sometimes, such inconsistency may not cause an obvious deterioration in the system performance, therefore being difficult to detect. In some other cases it may even lead to a system failure as was observed in our experiments.
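The inconsistency described above can be reproduced in a few lines: if an adaptive preprocessing component updates its statistics but the change is not propagated to the downstream model, the same raw input yields a very different prediction. This is a hypothetical toy, not the paper's experimental setup:

```python
class AdaptiveScaler:
    """Preprocessing component that keeps adapting its mean on-line."""
    def __init__(self):
        self.mean, self.n = 0.0, 0
    def update(self, x):
        self.n += 1
        self.mean += (x - self.mean) / self.n   # incremental mean
    def transform(self, x):
        return x - self.mean

# Downstream model fitted while the scaler's mean was 10:
# on scaled input z it predicts y = 2*z + 20.
def model(z):
    return 2 * z + 20

scaler = AdaptiveScaler()
for x in [9.0, 10.0, 11.0]:
    scaler.update(x)                            # mean settles at 10.0
consistent = model(scaler.transform(12.0))      # components agree: 24.0

for x in [100.0] * 7:
    scaler.update(x)                            # stream drifts; only the scaler adapts
inconsistent = model(scaler.transform(12.0))    # model never told about the change
```

This is the silent failure mode the paper warns about: nothing crashes, the pipeline keeps producing numbers, and only the prediction error reveals that the components are out of sync.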
Improving transport timetables usability for mobile devices – Manuel Martín
The increasing number of passengers using mobile devices such as smartphones and tablets in recent years has motivated transport companies to develop mobile websites and apps for their customers. However, the transition from desktop to mobile versions is challenging, and many websites are still not optimised for the user experience on such devices. In this paper we present a usability study carried out on the timetables of the Nottingham City Transport website. A number of design changes improved the overall user experience, as confirmed by the results.
Automating Machine Learning - Is it feasible? – Manuel Martín
Facing a machine learning problem for the first time can be overwhelming. Hundreds of methods exist for tackling problems such as classification, regression or clustering. Selecting the appropriate method is challenging, especially if little prior knowledge is available. In addition, most models require a number of hyperparameters to be optimised in order to perform well. Preparing the data for the learning algorithm is also a labour-intensive process that includes cleaning outliers and imperfections, feature selection, data transformations such as PCA, and more. A workflow connecting preprocessing methods and predictive models is called a multicomponent predictive system (MCPS). This talk introduces the problem of automating the composition and optimisation of MCPSs, and how they can be adapted in changing environments.
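The composition and optimisation problem can be sketched as a search over a tiny configuration space. The score surface below is invented to stand in for cross-validated error, and real systems such as Auto-WEKA use Bayesian optimisation rather than the exhaustive grid shown here:

```python
from itertools import product

def evaluate(config):
    """Toy stand-in for the cross-validated error of one MCPS configuration.
    The score surface is invented so that ('scale', 'tree', 3) is optimal."""
    prep, model, h = config
    return ({"scale": 0.1, "none": 0.3}[prep]
            + {"knn": 0.2, "tree": 0.1}[model]
            + 0.05 * abs(h - 3))

def compose(space):
    """Exhaustively search the (preprocessing, model, hyperparameter) space."""
    configs = product(space["prep"], space["model"], space["h"])
    return min(configs, key=evaluate)

space = {"prep": ["scale", "none"], "model": ["knn", "tree"], "h": [1, 2, 3, 4, 5]}
best = compose(space)
```

Even this toy shows why automation matters: with two preprocessing choices, two models and five hyperparameter values there are already 20 configurations, and realistic spaces grow combinatorially from there.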
Towards Automatic Composition of Multicomponent Predictive Systems – Manuel Martín
Automatic composition and parametrisation of multicomponent predictive systems (MCPSs), consisting of chains of data transformation steps, is a challenging task. In this paper we propose and describe an extension to the Auto-WEKA software which now makes it possible to compose and optimise such flexible MCPSs using a sequence of WEKA methods. In the experimental analysis we focus on examining how significantly extending the search space, by incorporating additional hyperparameters of the models, affects the quality of the solutions found. In a range of extensive experiments, three different optimisation strategies are used to automatically compose MCPSs on 21 publicly available datasets. A comparison with previous work indicates that extending the search space improves classification accuracy in the majority of cases. The diversity of the MCPSs found also indicates that fully and automatically exploiting different combinations of data cleaning and preprocessing techniques is possible and highly beneficial for different predictive models. This can have a big impact on the development, maintenance and scalability of high-quality predictive models in modern application and deployment scenarios.
Online Detection of Shutdown Periods in Chemical Plants: A Case Study – Manuel Martín
In the process industry, chemical processes are controlled and monitored using readings from multiple physical sensors across the plants. Such physical sensors are supplemented by soft sensors, i.e. adaptive predictive models, which are often used to compute hard-to-measure variables of the process. For soft sensors to work well and adapt to changing operating conditions, they need to be provided with relevant data. As production plants are regularly stopped, data instances generated during shutdown periods have to be identified to avoid updating these predictive models with wrong data. We present a case study of a large chemical plant's operation over a two-year period. The task is to robustly and accurately identify the shutdown periods even in the case of multiple sensor failures. State-of-the-art methods were evaluated using the first half of the dataset for calibration and the other half for measuring performance. Results show that shutdowns (i.e. sudden changes) can be detected quickly in any case, but the detection delay for startups (i.e. gradual changes) is directly related to the choice of window size.
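The asymmetry between shutdown and startup detection can be sketched with a simple threshold detector: a drop below the threshold is flagged immediately, while a startup is only confirmed once a whole window of samples is back above it, so the detection delay grows with the window size. This is a toy illustration, not one of the state-of-the-art methods evaluated in the case study:

```python
def detect_states(signal, low=1.0, window=3):
    """Label each sample 'running' or 'shutdown'. A drop below `low` flags a
    shutdown immediately; a startup is confirmed only after `window`
    consecutive samples back above `low`, so its detection is delayed."""
    state, above, states = "running", 0, []
    for x in signal:
        if state == "running":
            if x < low:
                state, above = "shutdown", 0
        else:
            above = above + 1 if x >= low else 0
            if above >= window:
                state = "running"
        states.append(state)
    return states

# Plant runs, trips suddenly, then ramps back up:
sig = [5, 5, 0.2, 0.1, 0.3, 2, 3, 4, 5]
states = detect_states(sig)
```

Enlarging `window` makes the detector more robust to noisy or failed sensors during the ramp-up, at the cost of a longer startup-detection delay, which is exactly the trade-off the abstract reports.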
Artificial Intelligence for Automating Data Analysis – Manuel Martín
The requirements for analysing big volumes of data have increased over the last few decades. The process of selecting, cleaning, modelling and interpreting data is called the KDD process. The decision of how to approach each step in this process has often been made manually by experts. However, experts cannot be aware of all methods, nor is it feasible to try all of them. Researchers have proposed different approaches for automating, or at least advising, the stages of the KDD process. This talk will outline the different types of Intelligent Discovery Assistants as described in the work of Serban et al. “A survey of intelligent assistants for data analysis” and point out some future directions.
Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23...John Andrews
SlideShare Description for "Chatty Kathy - UNC Bootcamp Final Project Presentation"
Title: Chatty Kathy: Enhancing Physical Activity Among Older Adults
Description:
Discover how Chatty Kathy, an innovative project developed at the UNC Bootcamp, aims to tackle the challenge of low physical activity among older adults. Our AI-driven solution uses peer interaction to boost and sustain exercise levels, significantly improving health outcomes. This presentation covers our problem statement, the rationale behind Chatty Kathy, synthetic data and persona creation, model performance metrics, a visual demonstration of the project, and potential future developments. Join us for an insightful Q&A session to explore the potential of this groundbreaking project.
Project Team: Jay Requarth, Jana Avery, John Andrews, Dr. Dick Davis II, Nee Buntoum, Nam Yeongjin & Mat Nicholas
Show drafts
volume_up
Empowering the Data Analytics Ecosystem: A Laser Focus on Value
The data analytics ecosystem thrives when every component functions at its peak, unlocking the true potential of data. Here's a laser focus on key areas for an empowered ecosystem:
1. Democratize Access, Not Data:
Granular Access Controls: Provide users with self-service tools tailored to their specific needs, preventing data overload and misuse.
Data Catalogs: Implement robust data catalogs for easy discovery and understanding of available data sources.
2. Foster Collaboration with Clear Roles:
Data Mesh Architecture: Break down data silos by creating a distributed data ownership model with clear ownership and responsibilities.
Collaborative Workspaces: Utilize interactive platforms where data scientists, analysts, and domain experts can work seamlessly together.
3. Leverage Advanced Analytics Strategically:
AI-powered Automation: Automate repetitive tasks like data cleaning and feature engineering, freeing up data talent for higher-level analysis.
Right-Tool Selection: Strategically choose the most effective advanced analytics techniques (e.g., AI, ML) based on specific business problems.
4. Prioritize Data Quality with Automation:
Automated Data Validation: Implement automated data quality checks to identify and rectify errors at the source, minimizing downstream issues.
Data Lineage Tracking: Track the flow of data throughout the ecosystem, ensuring transparency and facilitating root cause analysis for errors.
5. Cultivate a Data-Driven Mindset:
Metrics-Driven Performance Management: Align KPIs and performance metrics with data-driven insights to ensure actionable decision making.
Data Storytelling Workshops: Equip stakeholders with the skills to translate complex data findings into compelling narratives that drive action.
Benefits of a Precise Ecosystem:
Sharpened Focus: Precise access and clear roles ensure everyone works with the most relevant data, maximizing efficiency.
Actionable Insights: Strategic analytics and automated quality checks lead to more reliable and actionable data insights.
Continuous Improvement: Data-driven performance management fosters a culture of learning and continuous improvement.
Sustainable Growth: Empowered by data, organizations can make informed decisions to drive sustainable growth and innovation.
By focusing on these precise actions, organizations can create an empowered data analytics ecosystem that delivers real value by driving data-driven decisions and maximizing the return on their data investment.
From sensor readings to prediction: on the process of developing practical soft sensors
1. From sensor readings to prediction: on the process of developing practical soft sensors
Marcin Budka1, Mark Eastwood2, Bogdan Gabrys1, Petr Kadlec3, Manuel Martin Salvador1, Stephanie Schwan3, Athanasios Tsakonas1, Indre Zliobaite4
1 Bournemouth University, UK
2 Coventry University, UK
3 Evonik Industries, Germany
4 Aalto University and HIIT, Finland
IDA 2014, Leuven, Belgium
2. Outline
1. INFER project
2. Sensors, sensors, sensors
3. Easy vs difficult
4. Soft Sensors
4.1. Soft Sensors: models
4.2. Soft Sensors in the Process Industry
4.3. An unsuccessful soft sensor
4.4. A successful soft sensor
4.5. How to build a successful data-driven soft sensor?
4.5.1. Performance goal and evaluation criteria
4.5.2. Data Analysis
4.5.3. Data Preparation and Pre-processing
4.5.4. Training and validation
5. Our case study
5.1. Versions of the data
5.2. Evaluation
6. Conclusion
5. Sensors, sensors, sensors
SENSORS
Image copyright by Disney Pixar. Qualifies as fair usage.
SENSORS EVERYWHERE
6. Easy vs difficult
Easy-to-measure variables: Temperature, Pressure, Humidity, Flow, Concentration
Difficult-to-measure variables: Polymerisation progress, Fermentation progress
7. Soft Sensors
Soft sensors are computational models that aggregate the readings of physical sensors.
Soft sensors operate online on streams of sensor readings, and therefore need to be robust to noise and adaptive to changes over time.
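In its simplest data-driven form, a soft sensor is a regression model mapping easy-to-measure inputs to a difficult-to-measure target. A minimal sketch with synthetic data; the input names and the linear relationship are illustrative assumptions, not the model used in the deck:

```python
import numpy as np

# Synthetic easy-to-measure inputs (temperature, pressure, flow are assumed names)
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))
# Assumed difficult-to-measure target: a noisy linear function of the inputs
true_w = np.array([1.0, 0.5, -2.0])
y = X @ true_w + rng.normal(scale=0.1, size=500)

# Fit a linear soft sensor by ordinary least squares
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Online use: apply the frozen model to each new vector of sensor readings
new_reading = np.array([0.2, -0.1, 0.3])
prediction = float(new_reading @ w)
```

In practice the fitted coefficients are applied to the live stream, which is exactly where robustness to noise and adaptation over time become necessary.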
8. Soft Sensors: models
First principle models:
• Based on physical and chemical process knowledge
• Usually focus on ideal states of the process
• Example: y = temp + press/2 - flow²
Data-driven models:
• Used when process knowledge is not available
• Such knowledge can be extracted from the data (Machine Learning algorithms)
• Examples: Linear Regression, PLS regression, Support Vector Machines
9. Soft Sensors in the Process Industry
Main areas of application
1. Online prediction of a difficult-to-measure variable
2. Inferential control in the process control loop
3. Multivariate process monitoring for determining the process state
4. Hardware sensor backup
10. An unsuccessful soft sensor
Image copyright by Disney Pixar. Qualifies as fair usage.
11. A successful soft sensor
Implemented in the online process environment
Accepted by the process operators
Requirements:
• Reasonable performance
• Stability
• Predictability
• Transparency
• Automation
• Robustness
• Adaptivity
12. A successful soft sensor
Image copyright by Disney Pixar. Qualifies as fair usage.
13. How to build a successful data-driven soft sensor?
Proposed framework:
1) Setting up the performance goals and evaluation criteria
2) Data analysis (exploratory)
3) Data preparation and preprocessing
4) Training and validating the predictive model
Keep the domain expert in the loop from the beginning
14. 1. Performance goals and evaluation criteria
Performance goal examples:
● Classification accuracy > 85%
● Processing time per sample < 1s
Evaluation criteria:
● Qualitative evaluation:
● Transparency
● Model complexity
● Quantitative evaluation:
● RMSE
● MAE
● Jitter
● Confidence
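The quantitative criteria above are straightforward to compute. A sketch, where "jitter" is taken as the mean absolute change between consecutive predictions (one common reading of the term, assumed here rather than taken from the deck):

```python
import numpy as np

def mae(y_true, y_pred):
    # Mean Absolute Error
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def rmse(y_true, y_pred):
    # Root Mean Squared Error
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def jitter(y_pred):
    # Assumed definition: how much consecutive predictions jump around
    return float(np.mean(np.abs(np.diff(np.asarray(y_pred)))))

y_true = [1.0, 2.0, 3.0, 4.0]
y_pred = [1.5, 2.0, 2.5, 4.5]
```

A low-MAE model with high jitter may still be rejected by operators, which is why the qualitative criteria sit alongside the quantitative ones.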
18. 3. Data Preparation and Pre-processing
1. Physical constraints
2. Univariate statistical tests for individual sensors
3. Multivariate statistical tests for all variables together
4. Missing values
19. 3. Data Preparation and Pre-processing
✔If outliers are noise, replace them using missing-value imputation techniques
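The univariate test and the imputation advice can be sketched together. The specific choices below are ours: a robust MAD-based test (a common alternative to the classic n-sigma rule) and linear interpolation over time as the imputation technique:

```python
import numpy as np

def clean_sensor(series, n_sigma=3.0):
    """Flag univariate outliers with a robust MAD test, treat them as
    missing values, and impute them by linear interpolation over time."""
    x = np.asarray(series, dtype=float).copy()
    med = np.nanmedian(x)
    mad = np.nanmedian(np.abs(x - med))
    robust_sigma = 1.4826 * mad  # matches the std for Gaussian data
    outlier = np.abs(x - med) > n_sigma * robust_sigma
    x[outlier] = np.nan          # outliers become missing values
    idx = np.arange(len(x))
    good = ~np.isnan(x)
    x[~good] = np.interp(idx[~good], idx[good], x[good])  # imputation
    return x

readings = [10.0, 10.2, 9.9, 500.0, 10.1, 10.0]  # 500.0 is a spurious spike
cleaned = clean_sensor(readings)
```

The spike is flagged, removed, and replaced by the interpolated value of its neighbours, while the genuine readings pass through unchanged.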
20. 3. Data Preparation and Pre-processing
✔Discretization
✔Derive new variables
✔Data scaling
✔Data rotation
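Data scaling carries a pitfall worth making explicit: the scaling parameters must be fitted on the training stream only and then reused unchanged, otherwise test information leaks into the model. A sketch (z-score scaling is our choice of method):

```python
import numpy as np

def fit_scaler(X_train):
    # Scaling parameters come from the training data only
    return X_train.mean(axis=0), X_train.std(axis=0)

def apply_scaler(X, mean, std):
    return (X - mean) / std

X_train = np.array([[1.0, 10.0], [3.0, 30.0]])
mean, std = fit_scaler(X_train)
X_train_scaled = apply_scaler(X_train, mean, std)
# New readings are scaled with the same frozen parameters
X_new_scaled = apply_scaler(np.array([[2.0, 20.0]]), mean, std)
```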
23. 4. Training and Validation
Training set: used for tuning the pre-processing methods and building the model
Testing set: used for evaluating the model
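With streaming sensor data the split must respect time: train on the past, test on the future, never a random shuffle. A minimal sketch:

```python
def chronological_split(records, test_fraction=0.1):
    """Split a time-ordered sequence so the test set lies strictly in the
    future of the training set (no random shuffling for streaming data)."""
    cut = int(len(records) * (1 - test_fraction))
    return records[:cut], records[cut:]

timeline = list(range(100))  # stand-in for time-ordered sensor records
train, test = chronological_split(timeline, test_fraction=0.2)
```

This is the scheme implied by the case-study splits below, where the testing records follow the training records in time.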
24. Our case study
Background picture is Creative Commons by Paul Joyce
Real industrial dataset from a debutanizer column
3 years of operation
189,193 records (every 5 min)
85 sensors
Target: concentration of the product
25. Versions of the data
Code   Description
RAW    no pre-processing (188752 training / 21859 testing)
SUB    subsampling (every 1h – 15611 training / 1822 testing)
SYN    features are synchronised
FET-E  20 features selected using the first 1000 training samples
FET-L  20 features selected using the latest 1000 training samples
FRA    additional features derived by computing the fractal dimension
DIF    original values are replaced with the first derivative with respect to time
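The DIF variant replaces each value by its change since the previous sample, which suits a target such as product concentration that moves slowly. A sketch of the transform and its inverse (how one might map DIF predictions back to the original level):

```python
import numpy as np

def to_differences(series):
    # DIF: replace values with first differences with respect to time
    return np.diff(np.asarray(series, dtype=float))

def from_differences(first_value, diffs):
    # Cumulating the changes recovers the original level
    return first_value + np.cumsum(diffs)

x = np.array([5.0, 5.2, 5.1, 5.4])  # illustrative concentration readings
d = to_differences(x)
restored = from_differences(x[0], d)
```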
26. Evaluation
Partial Least Squares regression → transparency
MAE = Mean Absolute Error
Data #1 MAE #1 Data #2 MAE #2 % improvement
RAW 225 RAW-SYN 222 1%
SUB 227 SUB-SYN 221 3%
RAW-FET-E 228 RAW-FET-L 198 13%
RAW-SYN-FET-E 245 RAW-SYN-FET-L 201 18%
SUB-FET-E 236 SUB-FET-L 193 18%
SUB-SYN-FET-E 215 SUB-SYN-FET-L 185 14%
SUB-DIF 41.8 SUB-DIF-SYN 35.3 16%
SUB-DIF 41.8 SUB-DIF-FRA 32.4 22%
27. Evaluation (cont.)
● Feature synchronisation can have a positive or negative effect on prediction
● Adaptive feature selection using the latest samples is beneficial → feature importance changes over time
● Taking temporal differences into account is very beneficial → product concentration does not change suddenly
28. Conclusion
✔Framework for building a successful soft sensor
✔Case study with real data from an industrial production process
✔Adaptive pre-processing can be very beneficial (and sometimes a must)
Future directions:
Extend feature space with autoregressive features
Filter out the effects of data compression
Ongoing work:
Automation and adaptation of data stream pre-processing