This document provides examples of applying rules of thumb to interpret exposure monitoring data and assign exposure risk categories. It shows how to determine the median, calculate multiples of the median, and compare them to the occupational exposure limit to estimate the 95th percentile exposure and assign a category. Categories range from 1 to 4, with higher numbers indicating greater risk. The examples demonstrate assigning categories based on whether any exposures exceed limits or guidelines. The rules of thumb are meant to aid quick decision making when only limited monitoring data is available.
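The rule-of-thumb workflow (take the median, scale it as a rough 95th-percentile estimate, compare to the OEL) can be sketched in a few lines. The 3x multiplier and the 10% / 50% / 100%-of-OEL cut-offs below are hypothetical placeholders for illustration, not values taken from the document:

```python
import statistics

def rough_category(samples, oel):
    """Rule-of-thumb exposure category from a handful of measurements.

    The 3x median-to-95th-percentile multiplier and the 10% / 50% / 100%
    of-OEL cut-offs are hypothetical placeholders, not values from the text.
    """
    est_p95 = 3 * statistics.median(samples)  # rough 95th-percentile estimate
    if est_p95 < 0.1 * oel:
        return 1  # well below the limit
    if est_p95 < 0.5 * oel:
        return 2
    if est_p95 < oel:
        return 3
    return 4  # exposures likely exceed the limit

# Hypothetical measurements (mg/m^3) against an OEL of 1.0 mg/m^3
category = rough_category([0.02, 0.05, 0.03], oel=1.0)
```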
The document discusses the importance of using statistics to make accurate exposure risk decisions, as individual exposure measurements can vary significantly. It emphasizes targeting at least 95% confidence that the true 95th percentile exposure is less than the occupational exposure limit. The document also outlines the AIHA exposure rating categories and the types of controls recommended based on where exposures fall within those categories.
This document outlines statistical quality control techniques for evaluating manufacturing and service processes. It discusses measuring and controlling process variation using variables like mean, standard deviation and control charts. Key aspects covered include process capability analysis using metrics like Cpk, acceptance sampling plans to determine quality levels while balancing producer and consumer risks, and operating characteristic curves.
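The Cpk metric mentioned above measures how far the process mean sits from the nearer specification limit, in units of three standard deviations. A minimal sketch with hypothetical data and specification limits:

```python
import statistics

def cpk(data, lsl, usl):
    """Process capability index: distance from the process mean to the
    nearer specification limit, in units of three standard deviations."""
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)  # sample standard deviation
    return min(usl - mu, mu - lsl) / (3 * sigma)

# Hypothetical process measurements with spec limits 9.0-11.0
capability = cpk([9.8, 10.1, 10.0, 9.9, 10.2, 10.0], lsl=9.0, usl=11.0)
```

A Cpk well above 1.33 is conventionally read as a capable process; values near or below 1 signal that the process spread crowds the specification limits.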
This document provides an introduction to business intelligence and data analytics. It discusses key concepts such as data sources, data warehouses, data marts, data mining, and data analytics. It also covers topics like univariate analysis, measures of dispersion, heterogeneity measures, confidence intervals, cross validation, and ROC curves. The document aims to introduce fundamental techniques and metrics used in business intelligence and data mining.
M3_Statistics foundations for business analysts_Presentation.pdf
This document provides an overview of key probability concepts including sample space, events, addition law, probability distributions, discrete vs continuous random variables, and common probability distributions such as binomial, Poisson, uniform, normal and exponential. Examples are provided to illustrate concepts such as calculating probabilities and determining parameters of different distributions. The document would help introduce someone to fundamental probability topics.
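Two of the distributions named above, binomial and Poisson, have simple closed-form probability mass functions that can be evaluated directly:

```python
from math import comb, exp, factorial

def binomial_pmf(k, n, p):
    """P(X = k) for a binomial(n, p) random variable."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    """P(X = k) for a Poisson random variable with mean lam."""
    return exp(-lam) * lam**k / factorial(k)

# Probability of exactly 2 heads in 4 fair coin flips
p_heads = binomial_pmf(2, 4, 0.5)  # 0.375
```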
Eugm 2012 pritchett - application of adaptive sample size re-estimation in ...
This document outlines the application of adaptive sample size re-estimation in event-driven confirmatory clinical trials. It provides background on learning versus confirmatory trials and discusses a case study using change in albuminuria to predict cardiovascular and renal outcomes. The document then describes an adaptive design that would re-estimate the sample size at an interim analysis if the conditional power fell within a "promising zone", in order to ensure adequate power to detect an effect while avoiding unnecessary sample size increases. Simulations are used to evaluate different choices of conditional power boundaries and their impact on operating characteristics.
This document provides 6 examples of exposure assessment data analyzed using the IHDA-AIHA and Expostats statistical tools. For each example, the tools provide parameter estimates, goodness of fit plots, and recommendations on assigning an exposure control category and certainty level based on where the sample 95th percentile and 95% UCL fall relative to the occupational exposure limit. The examples demonstrate how the tools can be used to interpret industrial hygiene monitoring data and make risk-based decisions.
This document provides examples of exposure assessment data analyzed using the IHDA-AIHA and Expostats statistical tools. For each example, it shows the sample monitoring results, parameter estimates, goodness-of-fit plots from both tools, and the final exposure rating, certainty level, and acceptability decision based on the Exposure Control Categories. It demonstrates how both tools can be used to evaluate the data, compare the results, and inform decisions about exposure controls and follow-up actions.
This document discusses various statistical tools used in decision making, including regression analysis, confidence intervals, comparison tests, and analysis of variance. It provides examples of how regression analysis can be used to determine correlations and unknown parameters. It also explains how confidence intervals are calculated and used to determine how reliable a sample statistic is in estimating an unknown population parameter. Comparison tests are outlined as a method to determine if one process or supplier is better than another.
The learning outcomes of this topic are:
- Recognize the terms sample statistic and population parameter
- Use confidence intervals to indicate the reliability of estimates
- Know when approximate large sample or exact confidence intervals are appropriate
This topic will cover:
- Sampling distributions
- Point estimates and confidence intervals
- Introduction to hypothesis testing
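The confidence-interval idea in this outline can be sketched as an approximate large-sample interval; an exact small-sample interval would substitute Student's t for z:

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

def z_confidence_interval(sample, confidence=0.95):
    """Approximate large-sample confidence interval for a population mean.
    For small samples an exact interval uses Student's t in place of z."""
    n = len(sample)
    m = mean(sample)
    se = stdev(sample) / sqrt(n)                    # estimated standard error
    z = NormalDist().inv_cdf((1 + confidence) / 2)  # ~1.96 for 95%
    return (m - z * se, m + z * se)

# Hypothetical sample of eight measurements
low, high = z_confidence_interval([9, 10, 11, 10, 9, 11, 10, 10])
```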
Six Sigma aims to reduce defects and variation in processes to improve customer satisfaction and increase profits. It originated to improve manufacturing but was later applied more broadly. The Six Sigma process involves Define, Measure, Analyze, Improve, and Control (DMAIC) to first identify issues and their causes, test solutions, and ensure gains are sustained. Key aspects include defining critical metrics, measuring current performance, analyzing processes statistically to determine root causes of defects, improving processes by addressing these causes, and controlling processes ongoing to maintain results.
This document provides an overview of different types of variables and methods for summarizing clinical data, including descriptive statistics. It discusses categorical variables like gender and ordinal variables like disease staging. For continuous variables it explains measures of central tendency like mean, median and mode, and measures of variation like range, standard deviation, and interquartile range. Graphs for summarizing univariate data are also covered, such as bar charts for categorical variables and histograms and box plots for continuous variables.
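The central-tendency and variation measures listed above map directly onto Python's standard library; the data values here are hypothetical:

```python
import statistics

data = [2, 4, 4, 5, 7, 9, 12]  # hypothetical continuous measurements

# Measures of central tendency
center = {
    "mean": statistics.mean(data),
    "median": statistics.median(data),
    "mode": statistics.mode(data),
}

# Measures of variation
q1, q2, q3 = statistics.quantiles(data, n=4)  # quartiles
spread = {
    "range": max(data) - min(data),
    "stdev": statistics.stdev(data),
    "IQR": q3 - q1,  # interquartile range
}
```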
Software Defect Repair Times: A Multiplicative Model
The document discusses modeling software defect repair times using a multiplicative lognormal model. It proposes that seven hypothetical factors affecting repair time, such as bug severity, difficulty, and engineer skills, could combine in a multiplicative way to generate the observed lognormal distribution of repair times in software defect data. The document compares fitting exponential, lognormal and Laplace transform of lognormal distributions to defect repair time data from nine software product families, finding the Laplace transform of lognormal model provides the best fit. It discusses implications for management, such as targeting training and tools to reduce variability in factors affecting repair of high severity defects.
Six Sigma Methods and Formulas for Successful Quality Management
This paper covers the five phases of Six Sigma: Define, Measure, Analyze, Improve, and Control. The methods used in each phase are discussed in detail, along with the various tests used in the Analyze phase. Six Sigma can be implemented in an organization by applying the methods and formulas of each phase with the help of the statistical software Minitab 18.
This document provides an introduction to Six Sigma, including:
1) It describes Six Sigma as a data-driven methodology used for process improvement and achieving high quality standards.
2) It explains key Six Sigma concepts like the DMAIC model, sigma levels, variation, the normal distribution, and the Pareto principle.
3) It discusses how to create a Pareto chart in Excel to identify the most impactful causes of problems based on frequency of occurrence. Creating lower level charts can help identify root causes.
This document describes a study to improve rice milling quality using a design of experiments approach. The study tested different settings for roller speed and clearance in rice whitening machines. It found that setting the roller speed to 1050 rpm and clearance to 1.5 times the grain size maximized head rice yield. This combination produced a head rice yield of 67% with low variation, an improvement over the previous yield of 56% from inconsistent manual adjustments. The optimized settings reduced costs by eliminating shutdowns during setup and lowering defect rates, saving an estimated 20.45% of total processing costs per run.
This document discusses process validation for injection molding and provides guidance on key challenges and statistical methods to address them. It covers establishing a process window through design of experiments (DOE) to understand significant variables. Measurement system analysis (MSA) is discussed to ensure reliable measurements. The document emphasizes using adequate sample sizes and applying statistical confidence levels when assessing process capability to minimize risk. The overarching message is to consider product and patient risk, focus on intended outcomes, and avoid jumping to conclusions without proper statistical analysis.
Quality and capability handout
The document outlines key concepts in quality management and Six Sigma methodology. It discusses definitions of quality, total quality management (TQM), and Six Sigma. Six Sigma aims to reduce defects through eliminating variation and achieving near zero defect levels. It uses a Define-Measure-Analyze-Improve-Control (DMAIC) methodology. Statistical process control charts and process capability indices are also introduced to measure quality performance. An example of Mumbai's successful lunch delivery system achieving over 5-sigma quality levels is provided.
The normal distribution is a symmetrical probability distribution in which most results lie near the middle and few in the tails on either side. It is bell-shaped and is described entirely by its mean and standard deviation.
Here are the steps to convert each score to a z-score:
For the history test:
Z = (X - Mean) / Standard Deviation
Z = (78 - 79) / 6
Z = -0.167
For the math test:
Z = (X - Mean) / Standard Deviation
Z = (82 - 84) / 5
Z = -0.4
So the z-score for the history test is -0.167 and the z-score for the math test is -0.4.
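The arithmetic above can be checked in a couple of lines; note that (82 - 84) / 5 evaluates to -0.4:

```python
def z_score(x, mean, sd):
    """Number of standard deviations by which x differs from the mean."""
    return (x - mean) / sd

history = z_score(78, 79, 6)    # -0.167 (rounded)
math_test = z_score(82, 84, 5)  # -0.4
```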
This document provides details about a course on random variables and stochastic processes. It includes:
- An overview of the course content which will cover probability theory, random variables, distributions, and stochastic processes.
- Information about assignments, quizzes, grading policy, textbooks, and the instructor's office hours.
- Examples and explanations of key concepts from probability theory that will be covered, including sample spaces, probability values, events, and complements of events. Applications to games of chance, software errors, and power plant operations are discussed.
- The goal of developing mathematical tools to analyze and characterize random signals and stochastic processes is stated.
This document provides an overview of statistical process control (SPC). It discusses key SPC concepts including:
1) SPC focuses on detecting and eliminating abnormal variations (assignable causes) to achieve consistent quality.
2) SPC requires knowledge of basic statistics, variation, histograms, process capability, and control charts. Control charts are used to monitor a process and detect when assignable causes result in variations outside the natural limits.
3) A histogram provides a visual representation of a process and can indicate if a process is capable and centered on the target, or if assignable causes are present.
When fitting insurance loss data to a distribution, the parameters that provide a good overall fit often understate the density in the tail. This method splits the distribution into two portions and uses a Pareto distribution to fit the tail.
Presented at the CAS Spring Meeting in Seattle, May 2016.
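A spliced fit of this kind can be sketched by estimating the Pareto shape from the exceedances over a chosen splice threshold; the threshold and loss amounts below are hypothetical, not values from the talk:

```python
import math

def pareto_tail_alpha(losses, threshold):
    """Maximum-likelihood Pareto shape estimate for the losses that
    exceed the splice threshold (the tail portion of the fit)."""
    tail = [x for x in losses if x > threshold]
    return len(tail) / sum(math.log(x / threshold) for x in tail)

# Hypothetical losses; the body below 100 would be fitted separately,
# with the Pareto used only above the splice point.
alpha = pareto_tail_alpha([12, 35, 60, 80, 150, 220, 400, 900], threshold=100)
```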
This document discusses various topics related to report writing including findings, conclusions, recommendations, types of reports, report sections, and explores common myths about reports. It provides examples of different sections within reports including an executive summary, company overview, factors for analysis and methodology. The summaries focus on conveying the high-level purpose or content of the different sections while keeping the summary brief.
4. Update on non-invasive prenatal testing
This document discusses screening performance for trisomies 21, 18 and 13 using combined tests and NIPT, as well as expected choice behavior under current and proposed screening pathways. Key points include:
- Combined testing detects around 2.2% of trisomy 21 cases with a false positive rate of 0.4%
- Validation studies show combined testing detects 62-72% of trisomies depending on risk cut-off
- Modeling suggests offering NIPT would result in fewer invasive tests for unaffected pregnancies but more overall testing
- Positive predictive values for NIPT are high, especially for those in higher risk groups
- Changes to NIPT cut-offs could balance detecting more cases against fewer follow-up tests
Effects of A Simulated Power Cut in AMS on Milk Yield Valued by Statistics Model
A statistics model was developed to determine the effects of a simulated power cut of an Automatic Milking System on milk output. Measurable and relevant factors, such as power cuts, milk yield, lactation days, average two-day digestion and rumination, and time, were considered in the calculation tool.
Estimating sample size through simulations
Determining sample size is a critical step in designing an experiment. The sample size for most statistical models can be easily calculated with the POWER procedure; however, PROC POWER cannot handle complicated statistical models. This paper reviews a more general method of estimating sample size through simulation using SAS® software. The simulation approach applies to simple as well as more complex statistical designs.
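The simulation idea generalizes beyond SAS. Here is a minimal Monte-Carlo power sketch for a two-sample z-test with known sigma (a generic stand-in, not the paper's SAS code):

```python
import math
import random

def estimated_power(n, delta, sigma=1.0, z_crit=1.96, sims=2000, seed=42):
    """Monte-Carlo power estimate for a two-sample z-test of a mean
    difference delta with n observations per arm (sigma known)."""
    rng = random.Random(seed)
    se = sigma * math.sqrt(2.0 / n)  # std. error of the difference in means
    hits = 0
    for _ in range(sims):
        a = [rng.gauss(0.0, sigma) for _ in range(n)]
        b = [rng.gauss(delta, sigma) for _ in range(n)]
        z = (sum(b) / n - sum(a) / n) / se
        if abs(z) > z_crit:
            hits += 1
    return hits / sims

def smallest_n(delta, target=0.8):
    """Smallest per-arm sample size whose simulated power reaches target."""
    n = 2
    while estimated_power(n, delta) < target:
        n += 1
    return n
```

For a one-standard-deviation effect this search lands near n = 16 per arm, which matches the analytic two-sample formula up to Monte-Carlo noise.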
This document provides an overview of statistical process control (SPC) concepts including control charts, process capability, and applying SPC to services. It discusses control charts for attributes like p-charts and c-charts and control charts for variables like x-bar charts and R-charts. It also covers determining control limits, identifying patterns in control charts, and using Excel for SPC.
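The x-bar control limits discussed above follow the standard grand-mean ± A2 × R-bar construction from SPC tables; the subgroup data below are hypothetical:

```python
# A2 factors for x-bar charts by subgroup size (standard SPC tables)
A2 = {2: 1.880, 3: 1.023, 4: 0.729, 5: 0.577}

def xbar_limits(subgroups):
    """Lower control limit, center line, and upper control limit for an
    x-bar chart, from the grand mean and average subgroup range."""
    n = len(subgroups[0])
    means = [sum(s) / n for s in subgroups]
    ranges = [max(s) - min(s) for s in subgroups]
    grand_mean = sum(means) / len(means)
    rbar = sum(ranges) / len(ranges)
    return grand_mean - A2[n] * rbar, grand_mean, grand_mean + A2[n] * rbar

# Hypothetical subgroups of four measurements each
lcl, cl, ucl = xbar_limits([[10.1, 9.9, 10.0, 10.2],
                            [9.8, 10.0, 10.1, 9.9],
                            [10.0, 10.2, 9.9, 10.1]])
```

Points falling outside (lcl, ucl) signal assignable causes worth investigating, per the SPC summary above.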
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA in new pavement can minimize the carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared with natural aggregate (NA), however, RCA pavement has been the subject of fewer comprehensive studies and sustainability assessments.
Embedded machine learning-based road conditions and driving behavior monitoring
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODEL
As digital technology becomes more deeply embedded in power systems, protecting the communication networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3) is a multi-tiered application layer protocol extensively used in Supervisory Control and Data Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control. Because the interconnection of these networks makes them vulnerable to a variety of cyberattacks, robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation. To address this, the paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion detection in smart grids, combining a Convolutional Neural Network (CNN) with Long Short-Term Memory (LSTM) layers. A recent intrusion detection dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, was used to train and test the model. The experiments show that the CNN-LSTM method outperforms other deep learning classifiers at finding smart grid intrusions, improving accuracy, precision, recall, and F1 score and achieving a high detection accuracy rate of 99.50%.
Literature Review Basics and Understanding Reference Management.pptxDr Ramhari Poudyal
Three-day training on academic research focuses on analytical tools at United Technical College, supported by the University Grant Commission, Nepal. 24-26 May 2024
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to
precisely delineate tumor boundaries from magnetic resonance imaging (MRI)
scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating
the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The
model is rigorously trained and evaluated, exhibiting remarkable performance
metrics, including an impressive global accuracy of 99.286%, a high-class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted
IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of
our proposed model. These findings underscore the model’s competence in precise brain tumor localization, underscoring its potential to revolutionize medical
image analysis and enhance healthcare outcomes. This research paves the way
for future exploration and optimization of advanced CNN models in medical
imaging, emphasizing addressing false positives and resource efficiency.
A SYSTEMATIC RISK ASSESSMENT APPROACH FOR SECURING THE SMART IRRIGATION SYSTEMSIJNSA Journal
The smart irrigation system represents an innovative approach to optimize water usage in agricultural and landscaping practices. The integration of cutting-edge technologies, including sensors, actuators, and data analysis, empowers this system to provide accurate monitoring and control of irrigation processes by leveraging real-time environmental conditions. The main objective of a smart irrigation system is to optimize water efficiency, minimize expenses, and foster the adoption of sustainable water management methods. This paper conducts a systematic risk assessment by exploring the key components/assets and their functionalities in the smart irrigation system. The crucial role of sensors in gathering data on soil moisture, weather patterns, and plant well-being is emphasized in this system. These sensors enable intelligent decision-making in irrigation scheduling and water distribution, leading to enhanced water efficiency and sustainable water management practices. Actuators enable automated control of irrigation devices, ensuring precise and targeted water delivery to plants. Additionally, the paper addresses the potential threat and vulnerabilities associated with smart irrigation systems. It discusses limitations of the system, such as power constraints and computational capabilities, and calculates the potential security risks. The paper suggests possible risk treatment methods for effective secure system operation. In conclusion, the paper emphasizes the significant benefits of implementing smart irrigation systems, including improved water conservation, increased crop yield, and reduced environmental impact. Additionally, based on the security analysis conducted, the paper recommends the implementation of countermeasures and security approaches to address vulnerabilities and ensure the integrity and reliability of the system. 
By incorporating these measures, smart irrigation technology can revolutionize water management practices in agriculture, promoting sustainability, resource efficiency, and safeguarding against potential security threats.
Introduction- e - waste – definition - sources of e-waste– hazardous substances in e-waste - effects of e-waste on environment and human health- need for e-waste management– e-waste handling rules - waste minimization techniques for managing e-waste – recycling of e-waste - disposal treatment methods of e- waste – mechanism of extraction of precious metal from leaching solution-global Scenario of E-waste – E-waste in India- case studies.
International Conference on NLP, Artificial Intelligence, Machine Learning an...gerogepatton
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024Sinan KOZAK
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration with Gradle build cache optimizations. Sinan shares their journey into solving complex build-cache problems that affect Gradle builds. By understanding the challenges and solutions found in our journey, we aim to demonstrate the possibilities for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
2008 BUILDING CONSTRUCTION Illustrated - Ching Chapter 02 The Building.pdf
Video 1B Handout_2023.pptx
1. AIHA VIDEO SERIES:
MAKING ACCURATE EXPOSURE RISK
DECISIONS
Video 1B
Rules of Thumb for Interpreting
Exposure Monitoring Data
1
2. Disclaimer &
Copyright
Although the information contained in this session has been compiled from sources believed to be reliable, the presenter and AIHA®
make no guarantee as to, and assume no responsibility for, the correctness, sufficiency, or completeness of such information.
Since standards and codes vary from one place to another, consult with your local Occupational or Environmental Health and Safety
professional to determine the current state of the art before applying what you learn from this webinar.
AIHA must ensure balance, independence, objectivity, and scientific rigor in its educational events. Instructors are expected to
disclose any significant financial interests or other relationships. The intent of this disclosure is not to prevent an instructor from
presenting, but to provide participants with information to base their own judgments. It remains up to the participant to determine
whether an instructor’s interests or relationships may influence the presentation.
Session presentation material belongs to the presenter with usage rights given to AIHA. This session and associated materials cannot
be reproduced, rebroadcast, or made into derivative works without express written permission.
3. Disclaimer &
Copyright
Handout Information
All AIHA University session handouts are produced by AIHA as submitted by the instructors, and in the instructor-determined order.
AIHA does not change or modify handout content; we may adjust images to improve layout and clarity.
5. QUICK REVIEW . . .
6. Effective and Efficient
Exposure Risk Management
Effective:
Ensure that no worker has
unacceptable exposures
Efficient:
Do it for minimum cost
6
7. What if our exposure assessment is wrong?
If we underestimate the exposure?
• Increased risk to employees
If we overestimate the exposure?
• Unnecessary constraints for employees and production
• Unnecessary expenditures for controls
Well-Designed Exposure Risk
Management Strategy
We Want: We Don’t Want:
Good Data Bad Data
To Be Effective To Not Be Protective
To Be Efficient To Waste Resources
No Biases Biases (High or Low)
Low Uncertainty High Uncertainty
Correct Decisions Wrong Decisions
8. Decision Statistic:
“Strive for at least 95% confidence that the
true 95th percentile is less than the OEL”
[Figure: idealized lognormal exposure profiles. The best-estimate curve and the 95% upper-confidence curve are shown, with the best-estimate 95th percentile, the 95th percentile UCL, and the OEL marked on the concentration axis.]
9. AIHA Exposure Rating and Control Categories
Increase Effectiveness and Efficiency
• Avoid diminishing returns from
“over-refining” exposure estimates
• Streamline Documentation
• Facilitate Qualitative Exposure
Judgements
• Drive consistent follow-up
management and control activities
which lead to consistent risk
management.
9
10. Exposure Risk Decisions: How Accurate Are We?
When We Have Monitoring Data . . .
Sample Results (ppm): 18, 15, 5, 8, 12
** Decision statistic = 95th percentile
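A handful of results like these can be summarized in a few lines of Python (an illustrative sketch, not part of the original slides): the mean and standard deviation of the log-transformed results give the GM, GSD, and a point estimate of the 95th percentile.

```python
import math

# Sample results (ppm) from the slide
results = [18, 15, 5, 8, 12]

logs = [math.log(x) for x in results]
n = len(logs)
mu = sum(logs) / n                                          # mean of ln-transformed data
sd = math.sqrt(sum((y - mu) ** 2 for y in logs) / (n - 1))  # sample SD of ln data

gm = math.exp(mu)                 # geometric mean (estimates the median)
gsd = math.exp(sd)                # geometric standard deviation
x95 = math.exp(mu + 1.645 * sd)   # point estimate of the 95th percentile

print(f"GM = {gm:.1f} ppm, GSD = {gsd:.2f}, estimated 95th percentile = {x95:.1f} ppm")
```

Note that the estimated 95th percentile lands above the highest measured result, which is exactly the upper-tail behavior the next slides discuss.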
11. Judgement Accuracy is Poor If We Don’t Use
Statistical Tools When We Have Monitoring Data
Lack of Familiarity with Properties of the
Upper Tail of Lognormal Distributions
[Figure: lognormal distribution with the 95th percentile marked in the upper tail]
13. Lack of Familiarity with Properties of the
Upper Tail of Lognormal Distributions
[Figure: lognormal distribution with the 95th percentile marked in the upper tail]
• Skewed to the high end
• Unlikely to see a result in the upper 5%ile when the number of samples is low
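The second bullet can be made concrete with a quick calculation (an illustrative sketch, not from the slides): the chance that at least one of n independent samples lands in the upper 5% tail is 1 - 0.95**n, which stays modest when n is small.

```python
# Probability that at least one of n samples falls in the upper 5% tail
# of the exposure distribution: 1 - 0.95**n
for n in (3, 6, 10, 20):
    p = 1 - 0.95 ** n
    print(f"n = {n:2d}: P(any result above the 95th percentile) = {p:.1%}")
```

With only six samples there is roughly a one-in-four chance of observing any result above the true 95th percentile, so small datasets routinely understate the upper tail.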
15. Rules-of-Thumb to Aid Data Interpretation
Given:
• GM = median
• X0.95 = GM × GSD^1.645
… Rules-of-thumb, or guidelines, can be devised for quickly estimating from limited data the range in which the true 95th percentile might lie.
16. Rules-of-Thumb to Aid Data Interpretation
Given:
• GM = median
• X0.95 = GM × GSD^1.645
… Rules-of-thumb, or guidelines, can be devised for quickly estimating from limited data the range in which the true 95th percentile might lie.

GSD | Multiple of GM (median) to Calculate 95%ile
1.5 | 1.95
2.0 | 3.13
2.5 | 4.51
3.0 | 6.09
17. Rules-of-Thumb to Aid Data Interpretation
Given:
• GM = median
• X0.95 = GM × GSD^1.645
… Rules-of-thumb, or guidelines, can be devised for quickly estimating from limited data the range in which the true 95th percentile might lie.

GSD | Multiple of GM (median) to Calculate 95%ile
1.5 | 1.95
2.0 | 3.13
2.5 | 4.51
3.0 | 6.09

[Annotation: multipliers of roughly 2, 4, and 6 span low to high variability]
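The multipliers in the table follow directly from X0.95 = GM × GSD^1.645; a short script (illustrative, not part of the deck) reproduces them:

```python
# Multiple of the GM (median) that gives the 95th percentile: GSD ** 1.645
for gsd in (1.5, 2.0, 2.5, 3.0):
    print(f"GSD = {gsd}: 95%ile = {gsd ** 1.645:.2f} x median")
```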
18. Rules of Thumb to Aid Data Interpretation
• For low n: if any measurement > OEL, then Category 4: 95%ile > OEL
• Determine the median of the data
• Calculate and compare to the OEL:
  2 × Median
  4 × Median
  6 × Median
• Emphasis on 2 × Median if data have little spread
• Emphasis on 6 × Median if data have large spread

Variability | ROT Multiplier
Low | 2
Medium | 4
High | 6

Note: A lower category is not an option if any measurements are in a higher category.
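The procedure above can be sketched as a small function (a hypothetical illustration; the function name is our own, and the 0-4 band cutoffs of 1%, 10%, 50%, and 100% of the OEL follow the AIHA exposure rating scheme described earlier):

```python
def rot_category(data, oel, multiplier=4):
    """Rule-of-thumb exposure category (0-4) from limited monitoring data.

    multiplier: 2 (low spread), 4 (medium), or 6 (high spread).
    Returns (category, approximate 95th percentile).
    """
    def band(x):
        # AIHA exposure rating bands as fractions of the OEL
        frac = x / oel
        if frac > 1.0:
            return 4
        if frac > 0.5:
            return 3
        if frac > 0.1:
            return 2
        if frac > 0.01:
            return 1
        return 0

    data = sorted(data)
    n = len(data)
    median = data[n // 2] if n % 2 else (data[n // 2 - 1] + data[n // 2]) / 2
    x95_approx = multiplier * median
    # A lower category is not an option if any measurement sits in a higher one
    return max(band(x95_approx), max(band(x) for x in data)), x95_approx


print(rot_category([5, 7, 13, 17, 30, 63], 100, 4))   # dataset A from the examples
print(rot_category([5, 8, 9, 33, 37, 109], 100, 4))   # dataset C: one result > OEL
```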
24. Rules-of-Thumb Examples
Data Set | Data (ppm) (OEL=100) | Median | 2x | 4x | 6x | Exposure Category (1–4)
A        | 5, 7, 13, 17, 30, 63 | 15     | 30 | 60 | 90 |
(2x, 4x, and 6x columns approximate X0.95)
One measurement > 50% OEL (in Category 3). This eliminates Categories 0, 1, and 2.
25. Rules-of-Thumb Examples
Data Set | Data (ppm) (OEL=100) | Median | 2x | 4x | 6x | Exposure Category (1–4)
A        | 5, 7, 13, 17, 30, 63 | 15     | 30 | 60 | 90 | 3
(2x, 4x, and 6x columns approximate X0.95)
One measurement > 50% OEL (in Category 3). This eliminates Categories 0, 1, and 2.
35. Rules-of-Thumb Examples
Data Set | Data (ppm) (OEL=100) | Median | 2x | 4x | 6x | Exposure Category (1–4)
A        | 5, 7, 13, 17, 30, 63 | 15     | 30 | 60 | 90 | 3
B        | 6                    | 6      | 12 | 24 | 36 | 2
C        | 5, 8, 9, 33, 37, 109 |        |    |    |    |
(2x, 4x, and 6x columns approximate X0.95)
One measurement > OEL, so the ROT rating is Category 4.
36. Rules-of-Thumb Examples
Data Set | Data (ppm) (OEL=100) | Median | 2x | 4x | 6x | Exposure Category (1–4)
A        | 5, 7, 13, 17, 30, 63 | 15     | 30 | 60 | 90 | 3
B        | 6                    | 6      | 12 | 24 | 36 | 2
C        | 5, 8, 9, 33, 37, 109 |        |    |    |    | 4
(2x, 4x, and 6x columns approximate X0.95)
One measurement > OEL, so the ROT rating is Category 4.
37. Rules-of-Thumb Examples
Data Set | Data (ppm) (OEL=100) | Median | 2x | 4x  | 6x  | Exposure Category (1–4)
A        | 5, 7, 13, 17, 30, 63 | 15     | 30 | 60  | 90  | 3
B        | 6                    | 6      | 12 | 24  | 36  | 2
C        | 5, 8, 9, 33, 37, 109 | 21     | 42 | 84  | 126 | 4
(2x, 4x, and 6x columns approximate X0.95)
One measurement > OEL, so the ROT rating is Category 4.
40. Rules-of-Thumb Examples
Data Set | Data (ppm) (OEL=100)   | Median | 2x  | 4x  | 6x  | Exposure Category (1–4)
A        | 5, 7, 13, 17, 30, 63*  | 15     | 30  | 60  | 90  | 3
B        | 6                      | 6      | 12  | 24  | 36  | 2
C        | 5, 8, 9, 33, 37, 109** | 21     | 42  | 84  | 126 | 4
D        | 3, 5, 12†, 20†         | 8.5    | 17  | 34  | 51  | 2
E        | 78                     | 78     | 156 | 312 | 468 | 4
F        | 1, 3                   | 2      | 4   | 8   | 12  | 1
G        | 17†, 18†, 31†, 45†     | 24.5   | 49  | 98  | 147 | 3
H        | 4, 5, 6, 12†, 14†, 36† | 9      | 18  | 36  | 54  | 2
(2x, 4x, and 6x columns approximate X0.95)
* Dataset A: one measurement > 50% OEL. This eliminates Categories 0, 1, and 2.
** Dataset C: one measurement > OEL, so the ROT rating is Category 4. (The sample 95th percentile is 131.)
† Datasets D, G, and H have measurements > 10% OEL. This eliminates Categories 0 and 1.
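The median and multiple columns in the table can be reproduced programmatically (an illustrative check, not part of the deck):

```python
import statistics

# Example datasets from the slide (ppm); OEL = 100 ppm
datasets = {
    "A": [5, 7, 13, 17, 30, 63],
    "B": [6],
    "C": [5, 8, 9, 33, 37, 109],
    "D": [3, 5, 12, 20],
    "E": [78],
    "F": [1, 3],
    "G": [17, 18, 31, 45],
    "H": [4, 5, 6, 12, 14, 36],
}

for name, data in datasets.items():
    med = statistics.median(data)
    print(f"{name}: median = {med}, 2x = {2 * med}, 4x = {4 * med}, 6x = {6 * med}")
```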
41. Rules of Thumb Comments
• The Rules of Thumb help us understand the implications of the upper percentile decision statistic as opposed to some measure of central tendency.
• The Rules of Thumb counter our human worldview, which tends to think symmetrically and to underestimate the extent to which the 95%ile of a lognormal distribution is likely to exceed the median sample result, even at relatively low variability.
• The Rules of Thumb can be applied to censored datasets (i.e., datasets containing non-detects) when the NDs are all less than the median.
• While the Rules of Thumb do improve judgments, the ultimate answer to improved accuracy is to use statistical tools when we have monitoring data.
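The censored-data point can be illustrated with a toy example (hypothetical values, not from the slides): when all non-detects sit below the middle of the ordered data, the median, and hence the rule-of-thumb multiples, do not depend on the exact ND values.

```python
import statistics

# Hypothetical dataset with two non-detects (detection limit 2 ppm).
# Substituting any value below the median for the NDs leaves the median unchanged.
for nd_substitute in (0.5, 1.0, 2.0):   # e.g. DL/2 or DL substitution
    data = [nd_substitute, nd_substitute, 5, 9, 14]
    print(f"ND -> {nd_substitute}: median = {statistics.median(data)}")
```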
42. Use Statistical Tools!!

[Figure: example lognormal exposure distribution with 95%ile = 1.2 and UTL(95%,95%) = 16 mg/m³ marked on the concentration axis]

Traditional Statistics: AIHA IHSTAT™
[Screenshot: IHSTAT (Industrial Hygiene Statistics) analysis; data description: John Mulhausen; OEL = 5 mg/m³]
Sample data (n = 15): 1.3, 1.8, 1.2, 4.5, 2, 2.1, 5.5, 2.2, 3, 2.4, 2.5, 2.5, 3.5, 2.8, 2.9
Descriptive statistics: min = 1.2, max = 5.5, range = 4.3, mean = 2.680, median = 2.500, s = 1.138, mean of ln data = 0.908, SD of ln data = 0.407, GM = 2.479, GSD = 1.502, % > OEL = 6.667
Test for distribution fit (W test, a = 0.05): lognormal? Yes (W = 0.974). Normal? Yes (W = 0.904).
Lognormal parametric statistics: estimated arithmetic mean (MVUE) = 2.677 (Land's "exact" 95% LCL 2.257, 95% UCL 3.327), 95th percentile = 4.843, upper tolerance limit (95%, 95%) = 7.046, % > OEL = 4.241 (95% LCL 0.855, 95% UCL 15.271)
Normal parametric statistics: mean = 2.680 (t-stat 95% LCL 2.162, 95% UCL 3.198), 95th percentile (Z) = 4.553, upper tolerance limit (95%, 95%) = 5.60, % > OEL = 2.078
[Plots: linear and log-probability plots with least-squares best-fit lines; idealized lognormal distribution with AM, confidence intervals, and 95%ile; sequential data plot]

Bayesian Statistics: IH Data Analyst and Expostats™
[Chart: likelihood that the 95%ile falls into each Exposure Rating Category (<1% OEL, <10% OEL, 10–50%, 50–100%, >100% OEL): 0, 0, 0.087, 0.4, 0.513]
Prior exposure rating (initial qualitative assessment or validated model), categories 0–4: 0.05, 0.2, 0.5, 0.2, 0.05
Likelihood (monitoring results), categories 0–4: 0, 0, 0.06, 0.376, 0.564
Posterior exposure rating (integrated exposure assessment), categories 0–4: 0, 0, 0.225, 0.564, 0.211
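The lognormal statistics shown in the IHSTAT output can be reproduced with a short script (a sketch under stated assumptions: the one-sided tolerance factor K = 2.566 for n = 15 at 95% coverage/95% confidence is taken from standard tolerance-limit tables rather than computed here):

```python
import math

# Sample data from the IHSTAT example (mg/m3); OEL = 5 mg/m3
data = [1.3, 1.8, 1.2, 4.5, 2, 2.1, 5.5, 2.2, 3,
        2.4, 2.5, 2.5, 3.5, 2.8, 2.9]
oel = 5.0

logs = [math.log(x) for x in data]
n = len(logs)
mu = sum(logs) / n                                          # mean of ln data
sd = math.sqrt(sum((y - mu) ** 2 for y in logs) / (n - 1))  # SD of ln data

gm = math.exp(mu)                 # geometric mean
gsd = math.exp(sd)                # geometric standard deviation
x95 = math.exp(mu + 1.645 * sd)   # 95th percentile point estimate

K = 2.566                         # one-sided tolerance factor, n=15, 95%/95% (from tables)
utl = math.exp(mu + K * sd)       # upper tolerance limit (95%, 95%)

print(f"GM = {gm:.3f}, GSD = {gsd:.3f}")
print(f"95th percentile = {x95:.3f}, UTL(95%,95%) = {utl:.3f}, OEL = {oel}")
```

Because the UTL (about 7) exceeds the OEL of 5 even though the 95th percentile point estimate (about 4.8) is below it, we cannot claim 95% confidence that the true 95th percentile is below the OEL, which is exactly the situation this slide uses to motivate statistical tools.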
44. Free IH Statistical Analysis Tools
• AIHA IHSTAT - Excel application that calculates various exposure
statistics, performs goodness of fit tests, and graphs exposure data.
https://www.aiha.org/public-resources/consumer-resources/topics-of-interest/ih-apps-tools
• Bayesian Decision Analysis – IH Data Analyst Student Edition
Computer Tool
https://www.easinc.co/
• Expostats Bayesian IH Data Analyst Tool
http://expostats.ca/site/en/index.html
44
FREE
TOOLS!!
45. Free Exposure Assessment Tools
• IH/OEHS Exposure Scenario Tool (IHEST)™
Excel tool to aid Basic Characterization
• IHSkinPerm™
Excel tool for estimating dermal absorption.
• Basic Exposure Assessment and Sampling Spreadsheet™
Excel template for entering EA/BC and sampling data
• The Qualitative Exposure Assessment Checklist (The Checklist)™
• IHMOD 2.0™
Excel-based mathematical modeling spreadsheet
https://www.aiha.org/public-resources/consumer-resources/topics-of-interest/ih-apps-tools
45
MORE
FREE
TOOLS!!
46. Exposure Decision Analysis:
Competency Assessment
Exposure Decision Criteria
• Allowable Exceedance
• Needed Confidence
• Use of Exposure Categories
Traditional Industrial Hygiene Stats
• Properties of a lognormal distribution
• Upper percentile estimate calculation & interpretation
• Tolerance Limit calculation & interpretation
Bayesian Decision Analysis (BDA)
• Properties of a lognormal distribution
• Upper percentile estimate calculation & interpretation
• Tolerance Limit calculation & interpretation
Data and Similar Exposure Groups (SEGs)
• Rules for combining data
• Indications that a SEG may need refining
Decision Heuristics and Human Biases
• Common sources of bias in data interpretation and
exposure assessment
• How to avoid bias in data interpretation
Exposure Data Interpretation
• Most likely exposure category given data
• Meet the certainty requirement given data
Techniques for Improving Professional Judgments
• Feedback loops (quantitative judgment > monitoring >
qualitative judgment)
• Group judgment sessions
• Documentation of rationale
• Break decisions into aggregate parts (Modeling)
46
47. Learn More:
aiha.org
• Papers:
• Logan, P., G. Ramachandran, J. Mulhausen, S. Banerjee, and P. Hewett: "Desktop Study of Occupational Exposure Judgments: Do Education and Experience Influence Accuracy?" Journal of Occupational and Environmental Hygiene, 8:12, 746–758, 2011.
• Logan, P., G. Ramachandran, J. Mulhausen, and P. Hewett: "Occupational Exposure Decisions: Can Limited Data Interpretation Training Help Improve Accuracy?" Annals of Occupational Hygiene, Vol. 53, No. 4, pp. 311–324, 2009.
• Vadali, M., G. Ramachandran, J. Mulhausen, and S. Banerjee: "Effect of Training on Exposure Judgment Accuracy of Industrial Hygienists." Journal of Occupational and Environmental Hygiene, 9: 242–256, 2012.
• Arnold, S., M. Stenzel, D. Drolet, and G. Ramachandran: Journal of Occupational and Environmental Hygiene, 13, 159–168, 2016.
• Hewett, P., P. Logan, J. Mulhausen, G. Ramachandran, and S. Banerjee: "Rating Exposure Control Using Bayesian Decision Analysis." Journal of Occupational and Environmental Hygiene, 3: 568–581, 2006.
• Lavoué, J., L. Joseph, P. Knott, H. Davies, F. Labrèche, F. Clerc, G. Mater, and T. Kirkham: "Expostats: A Bayesian Toolkit to Aid the Interpretation of Occupational Exposure Measurements." Annals of Work Exposures and Health, Volume 63, Issue 3, April 2019, Pages 267–279.
48. 48
• Books:
• A Strategy for Assessing and Managing Occupational Exposures. 4th Ed.
AIHA Press. 2015.
• Opinion:
• Mulhausen, J. “Faulty Judgment” President’s Message. The Synergist.
(November 2021).
• Mulhausen, J. “How to Improve Exposure Judgments” President’s
Message. The Synergist. (December 2021).
• Videos:
• Mulhausen, J. “Top 10 Imperatives for the AIHA Exposure Risk
Management Process.”
Free from AIHA at https://online-
ams.aiha.org/amsssa/ecssashop.show_product_detail?p_mode=detail&p
_product_serno=2650&p_cust_id=&p_order_serno=&p_promo_cd=&p_p
rice_cd=&p_category_id=&p_session_serno=72069269&p_trans_ty=
Learn More:
49. AIHA VIDEO SERIES:
MAKING ACCURATE EXPOSURE RISK DECISIONS
Video 1A: Exposure Variability and the Importance of Using Statistics to
Improve Judgements
Video 1B: Rules of Thumb for Interpreting Exposure Monitoring Data
Video 2: Introduction to Bayesian Statistical Approaches and Their
Advantages
Video 3A: Free Bayesian Statistical Tools: IHDA Student Edition
Video 3B: Free Bayesian Statistical Tools: Expostats
Video 4: Implementing AIHA Strategy Using Statistical Tools: Examples
49
Join us for the next video in the series . . .
50. AIHA VIDEO SERIES:
MAKING ACCURATE EXPOSURE RISK
DECISIONS
Video 1B
Rules of Thumb for Interpreting
Exposure Monitoring Data
50
Editor's Notes
Please refer to your handout for the next 2 slides, containing information on AIHA’s Disclaimer and Copyright information.
[next]