Approaches to gathering business requirements, defining problem statements, business requirements for use case development, and assets for the development of IoT solutions
This document discusses three main methods for solving operational research (OR) models: analytical methods, iterative methods, and the Monte-Carlo method. Analytical methods use tools like calculus and graphs to find closed-form solutions. Iterative methods are used when analytical methods are too complex; they start with a trial solution and iteratively improve it until optimal. The Monte-Carlo method experiments on a model by inserting random variable values and observing their effects on the criterion over time.
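The Monte-Carlo idea above — inserting random values for an uncertain variable and observing the effect on the criterion — can be sketched in a few lines. This is a minimal illustration with made-up numbers (stock level, cost, price, and the demand range are all assumptions, not from the slides):

```python
import random

def monte_carlo_profit(n_trials=100_000, seed=42):
    """Estimate expected daily profit under uncertain demand.
    Stock level, cost, and price are illustrative assumptions."""
    rng = random.Random(seed)
    stock, unit_cost, unit_price = 50, 3.0, 5.0
    total = 0.0
    for _ in range(n_trials):
        demand = rng.randint(20, 80)   # insert a random value for the variable
        sold = min(demand, stock)      # model logic: cannot sell more than stocked
        total += sold * unit_price - stock * unit_cost
    return total / n_trials            # observed effect on the criterion (mean profit)

print(round(monte_carlo_profit(), 2))
```

Repeating the experiment many times makes the sample mean converge on the expected value of the criterion, which is exactly the behaviour the method relies on.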
This document discusses different types of simulation models. It describes:
1) Static vs dynamic models, with dynamic models changing over time and static models as snapshots.
2) Deterministic vs stochastic vs chaotic models, depending on how predictable the behavior is.
3) Discrete vs continuous models, with discrete changing at countable points and continuous changing continuously.
4) Aggregate vs individual models, with aggregate models taking a more distant view and individual models a closer view of decisions.
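The deterministic-versus-stochastic distinction in the list above can be made concrete by running the same model both ways. The tank-drain model and its flow numbers here are illustrative assumptions:

```python
import random

def tank_level(steps=10, stochastic=False, seed=3):
    """Drain a tank for a number of steps, with or without random noise.
    Flow rates and the noise range are illustrative."""
    rng = random.Random(seed)
    level = 100.0
    for _ in range(steps):
        outflow = 5.0
        if stochastic:
            outflow += rng.uniform(-2.0, 2.0)   # random disturbance each step
        level -= outflow
    return level

print(tank_level())                 # deterministic: the same answer every run
print(tank_level(stochastic=True))  # stochastic: varies with the random draws
```

The deterministic run is fully predictable from the inputs; the stochastic run must be replicated many times and summarised statistically.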
Operational research (OR) is a discipline that deals with applying advanced analytical methods to help make better decisions. OR uses scientific methods and especially mathematical modeling to study complex problems. It is considered a subfield of applied mathematics. Some key applications of OR include scheduling, facility planning, planning and forecasting, credit scoring, marketing, and defense planning. OR takes a systems approach, uses interdisciplinary teams, and aims to optimize objectives subject to constraints through quantitative modeling and analysis.
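A classic instance of "optimizing objectives subject to constraints" is a two-variable linear program. Because the optimum of a linear program lies at a corner of the feasible region, a small problem can be solved by enumerating the intersections of the constraint boundaries. The objective and constraints below are invented for illustration:

```python
from itertools import combinations

# Maximize 3x + 5y subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.
# Each constraint is stored as a*x + b*y <= c (non-negativity included).
constraints = [(1, 1, 4), (1, 3, 6), (-1, 0, 0), (0, -1, 0)]

def objective(x, y):
    return 3 * x + 5 * y

def vertices(cons):
    """Intersect every pair of constraint boundary lines; keep feasible points."""
    for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue                      # parallel lines: no unique intersection
        x = (c1 * b2 - c2 * b1) / det     # Cramer's rule
        y = (a1 * c2 - a2 * c1) / det
        if all(a * x + b * y <= c + 1e-9 for a, b, c in cons):
            yield x, y

best = max(vertices(constraints), key=lambda p: objective(*p))
print(best, objective(*best))             # the optimum sits at a feasible corner
```

Real OR work uses a solver (e.g. the simplex method) rather than enumeration, but the structure — objective, constraints, optimal vertex — is the same.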
The document discusses partial least squares structural equation modeling (PLS-SEM). It provides an overview of key concepts in PLS-SEM, including the differences between PLS-SEM and covariance-based SEM, the objectives and assumptions of each method, and guidelines for when each method is most appropriate. The document also outlines the stages of applying PLS-SEM, including specifying measurement and structural models, model estimation, and evaluating results. Examples are provided to illustrate reflective versus formative measurement models.
CB-SEM assumes normally distributed data which is rarely the case in social sciences research, while PLS-SEM is non-parametric and works well with non-normal distributions. The example showed that CB-SEM resulted in losing many indicator variables to achieve good model fit, whereas PLS-SEM retained more indicators to support both measuring and developing the structural theory. PLS-SEM is preferable when data is non-normal, though CB-SEM can work if the theory and measurement are well established.
Classification of mathematical modeling
Classification based on Variation of Independent Variables
Static Model
Dynamic Model
Rigid or Deterministic Models
Stochastic or Probabilistic Models
Comparison Between Rigid and Stochastic Models
1. The document discusses simulation as a technique used to study and analyze the behavior of actual or theoretical systems by creating computer-based models. It is used when directly studying real systems is not possible or practical.
2. Simulation models can be static or dynamic, discrete or continuous, and deterministic or stochastic. They are composed of mathematical and logical relationships that are analyzed using numerical rather than analytical methods.
3. Simulation has many applications including manufacturing and materials handling systems. It allows testing designs and systems virtually before implementing them in the real world. It provides insights into how systems work and which variables most impact performance.
1. The document discusses simulation as a technique used to study and analyze the behavior of systems over time. Simulation involves creating a computer-based model of a real-world system to draw conclusions about how it operates.
2. Simulation can be used for task training, decision-making, scientific research, and predicting the behavior of natural systems. It allows testing alternatives without committing resources.
3. The document provides examples of how simulation can be used to model the operations of cooperative societies and banks to help students better understand commercial mathematics topics.
This document provides an introduction to modeling and simulation. It discusses the goals of modeling, different types of models, and an overview of the simulation process. The key steps in simulation include defining an achievable goal, ensuring appropriate skills and involvement from end users, choosing simulation tools, validating the model, and analyzing statistical output. Pitfalls to avoid include lack of clear objectives, inappropriate model detail, and failure to validate models or account for randomness.
This document discusses input modeling for simulation and outlines 4 steps:
1) Collect data from the real system or use expert opinion if data is unavailable
2) Identify a probability distribution to represent the input process
3) Choose parameters for the distribution family by estimating from the data
4) Evaluate the chosen distribution through goodness of fit tests or create an empirical distribution if none is found
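The four steps above can be walked through end to end in a short script. This sketch assumes hypothetical service-time data and an exponential candidate distribution; the seed, sample size, and true rate are all invented:

```python
import math
import random

rng = random.Random(0)
data = sorted(rng.expovariate(0.5) for _ in range(200))  # step 1: "collected" data

rate = len(data) / sum(data)          # step 3: maximum-likelihood rate estimate

def cdf(t):                           # step 2: candidate exponential distribution
    return 1.0 - math.exp(-rate * t)

# Step 4: Kolmogorov-Smirnov statistic -- the largest gap between the
# empirical CDF of the sample and the fitted CDF.
n = len(data)
ks = max(max(abs((i + 1) / n - cdf(x)), abs(i / n - cdf(x)))
         for i, x in enumerate(data))
print(f"fitted rate = {rate:.3f}, KS statistic = {ks:.3f}")
```

A small KS statistic suggests the fitted family is acceptable; a large one sends the analyst back to step 2 or to an empirical distribution.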
This document presents information on sensitivity analysis techniques. It discusses how sensitivity analysis is used to determine how changes in independent variables affect a dependent variable under a given set of assumptions. It also describes how sensitivity analysis can predict outcomes when a situation departs from key predictions. Various sensitivity analysis methods are outlined, including correlation and screening techniques, regression analysis, and analysis of oscillations by measuring behavior patterns such as period and amplitude. An example of applying sensitivity analysis to a simple supply chain model is also provided.
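The simplest form of the idea is one-at-a-time sensitivity: vary a single input, hold the rest fixed, and watch the output. The bond-pricing model below (face value, coupon, maturity) uses illustrative numbers; it is a sketch, not the method from the slides:

```python
def bond_price(face=1000.0, coupon=0.05, rate=0.04, years=10):
    """Present value of a coupon bond; all parameters are illustrative."""
    coupons = sum(face * coupon / (1 + rate) ** t for t in range(1, years + 1))
    return coupons + face / (1 + rate) ** years

# One-at-a-time sensitivity: vary the discount rate, hold everything else fixed
base = bond_price()
for r in (0.03, 0.04, 0.05):
    p = bond_price(rate=r)
    print(f"rate={r:.0%}  price={p:.2f}  change={p - base:+.2f}")
```

The table of changes makes the direction and rough magnitude of the dependence visible: here, price falls as the discount rate rises.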
This document discusses systems analysis and simulation. It defines a system as a collection of elements that work together to achieve a goal. There are two main types of systems: discrete systems where state variables change at separate points in time, and continuous systems where state variables change continuously over time. A model represents a system in order to study it, as experimenting directly with the real system may not be possible or wise. Simulation models can be static or dynamic, deterministic or stochastic, discrete or continuous. Discrete-event simulation specifically models systems as they progress through time as a series of instantaneous events.
This document provides an overview of a project report on simulating a single server queuing problem. The report includes an introduction to operations research, simulation, and the queuing problem. It discusses the research methodology, which involves defining the problem, developing a simulation model, validating the model, analyzing the data, and presenting findings and recommendations. The goal is to use simulation to provide optimal solutions to the queuing problem under study.
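A single-server queue of the kind studied in such a report can be simulated with Lindley's recurrence: each customer's delay is the previous customer's delay plus the previous service time, minus the gap between arrivals, floored at zero. The exponential rates here are illustrative assumptions (an M/M/1-style setup), not figures from the report:

```python
import random

def avg_wait(arrival_rate=0.5, service_rate=1.0, n_customers=50_000, seed=1):
    """Mean queueing delay for a single-server FIFO line via Lindley's
    recurrence; rates are illustrative."""
    rng = random.Random(seed)
    wait, total = 0.0, 0.0
    for _ in range(n_customers):
        gap = rng.expovariate(arrival_rate)      # time since previous arrival
        service = rng.expovariate(service_rate)  # previous customer's service time
        wait = max(0.0, wait + service - gap)    # this customer's delay in queue
        total += wait
    return total / n_customers

# For these rates, M/M/1 theory predicts a mean delay Wq = 1.0
print(round(avg_wait(), 2))
```

Agreement between the simulated mean and the queueing-theory value is a standard way to validate such a model before using it on variants that theory cannot handle.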
Modeling, analysis, and control of dynamic systems
This document is the preface to the second edition of the textbook "Modeling, Analysis, and Control of Dynamic Systems" by William J. Palm III. It discusses the structure and content of the textbook, which provides an introduction to modeling, analysis, and control of dynamic systems. The textbook covers both classical and modern approaches to systems and control theory and includes examples from various engineering domains. It also introduces digital analysis and control without using the z-transform.
This document provides an overview of various operations research (OR) models, including: linear programming, network flow programming, integer programming, nonlinear programming, dynamic programming, stochastic programming, combinatorial optimization, stochastic processes, discrete time Markov chains, continuous time Markov chains, queuing, and simulation. It describes the basic components and applications of each model type at a high level.
This document provides an introduction to system dynamics. It defines a system as a collection of interacting components with defined boundaries and inputs/outputs. Dynamic systems change over time even if inputs are constant, while static systems only depend on current inputs. Common dynamic systems include mechanical, electrical, thermal, and fluid systems. System dynamics involves defining a system, creating a mathematical model, simulating the model's behavior, and making recommendations. Models allow studying systems without experimenting on real systems. Simulation uses models to compute how systems react to inputs over time.
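The loop described above — define a system, write a mathematical model, simulate its behavior over time — can be sketched with a single stock-and-flow model integrated by Euler steps. The inventory scenario and all its numbers are invented for illustration:

```python
def simulate_inventory(weeks=20, dt=1.0):
    """Stock-and-flow sketch: inventory adjusts toward a desired level.
    Names and numbers are illustrative assumptions."""
    inventory, desired, adjust_time = 20.0, 100.0, 4.0
    history = []
    for _ in range(weeks):
        inflow = (desired - inventory) / adjust_time   # corrective flow
        inventory += inflow * dt                       # Euler integration step
        history.append(inventory)
    return history

trace = simulate_inventory()
print([round(v, 1) for v in trace[:5]])   # early part of the trajectory
```

Even though the inputs are constant, the state keeps changing over time — the defining property of a dynamic system noted above; the stock rises quickly at first and then levels off toward its goal.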
Operations research (OR) is an interdisciplinary approach for decision-making that uses mathematical modeling and analytical methods to arrive at optimal or near-optimal solutions to complex decision problems. OR was first applied during World War II to solve logistics and operations problems. It involves breaking problems down into components, representing them mathematically, and using analytical methods like linear programming to solve problems. The goal of OR is to determine the best solution to a problem by quantifying variables and using mathematical techniques and computer modeling.
This document outlines a simulation study conducted by Nora ALHarbi and Enaam ALOtaibi on blood donation drives. It includes an introduction to simulation, definitions, types of simulation, and the simulation process. It then discusses how the Red Cross used simulation to analyze their blood donation process and identify policies to reduce donor wait times. Alternative arrival patterns and policy options like increasing beds were tested. The simulation analysis improved performance and donor satisfaction at Red Cross blood drives.
This document discusses correlation, trend analysis, and different correlation procedures. It defines correlation as a statistical relationship between variables. Bivariate correlations measure the relationship between two variables, while partial correlations control for additional variables. Distance procedures calculate similarity or dissimilarity statistics between variables or cases. Trend analysis describes historical patterns and allows projection of past or future trends. It can extract underlying patterns in time series data hidden by noise. Regression analysis is commonly used to analyze trends between a continuous independent variable, like weekly reading hours, and a continuous dependent variable, like reading achievement scores.
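Both measures mentioned above — the bivariate correlation and the regression trend line — reduce to a few sums. The reading-hours data below is hypothetical, invented to mirror the example in the summary:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def trend_line(xs, ys):
    """Least-squares slope and intercept for a simple linear trend."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical data: weekly reading hours vs. achievement score
hours = [1, 2, 3, 4, 5, 6]
scores = [52, 55, 61, 64, 70, 73]
r = pearson_r(hours, scores)
slope, intercept = trend_line(hours, scores)
print(r, slope, intercept)
```

The slope gives the trend (points gained per extra weekly hour in this toy data), while r close to 1 indicates how tightly the points hug that line.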
This document provides an overview of structural equation modeling (SEM) using AMOS. It defines key SEM concepts like latent variables, observed variables, path analysis, and model identification. It also explains how to specify and estimate a SEM model in AMOS, including how to draw path diagrams, name variables, set regression weights, and view output. Model fit is discussed along with potential issues like sample size. Confirmatory factor analysis and other SEM models like path analysis and latent growth models are also introduced.
Factor analysis in marketing research aims to represent a large number of variables or questions with a reduced set of underlying variables, called factors. Factor analysis is most useful for simplifying complex data sets with many variables.
The document discusses simulation as a technique for modeling real-world systems with uncertain inputs. It defines simulation as using models to represent systems over time to understand their behavior. The key aspects covered include:
- Components of a simulation model including inputs, calculations, and outputs
- Types of simulation like time-dependent vs time-independent and corporate/financial simulations
- Major applications in queuing systems and analyzing waiting times
- Steps of the simulation process from identifying the problem to evaluating results
- Components and structures of queuing systems like arrivals, queues, service, and departure.
The technique used to determine how independent variable values will impact a particular dependent variable under a given set of assumptions is known as sensitivity analysis. Its use depends on one or more input variables varying within specific boundaries, such as the effect that changes in interest rates will have on a bond's price.
Machine learning is a subfield of artificial intelligence concerned with algorithms that allow computers to improve performance over time based on data. There are three main types of machine learning: supervised learning which uses labeled training data to predict outputs for new inputs, unsupervised learning which looks for patterns in unlabeled data, and reinforcement learning which models reinforcement situations to increase the probability of favorable responses. Computational learning theory analyzes machine learning algorithms and includes approaches like probably approximately correct learning, Vapnik-Chervonenkis theory, Bayesian inference, and online machine learning. Pattern recognition takes raw data and categorizes it into patterns using algorithms like neural networks, representing a preprocessing step for supervised learning applications like computer-aided diagnosis.
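The supervised-learning idea above — predict an output for a new input from labeled training data — can be shown with a one-nearest-neighbour classifier, among the simplest pattern-recognition algorithms. The 2-D points and labels are made up for illustration:

```python
def nearest_neighbor(train, query):
    """1-nearest-neighbour classifier: return the label of the closest
    training point. Training pairs are ((features), label)."""
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    _, label = min(train, key=lambda item: dist2(item[0], query))
    return label

# Hypothetical labeled training data
train = [((1.0, 1.0), "small"), ((1.2, 0.8), "small"),
         ((5.0, 5.0), "large"), ((4.8, 5.3), "large")]
print(nearest_neighbor(train, (0.9, 1.1)))   # query near the "small" cluster
```

The classifier "improves with data" in the sense the paragraph describes: adding more labeled points refines the decision boundary without changing any code.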
The document discusses different types of mathematical models, including deterministic and probabilistic models. It provides examples of each. It also discusses building, verifying, and refining mathematical models. Additionally, it covers optimization models, their components including objective functions and constraints. Finally, it discusses specific types of optimization models like linear programming, network flow programming, and integer programming.
Simulation and Modelling Reading Notes
This document discusses simulation and modeling. It defines simulation as imitating real-life situations using computer models. Models represent systems using mathematical relationships. Simulation allows experimenting with models to understand system behavior under different conditions without changing the real system. The document outlines the modeling and simulation process and provides examples of applications in areas like business planning, drug development, and traffic analysis.
Simulation involves developing a model of a real-world system over time to analyze its behavior and performance. The key aspects covered in this document include defining simulation as modeling the operation of a system over time through artificial history generation and observation. Simulation models can be used as analysis and design tools to predict the effects of changes to a system before actual implementation. Discrete event simulation is discussed as a common technique that models systems with state changes occurring at discrete points in time. The document also outlines the steps in a typical simulation study including problem formulation, model conceptualization, experimentation and analysis.
System Simulation and Modelling with types and Event Scheduling
System simulation and modelling involves creating models of real-world systems and using those models to simulate and analyze the performance of existing or proposed systems. A system consists of interrelated components that work together towards a common goal. Simulation is the process of using a model to study the performance of a system over time or space. Modelling involves creating a model that represents a system, while simulation operates that model. Simulation can be used across various domains like healthcare, engineering, and military applications. It provides advantages like testing changes without impacting real systems and identifying constraints.
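Event scheduling, named in the title above, revolves around a future-event list: pending events are held in a priority queue ordered by time, and the simulation clock always jumps to the earliest one. A minimal sketch with invented event times:

```python
import heapq

# Future-event list: (time, event_type) tuples in a min-heap.
# The event types and times are illustrative.
fel = []
heapq.heappush(fel, (0.0, "arrival"))
heapq.heappush(fel, (2.5, "departure"))
heapq.heappush(fel, (1.0, "arrival"))

clock = 0.0
log = []
while fel:
    clock, event = heapq.heappop(fel)   # advance the clock to the earliest event
    log.append((clock, event))          # a real model would update state here

print(log)   # events come off the list in time order
```

In a full simulator, handling one event typically schedules future ones (an arrival schedules the next arrival and, if the server is free, a departure), so the list refills as it drains.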
This presentation covers the definition of operations research, its models, scope, phases, advantages, limitations, tools and techniques, and the characteristics of operations research.
The principles of simulation system design
The document discusses the principles of simulation system design and modeling. It outlines the steps to take which include clearly defining the problem scope, collecting accurate data, developing a systematic modeling approach, verifying and validating the model, using the model to answer questions, and communicating the results. It also describes the differences between conceptual models which provide a high-level representation for communication, and abstract models which use formal representations for analysis and simulation. Popular simulation systems and languages are also mentioned.
Systems can be classified in three ways: by complexity, interconnectivity of components, and nature of components. Physical systems have quantifiable variables while conceptual systems do not. Esoteric systems cannot be measured. Systems are independent if components do not affect each other, cascaded if effects are unilateral, or coupled if effects are mutual. Components can be static or dynamic, linear or nonlinear, deterministic or stochastic. There are 12 steps to simulation studies including problem formulation, model building, validation, experimentation, and reporting.
System modeling and simulation involves creating simplified representations of real-world systems to understand and evaluate their behavior over time. A system is composed of interconnected parts designed to achieve specific objectives. A model abstracts and simplifies a system for analysis. Simulation executes a model over time to observe how a system operates. It allows experimenting with systems that may be too expensive, dangerous or complex to study directly. Simulation has many uses including analyzing systems before implementation, optimizing designs, training, and evaluating "what-if" scenarios. Key areas where simulation is applied include manufacturing, business, healthcare, transportation and the military.
Computer simulations and models use mathematical representations to imitate and gain insight into real-world systems. Good models rely on feedback loops between inputs, processes, and outputs. Creating accurate simulations involves gathering data, developing algorithms to generate outputs from inputs, validating results, and addressing complexity and assumptions. Traffic and demographic models help analyze transportation networks and population trends over time. Both have benefits like testing scenarios safely but also challenges regarding data accuracy, access, and reliability over long periods.
Md simulation and stochastic simulationAbdulAhad358
Stochastic simulation involves modeling systems with random variables. It generates random values for insertion into models to understand probable outcomes. Molecular dynamic simulation computationally simulates atom and molecule movements over time based on forces. It provides time-dependent behavior analysis of biological molecules to study structure, dynamics, and thermodynamics without harming environments. Both methods help understand complex systems through numerous replications under varying scenarios.
Modeling and simulation is the use of models as a basis for simulations to develop data utilized for managerial or technical decision making. In the computer application of modeling and simulation a computer is used to build a mathematical model which contains key parameters of the physical model.
This document discusses modeling and simulation. It defines a model as a representation of an object, system, or idea that is different from the actual entity. Models are used to test systems without creating real versions, predict future behavior, train users safely, and investigate systems in detail. The document outlines different types of modeling including physics-based, finite element, data-based, multi-scale, mathematical, and hybrid modeling. It also discusses conceptual modeling and creating block diagrams to represent systems as subsystems and connections. Criteria for separating systems into subsystems include anatomy, function, and measurability of inputs and outputs.
The document discusses modelling and evaluation in machine learning. It defines what models are and how they are selected and trained for predictive and descriptive tasks. Specifically, it covers:
1) Models represent raw data in meaningful patterns and are selected based on the problem and data type, like regression for continuous numeric prediction.
2) Models are trained by assigning parameters to optimize an objective function and evaluate quality. Cross-validation is used to evaluate models.
3) Predictive models predict target values like classification to categorize data or regression for continuous targets. Descriptive models find patterns without targets for tasks like clustering.
4) Model performance can be affected by underfitting if too simple or overfitting if too complex,
This document provides an overview of simulation and modeling. It discusses key concepts such as systems, states, activities, and classification of systems. It also covers the system methodology process including planning, modeling, validation, and application. Examples are provided on simulating a coin toss and daily demand for a grocery store. Advantages and disadvantages of simulation are listed. The document appears to be from a textbook on simulation and modeling and provides foundational information on the topic.
This document discusses simulation as a research method in social science. It provides examples of different types of simulation models used in research:
- System dynamics models examine complex causality and feedback in systems over time.
- NK fitness landscape models study how modular systems adapt to fitness landscapes.
- Genetic algorithms model evolutionary adaptation of populations to optimal forms.
- Cellular automata emerge macro patterns from micro interactions of agents.
- Stochastic processes incorporate probabilistic distributions into systems.
The document outlines how these simulations can help answer research questions and provide insights when direct experiments are impossible or unethical.
1. The document introduces statistics and probability concepts relevant to engineering problems including collecting and analyzing data.
2. Key methods of collecting engineering data are retrospective studies, observational studies, and designed experiments, with advantages and disadvantages of each.
3. Statistical concepts such as populations, samples, variables, and probability are defined and related to engineering applications.
Simulation of complex systems: the case of crowds (Phd course - lesson 1/7)Giuseppe Vizzari
First lesson and introduction of the PhD course on "Computational approaches to Physical and Virtual Crowd Phenomena" - titled "Simulation of complex systems: the case of crowds"
This document provides an overview of Chapter 5 from the book "Management Science: Decision Making Through System Thinking" which discusses system models and diagrams. The chapter covers system models, approaches for describing relevant systems, essential properties of good models, the art of modeling, causal loop diagrams, influence diagrams, and other system diagrams. It emphasizes that system models should be simple, complete, easy to manipulate and communicate, and adaptive in order to gain decision makers' confidence in the model.
1. IoT Domain Analyst
Dr. Arvind Kumar | School of Electronics Engineering, VIT Vellore |
https://sites.google.com/view/arvindk
2. M.3: Simulation Scenarios
Models to simulate real-world scenarios, Application of the models,
stages of data lifecycle, reuse existing IoT solutions, reusability plan.
3. Model
• It is an abstraction from reality used to help understand the object or system being modeled.
• People use modeling all the time to make decisions in their everyday lives, although they
usually don't do so in a formal way.
Here are some common things that are models:
1. Maps are models of a portion of the earth’s surface.
2. Most computer games are models of real or imaginary worlds programmed in a computer.
3. Many toys are models of real objects, scaled down or changed in their operation so that
they are not dangerous or messy: toy trucks, guns, swords, dolls, dishes, stoves.
4. Model
• People naturally use their experiences to create mental models of things they encounter,
in ways that help them learn and survive.
• Models that are run on a computer require the translation of a mental model into a set of
rules and structures that can be represented in mathematical terms using a programming or
modeling language.
5. Types of Models
1. Physical Models
• Physical models are scale representations of the physical entities they represent.
• They are used primarily in engineering of large-scale projects to examine a limited set of
behavioral characteristics in the system.
2. Mathematical Models
• Mathematical models use mathematical equations to represent the key relationships among
system components.
• The equations can be derived in a number of ways. Many of them come from extensive
scientific studies that have formulated a mathematical relationship and then tested it against
real data.
• Mathematical models of large-scale systems often use a combination of approaches --
inserting tested equations where the relationships are well known and inserting statistical
relationships where there is less certainty.
6. Types of Models
• Such models can also use probabilistic relationships for events that are random or
exhibit some type of variable pattern.
• For example,
• Models of weather analyze the long-term weather records for the area under
consideration and calculate the frequency of different weather incidents.
• These are represented as their probability of occurrence, assuming that the past is
a strong indication of future events.
7. Types of Models
3. Simulation Models
• Simulation models are a special subset of mathematical or physical models that
allow the user to ask "what if" questions about the system.
• Changes are made in the physical conditions or their mathematical representation
and the model is run many times to "simulate" the impacts of the changes in the
conditions.
• The model results are then compared to gain insight into the behavior of the
system.
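The run-many-times idea can be sketched in a few lines of Python. The model below is a toy stand-in (the demand distribution and the outcome criterion are invented for illustration), but it shows the pattern: fix a seed, run the model repeatedly, change one condition, and compare the averaged results.

```python
import random

def simulate(arrival_rate, runs=10_000, seed=42):
    """Run the model many times and average an outcome criterion.

    A deliberately tiny stand-in for a real system model: each run
    draws a random demand and feeds it into an invented criterion.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        demand = rng.expovariate(1.0 / arrival_rate)  # random input
        total += demand ** 2 / (demand + 1.0)         # outcome of one run
    return total / runs

# "What if" comparison: change one condition, re-run, compare averages.
baseline = simulate(arrival_rate=5.0)
scenario = simulate(arrival_rate=8.0)
```

Sharing the seed between the two runs isolates the effect of the changed condition from sampling noise, which is the point of re-running the model under controlled changes.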
8. Modeling Terminology
• Accuracy –
• The closeness of a measured or modeled/computed value to its “true” value. The “true”
value is the value it would have if we had perfect information.
• Algorithm –
• A set of rules for solving some problem. On a computer, an algorithm is a set of rules in
computer code that solve a problem.
• Calibration –
• The process of adjusting model parameters within physically defensible ranges until the
resulting predictions give the best possible fit to the observed data.
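As a sketch of calibration, the snippet below fits a single hypothetical parameter `k` by grid search over a defensible range, choosing the value that minimizes the squared error against observations (the data points are made up for illustration).

```python
def model(x, k):
    """A one-parameter model: prediction = k * x (hypothetical)."""
    return k * x

# Made-up observations: pairs of (input, measured output).
observed = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]

def sse(k):
    """Sum of squared errors between predictions and observations."""
    return sum((model(x, k) - y) ** 2 for x, y in observed)

# Calibrate: search k over a physically defensible range [0.5, 3.0].
candidates = [0.5 + i * 0.01 for i in range(251)]
best_k = min(candidates, key=sse)  # lands near the least-squares value
```

In practice an optimizer would replace the grid search, but the idea is the same: adjust the parameter only within its defensible range until the fit to observed data is as good as possible.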
9. Modeling Terminology
• Conceptual Model –
• A hypothesis regarding the important factors that govern the behavior of an
object or process of interest.
• This can be an interpretation or working description of the characteristics and
dynamics of a physical system.
• Deterministic Model –
• A model that provides a single solution for the variables being modeled.
• Because this type of model does not explicitly simulate the effects of data uncertainty
or variability, changes in model outputs are solely due to changes in model
components.
10. Modeling Terminology
• Empirical Model –
• It is one where the structure is determined by the observed statistical relationship
among experimental data.
• These models can be used to develop relationships that are useful for forecasting and
describing trends in behavior, but they are not necessarily mechanistically relevant; that is,
they don't explain the real causes and mechanisms behind the relationships.
• Parameters –
• Terms in the model that are fixed during a model run or simulation but can be changed
in different runs as a method for conducting sensitivity analysis or to achieve calibration
goals.
11. Modeling Terminology
• Sensitivity –
• The degree to which the model outputs are affected by changes in selected input parameters.
• Statistical Models –
• Models obtained by fitting observational data to a mathematical function.
• Stochastic Model –
• A model that includes variability in model parameters.
• This variability is a function of:
1. changing environmental conditions,
2. spatial and temporal aggregation within the model framework,
3. random variability.
• The solutions obtained by the model or output is therefore a function of model components and
random variability.
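The contrast can be made concrete with a toy population model (the growth numbers are arbitrary): the deterministic version returns the same solution on every run, while the stochastic version perturbs the rate with random variability, so two runs generally differ.

```python
import random

def deterministic_model(rate, years):
    """Deterministic: a single solution; reruns always agree."""
    pop = 100.0
    for _ in range(years):
        pop *= 1.0 + rate
    return pop

def stochastic_model(rate, years, rng):
    """Stochastic: random variability is added to the rate each year,
    so the output is a function of components AND random variability."""
    pop = 100.0
    for _ in range(years):
        pop *= 1.0 + rate + rng.gauss(0.0, 0.01)
    return pop

run1 = stochastic_model(0.02, 10, random.Random(1))
run2 = stochastic_model(0.02, 10, random.Random(2))
# run1 and run2 differ; deterministic_model(0.02, 10) never does.
```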
12. Modeling Terminology
• Variable –
• A measured or estimated quantity which describes an object or can be observed in a system and
which is subject to change.
• Validation –
• Answers the questions
• Is the science valid and does the model use current methods and techniques?
• Is the numerical model adequate to convey the science principles at the level of the question
being asked?
• Is the model arriving at an acceptably accurate representation of the phenomenon being
modeled?
• Verification –
• Does the code for the model run correctly and provide a mathematically correct
answer?
• Do the algorithms being used accurately represent the mathematical function on the computer?
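A minimal illustration of verification, as opposed to validation: check that the code reproduces an answer computed by hand before asking whether the model matches reality. The compound-growth function here is just a hypothetical model under test.

```python
def compound(pop, rate, years):
    """Hypothetical model code under test: compound growth."""
    for _ in range(years):
        pop *= 1.0 + rate
    return pop

# Verification: does the code reproduce a hand-computed answer?
# 100 * 1.1 * 1.1 = 121, so the code is mathematically correct here.
assert abs(compound(100.0, 0.10, 2) - 121.0) < 1e-9
```

Validation would go further and ask whether the model's predictions match observed data, which no amount of code checking alone can answer.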
13. First Modeling Example
➢Model the time it takes to go through traffic from your house to a destination like
work.
• Let's say you need to decide the best route to take to work.
• To make this decision you will need to formulate at least one objective for your
trip.
• Here are some of the possible objectives.
• Minimize the amount of time it takes to get there
• Avoid traffic congestion.
• Find a route that excludes freeways.
• Plot a path between your house and work to make sure you travel by the
same spot every day.
14. First Modeling Example
• Assuming that we focus just on the first objective: Minimize the amount of time it takes
to get there
• We need to decide what will affect that objective.
• To do that, we must create a conceptual model of the system.
• List all of the variables that impact our travel time from home to work and what we believe
are the cause and effect relationships across all of those variables.
• For any model we are creating or studying, our ideas on the variables and cause and effect
relationships come from published information, analysis of data from a real system, and our
own knowledge of the system.
• The phenomena we are modeling may also be constrained by physical laws or prevailing
theories of their operation so our conceptual model should reflect those limitations.
15. First Modeling Example
• For our traffic example, we know that we need to traverse the street system to get from
one place to another and that we need to observe traffic laws governing speed, one-way
streets, and traffic control devices.
• The first part of the class exercise is to define as many of the conditions as possible that
will impact the travel time to work along with the cause and effect relationships among the
variables.
• We need to estimate the direction of the relationship, the form of the relationship, and,
if possible, a quantitative representation of that relationship.
16. First Modeling Example
• For example,
• we know that bad weather will slow traffic down.
• The worse the weather, the slower the traffic.
• We can hypothesize that the impact on traffic is non-linear.
• We may not have the data to exactly quantify the relationship but we could start with a
simple classification of weather events and an estimate of their impacts on the flow of
traffic.
• Here are some examples of weather conditions - can we fill in an estimate of the
impacts on traffic flow?
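The slide leaves the estimates open; purely as a hypothetical starting point, a simple classification might attach a slowdown multiplier to each weather category (the numbers below are guesses, not data):

```python
# Illustrative only: the slowdown multipliers are guesses, not data.
WEATHER_SLOWDOWN = {
    "clear": 1.00,  # baseline flow
    "rain":  1.15,  # mildly slower
    "snow":  1.50,  # markedly slower
    "ice":   2.00,  # severe; a deliberately non-linear jump
}

def travel_time(base_minutes, weather):
    """Scale a baseline travel time by the weather category's multiplier."""
    return base_minutes * WEATHER_SLOWDOWN[weather]
```

The jump from snow to ice reflects the hypothesized non-linearity: each step up in severity costs more than the previous one.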
17. First Modeling Example
• As we simplify the model, we need to decide which phenomena will be represented as
• variables (items whose values will change based on the relationships represented
in the model)
• parameters (items that are assigned a reasonable constant value to represent a finite set of
conditions).
• For example, every stop sign you stop at may take a different amount of time depending upon
how many other cars are approaching the same intersection.
• However, you may choose to create a parameter that uses an average amount of stopping time
to represent the range of conditions rather than have to gather data or find some other way of
estimating the number of cars approaching each intersection during your trip.
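The variable/parameter split might be sketched like this: trip distance, speed, and intersection counts are variables, while the average delays are parameters held constant within a run (all numbers are hypothetical placeholders):

```python
# All numbers are hypothetical placeholders, not measured values.
AVG_STOP_SIGN_DELAY = 0.5  # parameter: average minutes lost per stop sign
AVG_RED_LIGHT_DELAY = 1.0  # parameter: average minutes lost per red light

def trip_minutes(distance_km, speed_kmh, stop_signs, red_lights):
    """Variables (the arguments) change run to run; the parameters
    above stay fixed within a run but can be changed between runs."""
    driving = distance_km / speed_kmh * 60.0
    return (driving
            + stop_signs * AVG_STOP_SIGN_DELAY
            + red_lights * AVG_RED_LIGHT_DELAY)
```

Changing a parameter between runs (say, a larger average stop delay in bad weather) is exactly the sensitivity-analysis lever described earlier.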
19. Integrity in the Data LifeCycle
02-03-2022 https://sites.google.com/view/arvindk 19
20. The 5 Stages of Data LifeCycle Management
➢Data LifeCycle Management is a process that
helps organisations to manage the flow of
data throughout its lifecycle – from initial
creation through to destruction.
➢While there are many interpretations as to
the various phases of a typical data lifecycle,
they can be summarised as follows:
21. The five stages and their main activities:
• Creation: Manual Data Entry, External Acquisition, Capture from Devices
• Storage: Security, Backup & Recovery
• Usage: Data viewing, processing, modification and saving; Data Sharing
• Archival: Data Archived and Protected; Available for use
• Destruction: Purging
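The five stages form a simple linear progression, which can be sketched as a minimal state machine (stage names taken from the slides; the code is only illustrative):

```python
# Stage names taken from the slides; transitions are strictly linear here.
STAGES = ["Creation", "Storage", "Usage", "Archival", "Destruction"]

def next_stage(stage):
    """Return the stage that follows, or None once data is destroyed."""
    i = STAGES.index(stage)
    return STAGES[i + 1] if i + 1 < len(STAGES) else None
```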
22. 1. Data Creation
The first phase of the data lifecycle is the creation/capture of data. This
data can be in many forms e.g. PDF, image, Word document, SQL
database data. Data is typically created by an organisation in one of 3
ways:
▪ Data Acquisition: acquiring already existing data which has been
produced outside the organization.
▪ Data Entry: manual entry of new data by personnel within the
organization.
▪ Data Capture: capture of data generated by devices used in various
processes in the organization.
23. 2. Storage
➢Once data has been created within the organisation, it needs to be
stored and protected, with the appropriate level of security applied.
➢A robust backup and recovery process should also be implemented to
ensure retention of data during the lifecycle.
24. 3. Usage
➢During the usage phase of the data lifecycle, data is used to support
activities in the organisation.
➢Data can be viewed, processed, modified and saved.
➢An audit trail should be maintained for all critical data to ensure that
all modifications to data are fully traceable.
➢Data may also be made available to share with others outside the
organization.
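An audit trail for the usage phase can be sketched as a log entry written on every modification. The record fields and user names here are hypothetical:

```python
import datetime

# Illustrative audit trail: every modification to a record is logged
# with who changed it, when, and what changed.
audit_log = []

def modify(record: dict, field: str, new_value, user: str) -> None:
    """Apply a change and append a traceable entry to the audit trail."""
    audit_log.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "field": field,
        "old": record.get(field),
        "new": new_value,
    })
    record[field] = new_value

invoice = {"amount": 100}
modify(invoice, "amount", 120, user="alice")
```

With the old and new values stored together, every change to critical data is fully traceable back to a user and a timestamp.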
25. 4. Archival
➢Data Archival is the copying of data to an environment where it is
stored in case it is needed again in an active production environment,
and the removal of this data from all active production environments.
➢A data archive is simply a place where data is stored, but where no
maintenance or general usage occurs.
➢If necessary, the data can be restored to an environment where it can
be used.
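The copy-then-remove semantics of archival, and the restore path back into production, can be sketched as follows (the record keys and stores are illustrative):

```python
# Illustrative archival: copy a record to the archive, then remove it
# from the active production store; restore brings it back if needed.
production = {"order_42": {"status": "shipped"}}
archive = {}

def archive_record(key: str) -> None:
    archive[key] = production.pop(key)   # store in archive, remove from production

def restore_record(key: str) -> None:
    production[key] = archive.pop(key)   # bring archived data back into use

archive_record("order_42")
```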
26. 5. Destruction
➢The volume of archived data inevitably grows, and while you may want to save all
your data forever, that’s not feasible.
➢Storage cost and compliance issues exert pressure to destroy data you no longer
need.
➢Data destruction or purging is the removal of every copy of a data item from an
organisation.
➢It is typically done from an archive storage location.
➢The challenge of this phase of the lifecycle is to ensure that the data has been
properly destroyed.
➢It is important to ensure before destroying data that the data items have
exceeded their required regulatory retention period.
➢Having a clearly defined and documented data lifecycle management process is
key to ensuring Data Governance can be carried out effectively within your
organisation.
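The retention check before purging can be sketched like this; the seven-year period is an assumed regulatory value, chosen only for illustration:

```python
import datetime

# Illustrative purge: a record may only be destroyed once it has
# exceeded its retention period (7 years here is an assumption).
RETENTION = datetime.timedelta(days=7 * 365)

def purge(archive: dict, key: str, today: datetime.date) -> bool:
    """Remove `key` from the archive only if its retention period has passed."""
    created = archive[key]["created"]
    if today - created < RETENTION:
        return False            # still under retention: refuse to destroy
    del archive[key]
    return True

archive = {
    "old": {"created": datetime.date(2010, 1, 1)},
    "new": {"created": datetime.date(2021, 6, 1)},
}
today = datetime.date(2022, 3, 2)
purge(archive, "old", today)    # destroyed: past retention
purge(archive, "new", today)    # refused: still under retention
```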
27. Reduce, Reuse, Recycle – IoT Solutions
➢With new consumer electronics emerging on the market, millions of tons of
electronic waste are produced worldwide each year.
➢Everybody enjoys new technology, but how many of us act environmentally
responsibly when we buy our newest mobile or smart device?
➢There are steps we can all take to be more responsible towards the environment
when we design our IoT projects, such as applying the Three R's principle.
➢Reduce, Reuse, Recycle (RRR) is a concept that applies to many modern-day
areas, such as building & architecture, food production, and technology, in the
struggle to be more socially responsible and to address the huge amount of
waste we can see growing around us.
28. Reduce, Reuse, Recycle – IoT Solutions
➢So how should we rethink our IoT projects to comply with the Three R's principle?
➢Reduce: When designing the prototype of a new project, we should lower the
number of new items to buy.
For example,
➢ if you need a temperature sensor for your project, before ordering it online, ask
your techie friends if they have a spare one.
➢Reduce the energy our devices consume.
For example,
➢ by reducing the clock frequency of the processor or lowering the sample rate of
sensors.
➢Put your Arduino in sleep mode for its idle periods and it can run for years on
battery.
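The pay-off of sleeping between samples can be shown with back-of-the-envelope arithmetic. All current and capacity figures below are assumptions picked for illustration, not measurements of any particular board:

```python
# Sketch of the "Reduce" idea: a device that sleeps between samples
# draws far less average current. All figures are assumed values.
ACTIVE_mA = 20.0   # current while sampling/transmitting (assumed)
SLEEP_mA = 0.01    # deep-sleep current (assumed)

def avg_current_mA(samples_per_hour: int, active_secs_per_sample: float) -> float:
    """Average current when the device sleeps between samples."""
    active_s = samples_per_hour * active_secs_per_sample
    sleep_s = 3600 - active_s
    return (ACTIVE_mA * active_s + SLEEP_mA * sleep_s) / 3600

def battery_life_hours(capacity_mAh: float, samples_per_hour: int) -> float:
    return capacity_mAh / avg_current_mA(samples_per_hour, active_secs_per_sample=1.0)

# Sampling 60x/hour vs 6x/hour on an assumed 2000 mAh battery:
busy = battery_life_hours(2000, 60)
lazy = battery_life_hours(2000, 6)
```

Lowering the sample rate tenfold stretches the same battery many times further, which is why duty-cycling is the first energy optimisation to try.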
29. Reduce, Reuse, Recycle - an environmental
approach to your IoT
Reuse:
➢The concept is that we should reuse existing technology as much as
possible before buying a new product or gadget.
➢Some of the GSM modules found in out-of-date mobile phones offer the
same functionality you will find in a new GSM board.
➢Most of these modules work with AT commands via a common serial
interface and so do many old phones.
➢You could also consider using the camera from a refurbished smartphone.
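Driving a salvaged GSM module boils down to writing short command strings over a serial port. The sketch below only builds the standard text-mode SMS commands (per 3GPP TS 27.005); actually sending them would use a serial library such as pyserial with the module wired to an assumed port like /dev/ttyUSB0, which is outside this sketch:

```python
# Sketch: the AT command sequence for sending one SMS in text mode
# on a typical reused GSM module (commands per 3GPP TS 27.005).

def sms_command_sequence(number: str, text: str) -> list:
    """Build the commands to send one SMS; the phone number is made up."""
    return [
        "AT",                       # handshake: module should reply OK
        "AT+CMGF=1",                # select SMS text mode
        f'AT+CMGS="{number}"',      # start a message to this number
        text + "\x1a",              # body, terminated by Ctrl-Z
    ]

cmds = sms_command_sequence("+15550100", "Hello from a reused module")
```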
30. Reduce, Reuse, Recycle - an environmental
approach to your IoT
Reuse:
➢In addition, what about the flow sensors from a damaged coffee
machine or the water level sensor from a damaged washing machine?
➢If you need a new piece of hardware, just look around - you will find
it and you can boost your creativity.
31. Reduce, Reuse, Recycle - an environmental
approach to your IoT
Recycle:
➢We are most familiar with this principle in our consumerist life.
➢However, recycling is not just separating plastic from paper; it can also
be a design principle and a source of creativity.
➢You can donate the technology you no longer use to be reused
or refurbished, or you can hack it and reuse it yourself.
33. Reuse: Solutions
➢In most engineering disciplines, systems are designed by composing
existing components that have been used in other systems.
➢Reusing existing components enables the development of new
systems in less time and with less effort.
➢We need to adopt a design process that is based on systematic reuse
(a plan).
34. Reuse types
System reuse
➢Complete systems, which may include several application programs.
Application system reuse
➢The whole of an application system may be reused, either by
incorporating it without change into other systems (Commercial Off-The-
Shelf - COTS reuse) or by developing application families.
Component reuse
➢Components of an application, such as sub-systems or single objects, may be
reused.
Object and function reuse
➢Software components that implement a single well defined object or
function may be reused.
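The smallest of these granularities, object and function reuse, can be illustrated with one well-defined component shared by two "applications". The moving-average filter and the readings below are hypothetical:

```python
# Illustration of function reuse: a single well-defined component
# (a moving-average filter) reused by two different applications
# instead of being rewritten in each.

def moving_average(values: list, window: int) -> list:
    """Reusable component: smooth a series with a sliding window."""
    out = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# Application 1: smoothing temperature readings
temps = moving_average([20.0, 22.0, 24.0, 26.0], window=2)

# Application 2: the same component reused for power readings
power = moving_average([5.0, 7.0, 9.0], window=3)
```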
35. Reuse Benefits
➢Accelerated development
➢Effective use of specialists
➢Increased dependability
➢Lower development costs
➢Reduced process risk
➢Standards compliance
40. SDLC - Overview
➢ Software Development Life Cycle (SDLC) is a process used by the
software industry to design, develop and test high-quality software.
➢ The SDLC aims to produce high-quality software that meets or
exceeds customer expectations and reaches completion within time and
cost estimates.
▪ It is also called the Software Development Process.
▪ SDLC is a framework defining the tasks performed at each step of the
software development process.
▪ ISO/IEC 12207 is an international standard for software life-cycle
processes.
▪ It aims to be the standard that defines all the tasks required for
developing and maintaining software.
41. What is SDLC?
➢SDLC is a process followed for a software project, within a software
organization.
➢It consists of a detailed plan describing how to develop, maintain,
replace and alter or enhance specific software.
➢The life cycle defines a methodology for improving the quality of
software and the overall development process.
43. Graphical representation of the various stages of a typical SDLC:
➢Planning
➢Defining
➢Designing
➢Building
➢Testing
➢Deployment
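The stages from the diagram form an ordered sequence; a minimal Python sketch captures the ordering. The wrap-around from Deployment back to Planning is an assumption reflecting the "cycle" in the name, not something the diagram states:

```python
# The SDLC stages from the diagram as an ordered sequence, with a
# helper returning the next stage (cycling back to Planning after
# Deployment is an illustrative assumption).
SDLC_STAGES = ["Planning", "Defining", "Designing", "Building", "Testing", "Deployment"]

def next_stage(stage: str) -> str:
    i = SDLC_STAGES.index(stage)
    return SDLC_STAGES[(i + 1) % len(SDLC_STAGES)]
```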