The document discusses cumulative distribution functions (CDFs) and probability density functions (PDFs) for continuous random variables. It provides definitions and properties of CDFs and PDFs. For CDFs, it describes how they give the probability that a random variable is less than or equal to a value. For PDFs, it explains how they describe the relative likelihood, or density, of a random variable near a particular value (for a continuous variable, the probability of any single exact value is zero). The document also gives examples of CDFs and PDFs for exponential and uniform random variables.
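By way of illustration, a minimal sketch of such CDF/PDF evaluations, assuming Python with SciPy; the rate parameter and interval below are arbitrary choices, not taken from the document:

```python
# Sketch: CDF and PDF of exponential and uniform random variables.
# lambda = 2 and the interval [0, 5] are assumed illustrative values.
from scipy import stats

lam = 2.0
expo = stats.expon(scale=1 / lam)      # SciPy parameterizes by scale = 1/lambda
unif = stats.uniform(loc=0, scale=5)   # uniform on [0, 5]

x = 1.0
print(expo.cdf(x))   # P(X <= 1) = 1 - exp(-lambda * x) ~ 0.8647
print(expo.pdf(x))   # density lambda * exp(-lambda * x) ~ 0.2707
print(unif.cdf(x))   # (x - 0) / 5 = 0.2
print(unif.pdf(x))   # 1 / 5 = 0.2 on [0, 5], 0 elsewhere
```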
A Geometric Note on a Type of Multiple Testing (07-24-2015), by Junfeng Liu
This document summarizes new perspectives on false discovery rate (FDR) control procedures for multiple testing. It examines FDR control using linear and quadratic rejection cut-off routes applied to ordered p-values. Key findings include: 1) the FDR is controlled at π0q regardless of where the Ha p-value profile crosses the no-rejection boundary, 2) specificity approaches limits as the Ha mean increases, 3) quadratic cuts control FDR better when Ha means are close to zero. Numerical simulations explore the impact of factors like population size, variation levels, and mean profiles on discovery rates and FDR.
This document provides an overview of probability distributions and related concepts. It defines key probability distributions like the binomial, beta, multinomial, and Dirichlet distributions. It also describes probability distribution equations like the cumulative distribution function and probability density function. Additionally, it outlines descriptive parameters for distributions like mean, variance, skewness and kurtosis. Finally, it briefly discusses probability theorems such as the law of large numbers, central limit theorem, and Bayes' theorem.
This document provides a concise probability cheatsheet compiled by William Chen and others. It covers key probability concepts like counting rules, sampling tables, definitions of probability, independence, unions and intersections, joint/marginal/conditional probabilities, Bayes' rule, random variables and their distributions, expected value, variance, indicators, moment generating functions, and independence of random variables. The cheatsheet is licensed under CC BY-NC-SA 4.0 and the last updated date is March 20, 2015.
This document discusses various methods for estimating normalizing constants that arise when evaluating integrals numerically. It begins by noting there are many computational methods for approximating normalizing constants across different communities. It then lists the topics that will be covered in the upcoming workshop, including discussions on estimating constants using Monte Carlo methods and Bayesian versus frequentist approaches. The document provides examples of estimating normalizing constants using Monte Carlo integration, reverse logistic regression, and Xiao-Li Meng's maximum likelihood estimation approach. It concludes by discussing some of the challenges in bringing a statistical framework to constant estimation problems.
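As a minimal illustration of the simplest such estimator, a plain importance-sampling sketch for a normalizing constant; the unnormalized density and the Gaussian proposal below are assumptions, not the document's examples:

```python
# Sketch: estimating Z = integral of q(x) dx by importance sampling,
# using Z = E_g[q(X)/g(X)] for X drawn from a proposal density g.
import numpy as np

rng = np.random.default_rng(0)

def q(x):
    return np.exp(-np.abs(x) ** 3)   # assumed unnormalized target density

n = 100_000
x = rng.standard_normal(n)                      # proposal: standard normal
g = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)      # proposal density values
z_hat = np.mean(q(x) / g)                       # Monte Carlo estimate of Z
print(z_hat)
```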
A Tau Approach for Solving Fractional Diffusion Equations using Legendre-Cheb..., by iosrjce
In this paper, a modified numerical algorithm for solving the fractional diffusion equation is proposed. It is based on the Tau idea, with shifted Legendre polynomials used in time and shifted Chebyshev polynomials used in space. The problem is reduced to the solution of a system of linear algebraic equations. From the computational point of view, the solution obtained by this approach is tested and the efficiency of the proposed method is confirmed.
"reflections on the probability space induced by moment conditions with impli...Christian Robert
This document discusses using moment conditions to perform Bayesian inference when the likelihood function is intractable or unknown. It outlines some approaches that have been proposed, including approximating the likelihood using empirical likelihood or pseudo-likelihoods. However, these approaches do not guarantee the same consistency as a true likelihood. Alternative approximate Bayesian methods are also discussed, such as Approximate Bayesian Computation, Integrated Nested Laplace Approximation, and variational Bayes. The empirical likelihood method constructs a likelihood from generalized moment conditions, but its use in Bayesian inference requires further analysis of consistency in each application.
This document outlines the key concepts that will be covered in Lecture 2 on Bayesian modeling. It introduces the likelihood function and how it can be used to determine the most likely parameter values given observed data. It provides examples of applying Bayesian modeling to proportions, normal distributions, linear regression with one predictor, and linear regression with multiple predictors. The lecture aims to give students a basic understanding of how Bayesian analysis works and prepare them for fitting linear mixed models.
The document discusses statistical models and exponential families. It states that for most of the course, data is assumed to be a random sample from a distribution F. Repeated observations increase the information about F, via the law of large numbers and the central limit theorem. Exponential families are a class of parametric distributions with convenient analytic properties, where the density can be written as a function of natural parameters in an exponential form. Examples of exponential families include the binomial and normal distributions.
This document provides an introduction to bootstrap methods and Markov chains. It discusses how bootstrap can be used to estimate properties of a statistic like mean or variance when the sample is small and assumptions of the central limit theorem may not apply. The basic bootstrap approach resamples the original sample with replacement to create new bootstrap samples and estimates the statistic for each. Markov chains are defined as stochastic processes where the next state only depends on the current state. An example of a 2-state Markov chain is provided along with notation for transition probabilities and computing unconditional probabilities. The document also discusses stationary distributions for Markov chains.
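A minimal sketch of such a 2-state Markov chain, with assumed transition probabilities (not the document's numbers), showing the unconditional distribution after n steps and the stationary distribution:

```python
# Sketch: a 2-state Markov chain with an assumed transition matrix P,
# where P[i, j] = probability of moving from state i to state j.
import numpy as np

P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# Unconditional distribution after 50 steps from an initial distribution pi0.
pi0 = np.array([1.0, 0.0])
print(pi0 @ np.linalg.matrix_power(P, 50))   # converges toward stationarity

# Stationary distribution: left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
stat = np.real(vecs[:, np.isclose(vals, 1)]).ravel()
print(stat / stat.sum())                      # ~ [0.8, 0.2] for this P
```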
The document discusses numerical methods for solving nonlinear equations, including root finding and systems of nonlinear equations. It covers the basics of nonlinear solvers like bisection, Newton's method, and fixed-point iteration. For one-dimensional root finding, it analyzes the convergence properties and order of convergence for these methods. It then extends the discussion to systems of nonlinear equations and shows how Newton's method can be applied by taking derivatives to form the Jacobian matrix.
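A minimal sketch of one-dimensional Newton's method, with an assumed test function (finding sqrt(2) as the root of x^2 - 2):

```python
# Sketch: Newton's method for one-dimensional root finding.
# The function, derivative, and starting point are illustrative assumptions.
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:   # quadratic convergence near a simple root
            break
    return x

print(newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0))   # ~ 1.41421356
```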
random forests for ABC model choice and parameter estimation, by Christian Robert
The document discusses Approximate Bayesian Computation (ABC). It begins by introducing ABC as a likelihood-free method for Bayesian inference when the likelihood function is unavailable or computationally intractable. ABC works by simulating data under different parameter values and accepting simulations that are close to the observed data based on a distance measure.
The document then discusses advances in ABC, including modifying the proposal distribution to increase efficiency, viewing it as a conditional density estimation problem, and including measurement error in the framework. It also discusses the consistency of ABC as the number of simulations increases and sample size grows large. Finally, it discusses applications of ABC to model selection by treating the model index as an additional parameter.
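A minimal sketch of basic ABC rejection sampling on an assumed normal-mean toy problem; the prior, summary statistic, and tolerance are all illustrative choices, not the document's:

```python
# Sketch: ABC rejection sampling for the mean of a normal distribution.
import numpy as np

rng = np.random.default_rng(1)
y_obs = rng.normal(3.0, 1.0, size=50)   # "observed" data (true mean 3)
s_obs = y_obs.mean()                     # summary statistic

accepted = []
for _ in range(50_000):
    theta = rng.uniform(-10, 10)                 # draw from an assumed prior
    y_sim = rng.normal(theta, 1.0, size=50)      # simulate data under theta
    if abs(y_sim.mean() - s_obs) < 0.1:          # keep if summaries are close
        accepted.append(theta)

print(np.mean(accepted))   # approximate posterior mean, near 3
```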
Statistics (1): estimation, Chapter 2: Empirical distribution and bootstrap, by Christian Robert
The document discusses the bootstrap method and its applications in statistical inference. It introduces the bootstrap as a technique for estimating properties of estimators like variance and distribution when the true sampling distribution is unknown. This is done by treating the observed sample as if it were the population and resampling with replacement to create new simulated samples. The bootstrap then approximates characteristics of the sampling distribution, allowing inferences like confidence intervals to be constructed.
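A minimal sketch of the basic bootstrap described here, estimating the standard error and a percentile interval for a sample median; the data and resample count are assumptions:

```python
# Sketch: bootstrap estimate of the standard error of the sample median.
import numpy as np

rng = np.random.default_rng(2)
sample = rng.exponential(scale=2.0, size=30)   # assumed observed sample

B = 2000
medians = np.empty(B)
for b in range(B):
    # Treat the sample as the population: resample with replacement.
    resample = rng.choice(sample, size=sample.size, replace=True)
    medians[b] = np.median(resample)

print(np.std(medians, ddof=1))              # bootstrap standard error
print(np.percentile(medians, [2.5, 97.5]))  # percentile confidence interval
```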
The document discusses methods for solving dynamic stochastic general equilibrium (DSGE) models. It outlines perturbation and projection methods for approximating the solution to DSGE models. Perturbation methods use Taylor series approximations around a steady state to derive linear approximations of the model. Projection methods find parametric functions that best satisfy the model equations. The document also provides an example of applying the implicit function theorem to derive a Taylor series approximation of a policy rule for a neoclassical growth model.
This document provides an overview of supervised learning and linear regression. It introduces supervised learning problems using an example of predicting house prices based on living area. Linear regression is discussed as an initial approach to model this relationship. The cost function is defined as the mean squared error between predictions and targets. Gradient descent and stochastic gradient descent are presented as algorithms to minimize this cost function and learn the parameters of the linear regression model.
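A minimal sketch of batch gradient descent on the mean-squared-error cost for a house-price example of this kind; the synthetic data and learning rate are assumptions:

```python
# Sketch: gradient descent for simple linear regression, minimizing MSE.
import numpy as np

rng = np.random.default_rng(3)
area = rng.uniform(50, 200, size=100)             # toy living areas
price = 1.5 * area + 20 + rng.normal(0, 5, 100)   # toy prices

w, b, lr = 0.0, 0.0, 1e-5
for _ in range(10_000):
    err = w * area + b - price
    w -= lr * 2 * np.mean(err * area)   # dMSE/dw
    b -= lr * 2 * np.mean(err)          # dMSE/db

# w approaches 1.5; the intercept converges more slowly without feature scaling.
print(w, b)
```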
This document discusses quantiles and quantile regression. It begins by defining quantiles for the standard normal distribution and shows how to calculate probabilities based on quantiles. It then discusses how to estimate quantiles from sample data and different methods for calculating empirical quantiles. The document introduces quantile regression as a way to model relationships between variables at different quantile levels. It explains how quantile regression is formulated as an optimization problem and compares it to ordinary least squares regression.
Multiple estimators for Monte Carlo approximations, by Christian Robert
This document discusses multiple estimators that can be used to approximate integrals using Monte Carlo simulations. It begins by introducing concepts like multiple importance sampling, Rao-Blackwellisation, and delayed acceptance that allow combining multiple estimators to improve accuracy. It then discusses approaches like mixtures as proposals, global adaptation, and nonparametric maximum likelihood estimation (NPMLE) that frame Monte Carlo estimation as a statistical estimation problem. The document notes various advantages of the statistical formulation, like the ability to directly estimate simulation error from the Fisher information. Overall, the document presents an overview of different techniques for combining Monte Carlo simulations to obtain more accurate integral approximations.
Lecture slides on Decision Theory. The contents in large part come from the following excellent textbook:
Rubinstein, A. (2012). Lecture Notes in Microeconomic Theory: The Economic Agent, 2nd ed. http://www.amazon.co.jp/dp/B0073X0J7Q/
On the vexing dilemma of hypothesis testing and the predicted demise of the B..., by Christian Robert
The document discusses hypothesis testing from both frequentist and Bayesian perspectives. It introduces the concept of statistical tests as functions that output accept or reject decisions for hypotheses. P-values are presented as a way to quantify uncertainty in these decisions. Bayes' original 1763 paper on Bayesian statistics is summarized, introducing the concept of the posterior distribution. Bayesian hypothesis testing is then discussed, including the optimal Bayes test and the use of Bayes factors to compare hypotheses without requiring prior probabilities on the hypotheses.
Statistics (1): estimation, Chapter 3: likelihood function and likelihood esti..., by Christian Robert
The document discusses likelihood functions and inference. It begins by defining the likelihood function as the function that gives the probability of observing a sample given a parameter value. The likelihood varies with the parameter, while the density function varies with the data. Maximum likelihood estimation chooses parameters that maximize the likelihood function. The score function is the gradient of the log-likelihood and has an expected value of zero at the true parameter value. The Fisher information matrix measures the curvature of the likelihood surface and provides information about the precision of parameter estimates. It relates to the concentration of likelihood functions around the true parameter value as sample size increases.
Approximate Bayesian model choice via random forests, by Christian Robert
The document describes approximate Bayesian computation (ABC) methods for model choice when likelihoods are intractable. ABC generates parameter-dataset pairs from the prior and retains those where the simulated and observed datasets are similar according to a distance measure on summary statistics. For model choice, ABC approximates posterior model probabilities by the proportion of simulations from each model that are retained. Machine learning techniques can also be used to infer the most likely model directly from the simulated summary statistics.
In this talk, we give an overview of results on numerical integration in Hermite spaces. These spaces contain functions defined on $\mathbb{R}^d$, and can be characterized by the decay of their Hermite coefficients. We consider the case of exponentially as well as polynomially decaying Hermite coefficients. For numerical integration, we either use Gauss-Hermite quadrature rules or algorithms based on quasi-Monte Carlo rules. We present upper and lower error bounds for these algorithms, and discuss their dependence on the dimension $d$. Furthermore, we comment on open problems for future research.
This document discusses random variables and their probability distributions. It defines different types of random variables such as real, complex, discrete, continuous, and mixed. It also defines key concepts such as sample space, cumulative distribution function (CDF), and probability density function (PDF) for both discrete and continuous random variables. Examples are provided to illustrate how to calculate the CDF and PDF for different random variables. Properties of CDFs and PDFs are also covered.
Lecture 2: Predicates, Quantifiers and Rules of Inference, by asimnawaz54
1) Predicates become propositions when variables are quantified by assigning values or using quantifiers. Quantifiers like ∀ and ∃ are used to make statements true or false for all or some values.
2) ∀ (universal quantifier) means "for all" and makes a statement true for all values of a variable. ∃ (existential quantifier) means "there exists" and makes a statement true if it is true for at least one value.
3) Predicates with unbound variables are neither true nor false. Binding variables by assigning values or using quantifiers turns predicates into propositions that can be evaluated as true or false.
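Over a finite domain, these quantifiers correspond directly to Python's built-ins: ∀ maps to all() and ∃ maps to any(). A small sketch with an assumed domain and predicate:

```python
# Sketch: evaluating quantified predicates over a finite domain.
domain = range(1, 11)
P = lambda x: x * x >= x        # assumed predicate P(x): "x squared is at least x"

print(all(P(x) for x in domain))    # for all x in domain: P(x) -> True
print(any(x > 9 for x in domain))   # there exists x in domain: x > 9 -> True (x = 10)
```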
Common Fixed Point Theorems Using Random Implicit Iterative Schemes, by inventy
This document summarizes research on common fixed point theorems using random implicit iterative schemes. It defines random Mann, Ishikawa, and SP iterative schemes. It also defines modified implicit random iterative schemes associated with families of random asymptotically nonexpansive operators. The paper proves the convergence of two random implicit iterative schemes to a random common fixed point. This generalizes previous results and provides new convergence theorems for random operators in Banach spaces.
This document discusses two continuous probability distributions: the uniform distribution and the exponential distribution. It provides the probability density functions and cumulative distribution functions for each distribution. Examples are also given to demonstrate calculating probabilities using each distribution.
Binomial, Geometric and Poisson Distributions in Excel, by Brent Heard
This document provides instructions for downloading and using Excel templates to calculate binomial, geometric, and Poisson distributions. It demonstrates how to access and set up the templates from a website. Examples are worked through for each distribution type to show how values can be input and the relevant probabilities calculated. The templates also automatically provide additional descriptive statistics for the distributions.
"reflections on the probability space induced by moment conditions with impli...Christian Robert
This document discusses using moment conditions to perform Bayesian inference when the likelihood function is intractable or unknown. It outlines some approaches that have been proposed, including approximating the likelihood using empirical likelihood or pseudo-likelihoods. However, these approaches do not guarantee the same consistency as a true likelihood. Alternative approximative Bayesian methods are also discussed, such as Approximate Bayesian Computation, Integrated Nested Laplace Approximation, and variational Bayes. The empirical likelihood method constructs a likelihood from generalized moment conditions, but its use in Bayesian inference requires further analysis of consistency in each application.
This document outlines the key concepts that will be covered in Lecture 2 on Bayesian modeling. It introduces the likelihood function and how it can be used to determine the most likely parameter values given observed data. It provides examples of applying Bayesian modeling to proportions, normal distributions, linear regression with one predictor, and linear regression with multiple predictors. The lecture aims to give students a basic understanding of how Bayesian analysis works and prepare them for fitting linear mixed models.
The document discusses statistical models and exponential families. It states that for most of the course, data is assumed to be a random sample from a distribution F. Repetition of observations via the law of large numbers and central limit theorem increases information about F. Exponential families are a class of parametric distributions with convenient analytic properties, where the density can be written as a function of natural parameters in an exponential form. Examples of exponential families include the binomial and normal distributions.
This document provides an introduction to bootstrap methods and Markov chains. It discusses how bootstrap can be used to estimate properties of a statistic like mean or variance when the sample is small and assumptions of the central limit theorem may not apply. The basic bootstrap approach resamples the original sample with replacement to create new bootstrap samples and estimates the statistic for each. Markov chains are defined as stochastic processes where the next state only depends on the current state. An example of a 2-state Markov chain is provided along with notation for transition probabilities and computing unconditional probabilities. The document also discusses stationary distributions for Markov chains.
The document discusses numerical methods for solving nonlinear equations, including root finding and systems of nonlinear equations. It covers the basics of nonlinear solvers like bisection, Newton's method, and fixed-point iteration. For one-dimensional root finding, it analyzes the convergence properties and order of convergence for these methods. It then extends the discussion to systems of nonlinear equations and shows how Newton's method can be applied by taking derivatives to form the Jacobian matrix.
random forests for ABC model choice and parameter estimationChristian Robert
The document discusses Approximate Bayesian Computation (ABC). It begins by introducing ABC as a likelihood-free method for Bayesian inference when the likelihood function is unavailable or computationally intractable. ABC works by simulating data under different parameter values and accepting simulations that are close to the observed data based on a distance measure.
The document then discusses advances in ABC, including modifying the proposal distribution to increase efficiency, viewing it as a conditional density estimation problem, and including measurement error in the framework. It also discusses the consistency of ABC as the number of simulations increases and sample size grows large. Finally, it discusses applications of ABC to model selection by treating the model index as an additional parameter.
Statistics (1): estimation, Chapter 2: Empirical distribution and bootstrapChristian Robert
The document discusses the bootstrap method and its applications in statistical inference. It introduces the bootstrap as a technique for estimating properties of estimators like variance and distribution when the true sampling distribution is unknown. This is done by treating the observed sample as if it were the population and resampling with replacement to create new simulated samples. The bootstrap then approximates characteristics of the sampling distribution, allowing inferences like confidence intervals to be constructed.
The document discusses methods for solving dynamic stochastic general equilibrium (DSGE) models. It outlines perturbation and projection methods for approximating the solution to DSGE models. Perturbation methods use Taylor series approximations around a steady state to derive linear approximations of the model. Projection methods find parametric functions that best satisfy the model equations. The document also provides an example of applying the implicit function theorem to derive a Taylor series approximation of a policy rule for a neoclassical growth model.
This document provides an overview of supervised learning and linear regression. It introduces supervised learning problems using an example of predicting house prices based on living area. Linear regression is discussed as an initial approach to model this relationship. The cost function is defined as the mean squared error between predictions and targets. Gradient descent and stochastic gradient descent are presented as algorithms to minimize this cost function and learn the parameters of the linear regression model.
This document discusses quantiles and quantile regression. It begins by defining quantiles for the standard normal distribution and shows how to calculate probabilities based on quantiles. It then discusses how to estimate quantiles from sample data and different methods for calculating empirical quantiles. The document introduces quantile regression as a way to model relationships between variables at different quantile levels. It explains how quantile regression is formulated as an optimization problem and compares it to ordinary least squares regression.
Multiple estimators for Monte Carlo approximationsChristian Robert
This document discusses multiple estimators that can be used to approximate integrals using Monte Carlo simulations. It begins by introducing concepts like multiple importance sampling, Rao-Blackwellisation, and delayed acceptance that allow combining multiple estimators to improve accuracy. It then discusses approaches like mixtures as proposals, global adaptation, and nonparametric maximum likelihood estimation (NPMLE) that frame Monte Carlo estimation as a statistical estimation problem. The document notes various advantages of the statistical formulation, like the ability to directly estimate simulation error from the Fisher information. Overall, the document presents an overview of different techniques for combining Monte Carlo simulations to obtain more accurate integral approximations.
Lecture slides on Decision Theory. The contents in large part come from the following excellent textbook.
Rubinstein, A. (2012). Lecture notes in microeconomic theory: the
economic agent, 2nd.
http://www.amazon.co.jp/dp/B0073X0J7Q/
On the vexing dilemma of hypothesis testing and the predicted demise of the B...Christian Robert
The document discusses hypothesis testing from both frequentist and Bayesian perspectives. It introduces the concept of statistical tests as functions that output accept or reject decisions for hypotheses. P-values are presented as a way to quantify uncertainty in these decisions. Bayes' original 1763 paper on Bayesian statistics is summarized, introducing the concept of the posterior distribution. Bayesian hypothesis testing is then discussed, including the optimal Bayes test and the use of Bayes factors to compare hypotheses without requiring prior probabilities on the hypotheses.
Statistics (1): estimation Chapter 3: likelihood function and likelihood esti...Christian Robert
The document discusses likelihood functions and inference. It begins by defining the likelihood function as the function that gives the probability of observing a sample given a parameter value. The likelihood varies with the parameter, while the density function varies with the data. Maximum likelihood estimation chooses parameters that maximize the likelihood function. The score function is the gradient of the log-likelihood and has an expected value of zero at the true parameter value. The Fisher information matrix measures the curvature of the likelihood surface and provides information about the precision of parameter estimates. It relates to the concentration of likelihood functions around the true parameter value as sample size increases.
Approximate Bayesian model choice via random forestsChristian Robert
The document describes approximate Bayesian computation (ABC) methods for model choice when likelihoods are intractable. ABC generates parameter-dataset pairs from the prior and retains those where the simulated and observed datasets are similar according to a distance measure on summary statistics. For model choice, ABC approximates posterior model probabilities by the proportion of simulations from each model that are retained. Machine learning techniques can also be used to infer the most likely model directly from the simulated summary statistics.
In this talk, we give an overview of results on numerical integration in Hermite spaces. These spaces contain functions defined on $\mathbb{R}^d$, and can be characterized by the decay of their Hermite coefficients. We consider the case of exponentially as well as polynomially decaying Hermite coefficients. For numerical integration, we either use Gauss-Hermite quadrature rules or algorithms based on quasi-Monte Carlo rules. We present upper and lower error bounds for these algorithms, and discuss their dependence on the dimension $d$. Furthermore, we comment on open problems for future research.
This document discusses random variables and their probability distributions. It defines different types of random variables such as real, complex, discrete, continuous, and mixed. It also defines key concepts such as sample space, cumulative distribution function (CDF), and probability density function (PDF) for both discrete and continuous random variables. Examples are provided to illustrate how to calculate the CDF and PDF for different random variables. Properties of CDFs and PDFs are also covered.
Lecture 2 predicates quantifiers and rules of inferenceasimnawaz54
1) Predicates become propositions when variables are quantified by assigning values or using quantifiers. Quantifiers like ∀ and ∃ are used to make statements true or false for all or some values.
2) ∀ (universal quantifier) means "for all" and makes a statement true for all values of a variable. ∃ (existential quantifier) means "there exists" and makes a statement true if it is true for at least one value.
3) Predicates with unbound variables are neither true nor false. Binding variables by assigning values or using quantifiers turns predicates into propositions that can be evaluated as true or false.
Common Fixed Theorems Using Random Implicit Iterative Schemesinventy
This document summarizes research on common fixed point theorems using random implicit iterative schemes. It defines random Mann, Ishikawa, and SP iterative schemes. It also defines modified implicit random iterative schemes associated with families of random asymptotically nonexpansive operators. The paper proves the convergence of two random implicit iterative schemes to a random common fixed point. This generalizes previous results and provides new convergence theorems for random operators in Banach spaces.
This document discusses two continuous probability distributions: the uniform distribution and the exponential distribution. It provides the probability density functions and cumulative distribution functions for each distribution. Examples are also given to demonstrate calculating probabilities using each distribution.
Binomial, Geometric and Poisson distributions in excelBrent Heard
This document provides instructions for downloading and using Excel templates to calculate binomial, geometric, and Poisson distributions. It demonstrates how to access and set up the templates from a website. Examples are worked through for each distribution type to show how values can be input and the relevant probabilities calculated. The templates also automatically provide additional descriptive statistics for the distributions.
The document discusses several continuous probability distributions including the uniform, normal, and exponential distributions. It provides the probability density functions and key characteristics of each distribution. Examples are given to demonstrate calculating probabilities and parameter values for the uniform and normal distributions. Excel functions are also introduced for computing probabilities and values for normal distributions.
Hearing loss is one of the most common human impairments. It is estimated that by the year 2015 more than 700 million people will suffer mild deafness. Most can be helped by hearing aid devices, depending on the severity of their hearing loss. This paper describes the implementation and characterization details of a dual-channel transmitter front end (TFE) for digital hearing aid (DHA) applications that uses novel micro-electromechanical-systems (MEMS) audio transducers and ultra-low-power, power-scalable analog-to-digital converters (ADCs), enabling a very low form factor, energy-efficient implementation for next-generation DHA. The contribution of the design is the implementation of the dual-channel MEMS microphones and the power-scalable ADC system.
A Novel Method for Prevention of Bandwidth Distributed Denial of Service Attacks, by IJERD Editor
Distributed Denial of Service (DDoS) attacks have become a massive threat to the Internet, and the traditional architecture of the Internet is vulnerable to them. An attacker first acquires an army of zombies, which is then instructed when to start an attack and whom to target. In this paper, the different techniques used to perform DDoS attacks, the tools used to launch them, and countermeasures for detecting attackers and eliminating Bandwidth Distributed Denial of Service (B-DDoS) attacks are reviewed, covering the various flooding techniques employed in such attacks.
The main purpose of this paper is to design an architecture that can reduce Bandwidth Distributed Denial of Service attacks and keep the victim site or server available for normal users by eliminating the zombie machines. Its primary focus is to discuss how normal machines are turned into zombies (bots), how an attack is initiated, the DDoS attack procedure, and how an organization can save its server from becoming a DDoS victim. To present this, we implemented a simulated environment with Cisco switches, routers, a firewall, some virtual machines, and some attack tools to display a real DDoS attack. Using time scheduling, resource limiting, system logs, access control lists, and a modular policy framework, we stopped the attack and identified the attacker (bot) machines.
This document provides an overview of key concepts in continuous probability distributions, including the uniform and normal distributions. It discusses computing probabilities using these distributions, such as finding the probability of an observation occurring between two values. Examples are provided to demonstrate calculating means, standard deviations, and probabilities for uniform and normal distributions based on real-world scenarios. Formulas and Excel functions are also presented for determining values and areas under the normal curve.
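A minimal sketch of the between-two-values calculation via the normal CDF; the mean, standard deviation, and interval are assumed, not the document's numbers:

```python
# Sketch: P(a <= X <= b) for a normal random variable, via CDF differences.
from scipy import stats

mu, sigma = 100.0, 15.0      # assumed mean and standard deviation
a, b = 90.0, 120.0           # assumed interval
p = stats.norm.cdf(b, mu, sigma) - stats.norm.cdf(a, mu, sigma)
print(p)                     # P(90 <= X <= 120) ~ 0.656
```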
This document defines key terms and concepts related to probability distributions, including discrete and continuous random variables, and the mean, variance, and standard deviation of probability distributions. It also describes the characteristics and computations for the binomial, hypergeometric, and Poisson probability distributions. Examples are provided to illustrate how to calculate probabilities using these three specific probability distributions.
Binomial and Poisson Probability Distributions, by Prateek Singla
The document discusses the binomial and Poisson distributions. The binomial distribution describes random events with two possible outcomes, like success/failure. The Poisson distribution models rare, independent events occurring randomly over an interval of time or space. An example calculates the probability of defective thermometers using the binomial distribution. It also fits a Poisson distribution to automobile accident data from a 50-day period.
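A minimal sketch of fitting a Poisson distribution to count data of that kind; the toy accident counts below are assumptions, not the document's 50-day data:

```python
# Sketch: fitting a Poisson distribution to toy accident-count data.
# The MLE of the Poisson rate is simply the sample mean.
import numpy as np
from scipy import stats

accidents_per_day = np.array([0, 1, 2, 0, 3, 1, 0, 2, 1, 0])  # assumed counts
lam_hat = accidents_per_day.mean()        # maximum likelihood estimate of rate

# Expected frequencies under the fitted Poisson, for counts 0..3:
for k in range(4):
    print(k, stats.poisson.pmf(k, mu=lam_hat) * accidents_per_day.size)
```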
Probability
Random variables and Probability Distributions
The Normal Probability Distributions and Related Distributions
Sampling Distributions for Samples from a Normal Population
Classical Statistical Inferences
Properties of Estimators
Testing of Hypotheses
Relationship between Confidence Interval Procedures and Tests of Hypotheses.
The document discusses various probability distributions including discrete and continuous distributions. It provides examples of discrete distributions such as the binomial, geometric, and Poisson distributions. It also discusses continuous distributions like the normal, exponential, and other distributions. The key points are that probability distributions describe the probabilities of possible outcomes of random variables, can be discrete or continuous, and the normal distribution is important due to the central limit theorem.
Interpolation techniques - Background and implementation, by Quasar Chunawala
This document discusses interpolation techniques, specifically Lagrange interpolation. It begins by introducing the problem of interpolation - given values of an unknown function f(x) at discrete points, finding a simple function that approximates f(x).
It then discusses using Taylor series polynomials for interpolation when the function value and its derivatives are known at a point. The error in interpolation approximations is also examined.
The main part discusses Lagrange interpolation: given N+1 data points (xi, f(xi)), there exists a unique interpolating polynomial PN(x) of degree at most N that passes through all the points. This is proved using the non-zero Vandermonde determinant. Lagrange's interpolating polynomial is then introduced as a solution.
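A minimal sketch of evaluating the Lagrange interpolating polynomial directly from its basis form; the data points are assumed for illustration:

```python
# Sketch: evaluate the unique degree <= N interpolant through (xs[i], ys[i]) at x.
def lagrange_eval(xs, ys, x):
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)   # Lagrange basis polynomial l_i(x)
        total += yi * li
    return total

# Example: three points on f(x) = x^2; the interpolant reproduces it exactly.
print(lagrange_eval([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5))   # 2.25
```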
The document discusses various probability distributions including the binomial, Poisson, and normal distributions. It provides definitions and key properties of each distribution. It also discusses sampling with and without replacement as well as the Monte Carlo method for simulating physical systems using random sampling. The Monte Carlo method can be used to computationally estimate values like pi by simulating the throwing of darts at a circular target.
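A minimal sketch of the dart-throwing estimate of pi mentioned above; the sample size is an arbitrary choice:

```python
# Sketch: Monte Carlo estimate of pi. Uniform "darts" land in the unit square;
# the fraction inside the quarter circle approximates pi/4.
import numpy as np

rng = np.random.default_rng(4)
n = 1_000_000
x, y = rng.random(n), rng.random(n)
inside = (x**2 + y**2) <= 1.0
print(4 * inside.mean())   # ~ 3.1416 for large n
```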
1. The document discusses different types of probability distributions including discrete, continuous, binomial, Poisson, and normal distributions.
2. It provides examples of how to calculate probabilities and expected values for each distribution using concepts like probability density functions, mean, standard deviation, and combinations.
3. Key differences between distributions are highlighted, such as probabilities for continuous distributions being given by areas under a curve rather than by point masses, and the Poisson distribution approximating the binomial for large numbers of trials.
The document discusses random processes and probability theory concepts relevant to communication systems. It defines key terms like random variables, sample space, events, probability, and distributions. It describes different types of random variables like discrete and continuous, and their probability mass functions and density functions. It also discusses statistical measures like mean, variance, and covariance that are used to characterize random signals and compare signals. Specific random variables discussed include binomial and uniform. The document provides foundations for analyzing random signals in communication systems.
The document provides an overview of key concepts in probability theory and stochastic processes. It defines fundamental terms like sample space, events, probability, conditional probability, independence, random variables, and common probability distributions including binomial, Poisson, exponential, uniform, and Gaussian distributions. Examples are given for each concept to illustrate how it applies to modeling random experiments and computing probabilities. The three main axioms of probability are stated. Key properties and formulas for expectation, variance, and conditional expectation are also summarized.
Basics of probability in statistical simulation and stochastic programming, by SSA KPI
AACIMP 2010 Summer School lecture by Leonidas Sakalauskas. "Applied Mathematics" stream. "Stochastic Programming and Applications" course. Part 2.
More info at http://summerschool.ssa.org.ua
A polynomial interpolation algorithm is developed using Newton's divided-difference interpolating polynomials. The definition of monotony of a function is then used to define the least degree of polynomial that makes interpolation of the discrete given function efficient and consistent. The relation between the order of monotony of a particular function and the degree of the interpolating polynomial is justified by analyzing the relation between the derivatives of the function and the truncation error expression. In this algorithm, the number and arrangement of the data points do not matter, nor does it matter whether the points are regularly spaced. The algorithm can therefore be used to interpolate functions of one or several dependent variables. It automatically selects the data points nearest to the point where an interpolation is desired, following the criterion of symmetry. Indirectly, the algorithm also selects the number of data points, which is one higher than the order of the polynomial used, following the criterion of monotony. Finally, the complete algorithm is presented, and subroutines in Fortran code are provided as an addendum. Note that the degree of the interpolating polynomial is not among the arguments of these subroutines.
The document discusses random variables and probability distributions. It defines a random variable as a function that assigns a numerical value to each outcome in a sample space. Random variables can be discrete or continuous. The probability distribution of a random variable describes its possible values and the probabilities associated with each value. It then discusses the binomial distribution in detail as an example of a theoretical probability distribution. The binomial distribution applies when there are a fixed number of independent yes/no trials, each with the same constant probability of success.
1) The document reviews concepts from probability and statistics including discrete and continuous random variables, their distributions (e.g. binomial, Poisson, normal), and multivariate distributions.
2) It then discusses key properties of multivariate normal distributions including their probability density function and how marginal and conditional distributions can be derived from the joint distribution.
3) Concepts like independence, mean vectors, covariance matrices, and their implications are also covered as they relate to multivariate normal distributions.
1. The document covers probability axioms and rules including the additive rule, conditional probability, independence, and Bayes' rule. It also defines discrete and continuous random variables and their probability distributions.
2. Important discrete distributions discussed include the Bernoulli distribution for a binary outcome experiment and the binomial distribution for repeated Bernoulli trials.
3. Techniques for counting permutations, combinations, and sequences of events are presented to handle probability problems involving counting.
This document provides an overview of one-dimensional random variables including definitions, types (discrete vs continuous), and probability distributions. It defines a random variable as a function that assigns a numerical value to each outcome of a random experiment. Random variables can be either discrete, taking on countable values, or continuous, assuming any value in an interval. The probability distribution of a discrete random variable is defined by a probability mass function, while a continuous random variable has a probability density function. Examples are given of both types of random variables and their distributions.
This document provides an overview of some common continuous and discrete probability distributions, including their probability density/mass functions, cumulative distribution functions, expected values, variances, and moment generating functions. It covers standard and rescaled versions of distributions like the uniform, normal, exponential, gamma, binomial, Poisson, geometric, negative binomial, and hypergeometric distributions. Formulas are provided for transforming between standard and rescaled versions.
2 Review of Statistics, by WeihanKhor2
This document provides an overview of discrete probability distributions, including the binomial and Poisson distributions.
1) It defines key concepts such as random variables, probability mass functions, and expected value as they relate to discrete random variables.
2) The binomial distribution describes independent Bernoulli trials with a constant probability of success, and is used to calculate probabilities of outcomes from events like coin flips.
3) The Poisson distribution approximates the binomial when the number of trials is large and the probability of success is small. It models rare, independent events with a constant average rate and can be used for problems involving traffic accidents or natural disasters.
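A minimal sketch comparing the exact binomial probability with its Poisson approximation; n and p below are assumed illustrative values:

```python
# Sketch: Poisson approximation to the binomial for large n, small p.
from scipy import stats

n, p = 500, 0.01     # many trials, rare success; mean n*p = 5
for k in (0, 3, 5, 10):
    exact = stats.binom.pmf(k, n, p)
    approx = stats.poisson.pmf(k, mu=n * p)
    print(k, round(exact, 5), round(approx, 5))   # values nearly coincide
```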
This document provides an overview of Dirichlet processes and their applications. It begins with background on probability mass functions and density functions. It then discusses the probability simplex and the Dirichlet distribution. The Dirichlet process is defined as a distribution over distributions that allows modeling probability distributions over infinite sample spaces. An example application involves using Dirichlet processes to learn hierarchical morphology paradigms by modeling stems and suffixes as being generated independently from Dirichlet processes. References for further reading are also provided.
This lecture covers random variables and probability distributions important in genetics and genomics. It defines random variables and discusses two types: discrete and continuous. Probability distributions of random variables include the probability mass function (pmf) for discrete variables and the probability density function (pdf) for continuous variables. Key distributions covered include the binomial, hypergeometric, Poisson, and normal distributions. It also discusses using cumulative distribution functions (CDFs) to calculate probabilities, and the concepts of expectation, variance, and the central limit theorem.
Similar to Welcome to International Journal of Engineering Research and Development (IJERD)
Influence of tensile behaviour of slab on the structural Behaviour of shear c..., by IJERD Editor
A composite beam is composed of a steel beam and a slab connected by means of shear connectors, like studs installed on the top flange of the steel beam, to form a structure behaving monolithically. This study analyzes the effects of the tensile behavior of the slab on the structural behavior of the shear connection, such as slip stiffness and maximum shear force, in composite beams subjected to hogging moment. The results show that the shear studs located in the crack-concentration zones caused by large hogging moments sustain significantly smaller shear force and slip stiffness than those in other zones. Moreover, the reduction of slip stiffness in the shear connection appears to be closely related to the change in the tensile strain of the rebar as the load increases. Further experimental and analytical studies shall be conducted considering variables such as the reinforcement ratio and the arrangement of shear connectors to achieve efficient design of the shear connection in composite beams subjected to hogging moment.
Gold prospecting using Remote Sensing 'A case study of Sudan', by IJERD Editor
Gold has been extracted from northeast Africa for more than 5000 years, and this may be the first place where the metal was extracted. The Arabian-Nubian Shield (ANS) is an exposure of Precambrian crystalline rocks on the flanks of the Red Sea; the crystalline rocks are mostly Neoproterozoic in age. The ANS spans the nations of Israel, Jordan, Egypt, Saudi Arabia, Sudan, Eritrea, Ethiopia, Yemen, and Somalia. It consists of juvenile continental crust that formed between 900 and 550 Ma, when intra-oceanic arcs welded together along ophiolite-decorated sutures. Primary Au mineralization probably developed in association with the growth of the intra-oceanic arcs and the evolution of back-arcs. Multiple episodes of deformation have obscured the primary metallogenic setting, but at least some of the deposits preserve evidence that they originated as sea-floor massive sulphide deposits.
The Red Sea Hills Region is a vast span of rugged, harsh and inhospitable terrain with inimical moon-like features; nevertheless, since ancient times it has been famed as an abode of gold and was a major source of wealth for the Pharaohs of ancient Egypt, whose old workings have been periodically rediscovered through time. Recent endeavours by the Geological Research Authority of Sudan led to the discovery of a score of occurrences of gold and massive sulphide mineralization. In the nineties of the previous century, the Geological Research Authority of Sudan (GRAS), in cooperation with BRGM, utilized Landsat TM satellite data and the spectral ratio technique to map possible mineralized zones in the Red Sea Hills of Sudan. The outcome of the study mapped a gossan-type gold mineralization. The band ratio technique was applied to the Arbaat area and the signature of an alteration zone was detected; alteration zones are commonly associated with mineralization. A field check confirmed the existence of a stockwork of gold-bearing quartz in the alteration zone. Another type of gold mineralization discovered using remote sensing is the gold associated with metachert in the Atmur Desert.
Reducing Corrosion Rate by Welding Design, by IJERD Editor
This document summarizes a study on reducing corrosion rates in steel through welding design. The researchers tested different welding groove designs (X, V, 1/2X, 1/2V) and preheating temperatures (400°C, 500°C, 600°C) on ferritic malleable iron samples. Testing found that X and V groove designs with 500°C and 600°C preheating had corrosion rates of 0.5-0.69% weight loss after 14 days, compared to 0.57-0.76% for 400°C preheating. Higher preheating reduced residual stresses, which decreased corrosion. Residual stresses were 1.7 MPa for the optimal X groove with 600°C preheating.
Router 1X3 – RTL Design and Verification, by IJERD Editor
Routing is the process of moving a packet of data from source to destination; it enables messages to pass from one computer to another and eventually reach the target machine. A router is a networking device that forwards data packets between computer networks. It is connected to two or more data lines from different networks (as opposed to a network switch, which connects data lines from one single network). This paper mainly emphasizes the study of the router device and its top-level architecture, and shows how the various sub-modules of the router, i.e. the register, FIFO, FSM and synchronizer, are synthesized, simulated, and finally connected to the top module.
Active Power Exchange in Distributed Power-Flow Controller (DPFC) At Third Ha..., by IJERD Editor
This paper presents a component within the flexible ac-transmission system (FACTS) family, called the distributed power-flow controller (DPFC). The DPFC is derived from the unified power-flow controller (UPFC) with the common dc link eliminated. The DPFC has the same control capabilities as the UPFC, comprising adjustment of the line impedance, the transmission angle, and the bus voltage. The active power exchange between the shunt and series converters, which is through the common dc link in the UPFC, is now through the transmission lines at the third-harmonic frequency. The DPFC employs multiple small-size single-phase converters, which reduces the cost of equipment, requires no voltage isolation between phases, and increases redundancy and thereby reliability. The principle and analysis of the DPFC are presented in this paper, and the corresponding simulation results, carried out on a scaled prototype, are also shown.
Mitigation of Voltage Sag/Swell with Fuzzy Control Reduced Rating DVR, by IJERD Editor
Power quality has become an increasingly pivotal issue from the point of view of industrial electricity consumers in recent times. Modern industries employ sensitive power electronic equipment, control devices, and non-linear loads as part of automated processes to increase energy efficiency and productivity. Voltage disturbances are the most common power quality problem, owing to the increased use of large numbers of sophisticated and sensitive electronic equipment in industrial systems. This paper discusses the design and simulation of a dynamic voltage restorer (DVR) for improving power quality and reducing the harmonic distortion of sensitive loads. Power quality problems occur at non-standard voltage, current, and frequency, and electronic devices are very sensitive loads. In power systems, voltage sag, swell, flicker, and harmonics are some of the problems affecting sensitive loads. The compensation capability of a DVR depends primarily on the maximum voltage injection ability and the amount of stored energy available within the restorer. This device is connected in series with the distribution feeder at medium voltage. A fuzzy logic control is used to produce the gate pulses for the control circuit of the DVR, and the circuit is simulated using MATLAB/SIMULINK software.
Study on the Fused Deposition Modelling In Additive Manufacturing, by IJERD Editor
The additive manufacturing process, also popularly known as 3-D printing, is a process where a product is created in a succession of layers. It is based on a novel materials-incremental manufacturing philosophy. Unlike conventional manufacturing processes, where material is removed from a given workpiece to derive the final shape of a product, 3-D printing builds the product from scratch, obviating the need to cut away material and preventing wastage of raw materials. Commonly used raw materials for the process are ABS plastic, PLA, and nylon; recently the use of gold, bronze, and wood has also been implemented. The process imposes virtually no complexity constraint, as an object of any shape and size can be manufactured.
Spyware triggering system by particular string valueIJERD Editor
This computer program can be used for good or bad purposes, in hacking or in general use. It
can be regarded as the next step beyond hacking techniques such as keyloggers and spyware. In this system, once
the user or hacker stores a particular string as input, the software continually compares the user's typing
activity with that stored string and, if it matches, launches the spyware program.
A Blind Steganalysis on JPEG Gray Level Image Based on Statistical Features a...IJERD Editor
This paper presents a blind steganalysis technique to effectively attack JPEG steganographic
schemes, i.e. Jsteg, F5, Outguess and DWT-based. The proposed method exploits the correlations between
block-DCT coefficients from intra-block and inter-block relations, and the statistical moments of characteristic
functions of the test image are selected as features. The features are extracted from the BDCT JPEG 2-array.
A Support Vector Machine with cross-validation is implemented for the classification. The proposed scheme
gives improved outcomes in attacking these schemes.
Secure Image Transmission for Cloud Storage System Using Hybrid SchemeIJERD Editor
Data over the cloud is transferred or transmitted between servers and users. Privacy of that
data is very important, as it involves personal information. If the data are hacked, they can be
used to defame a person socially. Delays may also occur during data transmission, e.g. in mobile
communication where bandwidth is low. Hence compression algorithms are proposed for fast and efficient
transmission, encryption is used for security, and blurring provides an additional
layer of security. These algorithms are hybridized to achieve robust and efficient security and
transmission over a cloud storage system.
Application of Buckley-Leverett Equation in Modeling the Radius of Invasion i...IJERD Editor
A thorough review of existing literature indicates that the Buckley-Leverett equation only analyzes
waterflood practices directly, without any adjustments for real reservoir scenarios. By doing so, quite a number
of errors are introduced into these analyses. Also, for most waterflood scenarios, a radial investigation is more
appropriate than a simplified linear system. This study investigates the adoption of the Buckley-Leverett
equation to estimate the radius of invasion of the displacing fluid during waterflooding. The model is also
adopted for a microbial flood, and a comparative analysis is conducted for both waterflooding and microbial
flooding. The results of the analysis not only record a success in determining the radial distance of the leading
edge of water during the flooding process, but also give a clearer understanding of the applicability of
microbes to enhance oil production through in-situ production of bio-products like biosurfactants, biogenic
gases, bio-acids, etc.
Gesture Gaming on the World Wide Web Using an Ordinary Web CameraIJERD Editor
Gesture gaming is a method by which users having a laptop/PC/Xbox play games using natural or
bodily gestures. This paper presents a way of playing free flash games on the internet using an ordinary webcam
with the help of open-source technologies. Emphasis in human activity recognition is given to pose
estimation and the consistency of the pose of the player. These are estimated with the help of an ordinary web
camera, at resolutions from VGA up to 20 megapixels. Our work involved giving a 10-second documentary to
the user on how to play a particular game using gestures and the various kinds of gestures that can be
performed in front of the system. The initial RGB values for the gesture component are obtained by
instructing the user to place the component in a red box for about 10 seconds after the short documentary,
before the game starts. The system then opens the concerned game on popular flash game sites like
Miniclip, Games Arcade, GameStop etc., loads the game by clicking at various places, and brings the state to a
point where the user need only perform gestures to start playing. At any point the user can call
off the game by hitting the Esc key, and the program will release all of the controls and return to the desktop. It
was noted that the results obtained using an ordinary webcam matched those of the Kinect, and users could
relive the gaming experience of free flash games on the net. Effective in-game advertising could
therefore also be achieved, resulting in disruptive growth for advertising firms.
Hardware Analysis of Resonant Frequency Converter Using Isolated Circuits And...IJERD Editor
The LLC resonant frequency converter is basically a combination of series and parallel resonant circuits. The
LCC resonant converter has the disadvantage that, although it has two resonant frequencies, the
lower resonant frequency lies in the ZCS region [5]. For this application, we are not able to design the converter
to work at this resonant frequency. The LLC resonant converter has existed for a very long time, but because of
the unknown characteristics of this converter it was used as a series resonant converter with a basically passive
(resistive) load. Here, designed to operate at a switching frequency higher than the resonant frequency of the
series resonant tank of Lr and Cr, the converter acts very similarly to a series resonant converter. The benefit of
the LLC resonant converter is its narrow switching frequency range with light load [6]. The control circuit plays
a very important role; the 555 timer used here provides a clean square wave, since the control circuit introduces
no slew rate, which keeps the square wave sharp. The dead-band circuit provides an
exclusive dead band in microseconds so as to avoid the simultaneous firing of the two pairs of IGBTs when one
pair switches off and the other on within the slightest period of time. The isolator circuit is associated with
each and every circuit used, because it acts as a driver and provides isolation; each IGBT is provided with one
exclusive transformer supply [3]. The IGBTs are fired using the appropriate signals from the previous boards,
and finally a high-frequency rectifier circuit with a filtering capacitor is used to obtain an exact dc
waveform. The basic goal of this particular analysis is to observe the waveforms and characteristics of
converters with differently positioned passive elements in the form of tank circuits.
Simulated Analysis of Resonant Frequency Converter Using Different Tank Circu...IJERD Editor
The LLC resonant frequency converter is basically a combination of series and parallel resonant circuits. The
LCC resonant converter has the disadvantage that, although it has two resonant frequencies, the
lower resonant frequency lies in the ZCS region [5]. For this application, we are not able to design the converter
to work at this resonant frequency. The LLC resonant converter has existed for a very long time, but because of
the unknown characteristics of this converter it was used as a series resonant converter with a basically passive
(resistive) load. Here, designed to operate at a switching frequency higher than the resonant frequency of the
series resonant tank of Lr and Cr, the converter acts very similarly to a series resonant converter. The benefit of
the LLC resonant converter is its narrow switching frequency range with light load [6]. The control circuit plays
a very important role; the 555 timer used here provides a clean square wave, since the control circuit introduces
no slew rate, which keeps the square wave sharp. The dead-band circuit provides an
exclusive dead band in microseconds so as to avoid the simultaneous firing of the two pairs of IGBTs when one
pair switches off and the other on within the slightest period of time. The isolator circuit is associated with
each and every circuit used, because it acts as a driver and provides isolation; each IGBT is provided with one
exclusive transformer supply [3]. The IGBTs are fired using the appropriate signals from the previous boards,
and finally a high-frequency rectifier circuit with a filtering capacitor is used to obtain an exact dc
waveform. The basic goal of this particular analysis is to observe the waveforms and characteristics of
converters with differently positioned passive elements in the form of tank circuits. The supporting simulation
is done using the PSIM 6.0 software tool.
An amateur radio operator, also known as a HAM, communicates with other HAMs through radio
waves. Wireless communication in which the Moon is used as a natural satellite is called Moon-bounce or the
EME (Earth-Moon-Earth) technique. Long-distance communication (DXing) using Very High Frequency (VHF)
amateur HAM radio used to be difficult. Yet even with a modest setup comprising a good transceiver, power
amplifier and high-gain antenna with high directivity, VHF DXing is possible. Generally a 2x11 Yagi antenna
along with a rotor to set the horizontal and vertical angles is used. Moon-tracking software gives the exact
location and visibility of the Moon at both stations, and other vital data needed to acquire the real-time position of the Moon.
“MS-Extractor: An Innovative Approach to Extract Microsatellites on „Y‟ Chrom...IJERD Editor
Simple Sequence Repeats (SSR), also known as microsatellites, have been extensively used as
molecular markers due to their abundance and high degree of polymorphism. The nucleotide sequences of
polymorphic forms of the same gene should be 99.9% identical, so extracting microsatellites from the gene is
crucial. When an individual's microsatellite repeat counts are compared, a large difference indicates a disorder.
The Y chromosome likely contains 50 to 60 genes that provide instructions for making proteins. Because only
males have the Y chromosome, the genes on this chromosome tend to be involved in male sex determination and
development. Several microsatellite extractors exist, but they fail to extract microsatellites from large data sets
gigabytes or terabytes in size. The proposed tool, "MS-Extractor: An Innovative Approach to Extract
Microsatellites on 'Y' Chromosome", can extract both perfect and imperfect microsatellites from large
data sets of the human 'Y' genome. The proposed system uses string matching with a sliding-window approach
to locate microsatellites and extract them.
Importance of Measurements in Smart GridIJERD Editor
Driven by the need for reliable supply, independence from fossil fuels, and the capability to provide clean
energy at a fixed and lower cost, the existing power grid structure is transforming into the Smart Grid. The
development of a smart energy distribution grid is a current goal of many nations. A Smart Grid should have
new capabilities such as self-healing, high reliability, energy management, and real-time pricing. This new era
of the smart future grid will lead to major changes in existing technologies at the generation, transmission and
distribution levels. The incorporation of renewable energy resources and distributed generators in the existing
grid will increase the complexity, optimization problems and instability of the system. This will lead to a
paradigm shift in the instrumentation and control requirements of Smart Grids for a high-quality, stable and
reliable electricity supply. The monitoring of the grid system's state and stability relies on the
availability of reliable measurement data. In this paper the measurement areas that highlight new
measurement challenges, the development of Smart Meters, and the critical parameters of electric energy to be
monitored for improving the reliability of power systems are discussed.
Study of Macro level Properties of SCC using GGBS and Lime stone powderIJERD Editor
The document summarizes a study on the use of ground granulated blast furnace slag (GGBS) and limestone powder to replace cement in self-compacting concrete (SCC). Tests were conducted on SCC mixes with 0-50% replacement of cement with GGBS and 0-20% replacement with limestone powder. The results showed that replacing 30% of cement with GGBS and 15% with limestone powder produced SCC with the highest compressive strength of 46MPa, meeting fresh property requirements. The study concluded that this ternary blend of cement, GGBS and limestone powder can improve SCC properties while reducing costs.
Seismic Drift Consideration in soft storied RCC buildings: A Critical ReviewIJERD Editor
Reinforced concrete frame buildings are becoming increasingly common in urban India. Many such
buildings constructed in recent times have a special feature: the ground storey is left open for the purpose of
parking, i.e. columns in the ground floor do not have any partition walls (of either masonry or
reinforced concrete) between them. Such buildings are often called open ground storey buildings. The
relative horizontal displacement in the ground storey is much larger than in the storeys above it, while the total
horizontal earthquake force it can carry is significantly smaller. A soft or weak
storey may exist at any storey level, not only at the ground storey. The presence of walls in the upper storeys
makes them much stiffer than the open ground storey. Still, multi-storey reinforced concrete buildings with
open ground storeys continue to be built in India. It is imperative to know the behaviour of
soft-storey buildings under seismic load in order to design various retrofit strategies. Hence it is important to
study and understand the response of such buildings and to make them earthquake resistant based
on this study, to prevent their collapse and save loss of life and property.
Post processing of SLM Ti-6Al-4V Alloy in accordance with AMS 4928 standardsIJERD Editor
This research work was done to find out the impact of the AMS 4928 standard heat treatment on
Selective Laser Melted (SLM) Ti-6Al-4V Grade 23 alloy. Ti-6Al-4V Grade 23 is an Extra Low Interstitial
version of the Ti alloy with lower impurities, and is an α+β type alloy at room temperature. SLM is a method of
Additive Manufacturing based on a powder-bed system. Each powder layer of a few microns is coated and a
laser beam is scanned to melt the metal powder according to the specification of the part, the bed subsequently
moving downwards layer by layer. The test coupons were first heat treated according to the above-mentioned
standard. Tensile testing and microstructural analysis were done to compare the results with those given in
AMS 4928. The yield stress and percentage elongation achieved in the test coupons are better than the
minimum requirements of the AMS 4928 standard. Coarse lamellar grain structures were obtained, with no
continuous network of alpha at prior beta grain boundaries.
Welcome to International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development
e-ISSN: 2278-067X, p-ISSN: 2278-800X, www.ijerd.com
Volume 5, Issue 10 (January 2013), PP. 11-18
A Case Study of Bank Queueing Model
Kasturi Nirmala¹, Dr. Shahnaz Bathul, PhD [I.I.T. Kharagpur]²
¹Flat No: 108, F Block, Sathyanarayana Enclave, Madinaguda, Miyapur, Hyderabad, 500049, (A.P), India
²Professor, Dept. of Mathematics, JNTUH College of Engineering, Kukatpally, Hyderabad, 500085 (A.P), India.
Abstract:- This paper deals with queueing theory and the analysis of a queueing system by using
probability curves. Starting with the basics of the relevant distributions and important concepts of queueing
theory, probability curves of the Gamma distribution are used to analyse a banking service.
Keywords:- Arrival rate, Service rate, Poisson process, Probability distribution, Gamma function,
Gamma distributions, Probability graph, Random variable.
I. INTRODUCTION
Important concepts
Discrete Random variables [4]
If ξ is an experiment having a sample space S and X is a function that assigns a real number X(e) to
every outcome e Є S, then X is called a random variable. When it is associated with counting, the
random variable is termed a discrete random variable.
Continuous random variable
If ξ is an experiment having a sample space S and X is a function whose range space consists of one or more
intervals of the real line, then X is called a continuous random variable.
Probability Distribution of a Random variable
The probability distributions of a random variable are characterized as follows.
If X is a discrete random variable, we associate a number $p_X(x_i) = P(X = x_i)$ with each outcome $x_i$ in $R_X$ for
$i = 1, 2, \ldots, n, \ldots$, where the numbers $p_X(x_i)$ satisfy the following:
$p_X(x_i) \ge 0$ for all $i$,
$\sum_i p_X(x_i) = 1$.
The function $p_X$ is called the probability law of the random variable, and the pairs $[x_i, p_X(x_i)]$ are called the
probability distribution of X.
If X is a continuous random variable, then for $\delta > 0$,
$P(X = x) = \lim_{\delta \to 0} [F_X(x + \delta) - F_X(x)] = 0.$
We define the probability density function $f_X(x)$ as $f_X(x) = \frac{dF_X(x)}{dx}$,
and it follows that $F_X(x) = \int_{-\infty}^{x} f_X(u)\,du$, with
$f_X(x) \ge 0$ for all $x \in R_X$.
Then $P\{e \in S : a < X(e) \le b\} = P(a < X \le b) = \int_a^b f_X(x)\,dx.$
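To make the relation between the CDF and the pdf concrete, the following minimal Python sketch checks numerically that $P(a < X \le b) = F_X(b) - F_X(a) = \int_a^b f_X(x)\,dx$ for an exponential random variable; the rate lam = 2.0 and the interval (0.5, 1.5] are illustrative assumptions, not values from the paper.

```python
# A minimal numerical check of the CDF/pdf relations above, using an
# exponential random variable as the example; the rate lam = 2.0 and the
# interval (0.5, 1.5] are assumed for illustration.
import math

lam = 2.0
f = lambda x: lam * math.exp(-lam * x)      # pdf f_X(x)
F = lambda x: 1.0 - math.exp(-lam * x)      # CDF F_X(x)

a, b, n = 0.5, 1.5, 100_000
h = (b - a) / n
# Trapezoidal approximation of the integral of f_X over [a, b]
integral = (f(a) + f(b)) * h / 2 + sum(f(a + i * h) for i in range(1, n)) * h

print(F(b) - F(a))   # P(a < X <= b) computed from the CDF
print(integral)      # the same probability computed from the pdf
```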
Characteristics of some discrete distributions:
The stem of these probability distributions is Bernoulli trials: experiments consisting of n trials, where the
possible outcome of each trial is success or failure.
Bernoulli’s Distribution:
Perform an experiment $\xi_i$ (the $i$-th trial) whose sample space is $S_i = \{s, f\}$. The random variable $X_i$ is
defined by mapping $X_i(s)$ to 1 and $X_i(f)$ to 0 in the real numbers. The range space $R_X$ is the subset
$\{0, 1\}$ of the real numbers. The probability law of the Bernoulli distribution with respect to this random
variable is
$p_i(r_i) = p$ when $r_i = 1$, and
$p_i(r_i) = 1 - p$ when $r_i = 0$, for $i = 1, 2, \ldots, n$.
The random process of Bernoulli trials was the origin of several discrete distributions: the binomial, the
geometric, the Pascal, and the negative binomial are all based on the outcomes of Bernoulli trials.
The Binomial Distribution: The random variable X defined as "the number of successes in n
Bernoulli trials" has the binomial probability distribution
$P(r) = \binom{n}{r} p^r (1-p)^{n-r}$ for $r = 0, 1, 2, \ldots, n$,
$P(r) = 0$ otherwise.
Here X is the number of successes in n Bernoulli trials.
Geometric distribution:
The geometric distribution also depends on Bernoulli trials. The difference between the
binomial and geometric settings is that the number of trials in the binomial case is fixed, while in the
geometric case it is not. The random variable is defined as "the number of trials
required to get the first success". The probability distribution of the random variable X is
$P(r) = p\,q^{r-1}$ for $r = 1, 2, 3, \ldots$ (where $q = 1 - p$),
$P(r) = 0$ otherwise.
The Pascal distribution:
The Pascal distribution also has its origin in Bernoulli trials. Here the random variable X is described as
the trial on which the r-th success occurs, where r is an integer. The probability distribution of X is
$P(n) = \binom{n-1}{r-1} p^r (1-p)^{n-r}$ for $n = r, r+1, r+2, \ldots$,
$P(n) = 0$ otherwise.
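The three Bernoulli-based pmfs above translate directly into code. The sketch below evaluates each of them for assumed illustrative parameters (n = 10, p = 0.3); the function names are our own, not from the paper.

```python
# Sketches of the three Bernoulli-based pmfs defined above; n = 10 and
# p = 0.3 are assumed illustrative parameters.
from math import comb

def binomial_pmf(r, n, p):
    # P(r) = C(n, r) p^r (1-p)^(n-r) for r = 0, 1, ..., n
    return comb(n, r) * p**r * (1 - p)**(n - r) if 0 <= r <= n else 0.0

def geometric_pmf(r, p):
    # P(r) = p q^(r-1), q = 1-p: first success occurs on trial r
    return p * (1 - p)**(r - 1) if r >= 1 else 0.0

def pascal_pmf(n, r, p):
    # P(n) = C(n-1, r-1) p^r (1-p)^(n-r): the r-th success occurs on trial n
    return comb(n - 1, r - 1) * p**r * (1 - p)**(n - r) if n >= r >= 1 else 0.0

print(binomial_pmf(3, 10, 0.3))   # exactly 3 successes in 10 trials
print(geometric_pmf(4, 0.3))      # first success on trial 4
print(pascal_pmf(7, 3, 0.3))      # 3rd success occurs on trial 7
```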
The Poisson distribution:
This distribution is used for real-world phenomena. We are not performing artificial experiments in
which the result is either success or failure; we consider only time-oriented occurrences (arrivals). Unlike
the previous distributions, we do not make n trials but instead take the number of arrivals that occur in a
particular time interval [0, t].
This is a discrete distribution and is developed in two ways. The first development involves the
Poisson process. The second development shows the Poisson distribution to be a limiting case of the
binomial distribution.
Development of the Poisson process: In this process the random variable has practical applicability. In
defining the Poisson process, we initially consider a collection of arbitrary time-oriented occurrences, also
called "arrivals" or "births". The random variable $X_t$ is the "number of arrivals that occur in the time interval
[0, t]". The range space is $R_{X_t} = \{0, 1, 2, 3, \ldots\}$. In developing the Poisson probability distribution of
the random variable $X_t$, it is necessary to make some assumptions. They are:
1. The numbers of arrivals during non-overlapping time intervals are independent random variables.
2. We assume that there exists a positive quantity $\lambda$ such that for any small interval $\Delta t$ the
following postulates are satisfied:
The probability that exactly one arrival occurs in an interval of width $\Delta t$ is approximately $\lambda \Delta t$. The
approximation is in the sense that the probability is
$\lambda \Delta t + o_1(\Delta t)$, where $o_1(\Delta t)/\Delta t \to 0$ as $\Delta t \to 0$.
The probability that exactly zero arrivals occur in the interval is approximately
$1 - \lambda \Delta t$. Again, this is in the sense that it equals $1 - \lambda \Delta t + o_2(\Delta t)$, where $o_2(\Delta t)/\Delta t \to 0$
as $\Delta t \to 0$.
The probability that two or more arrivals occur in the interval is a quantity
$o_3(\Delta t)$, where $o_3(\Delta t)/\Delta t \to 0$ as $\Delta t \to 0$.
The parameter $\lambda$ is sometimes called the mean arrival rate or mean occurrence rate. Poisson
developed and summarized the following:
$P_n(t) = \frac{(\lambda t)^n e^{-\lambda t}}{n!}$ for $n = 0, 1, 2, 3, \ldots$,
$P_n(t) = 0$ otherwise.
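As a sanity check on the law $P_n(t)$, one can simulate the process: interarrival times of a Poisson process with rate $\lambda$ are exponentially distributed, so counting arrivals in [0, t] over many replications should reproduce the pmf. The values lam = 4.0 and t = 1.0 below are assumed for illustration only.

```python
# A simulation check of P_n(t): interarrival times of a Poisson process
# are exponential with rate lam, so we count arrivals in [0, t] and
# compare empirical frequencies with (lam t)^n e^(-lam t) / n!.
# lam = 4.0 and t = 1.0 are assumed illustrative values.
import math
import random

lam, t, trials = 4.0, 1.0, 100_000
counts = {}
for _ in range(trials):
    elapsed, n = 0.0, 0
    while True:
        elapsed += random.expovariate(lam)   # next exponential interarrival
        if elapsed > t:
            break
        n += 1
    counts[n] = counts.get(n, 0) + 1

for n in range(8):
    empirical = counts.get(n, 0) / trials
    theoretical = (lam * t)**n * math.exp(-lam * t) / math.factorial(n)
    print(n, round(empirical, 4), round(theoretical, 4))
```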
The development of the Poisson distribution from the binomial:
The binomial distribution is
$P(r) = \binom{n}{r} p^r (1-p)^{n-r}$ for $r = 0, 1, 2, \ldots, n$, and $P(r) = 0$ otherwise.
If we let $np = c$, so that $p = c/n$ and $1 - p = 1 - c/n = (n-c)/n$, and if we then replace the terms involving
$p$ with the corresponding terms involving $c$, we obtain
$P(r) = \binom{n}{r} \left(\frac{c}{n}\right)^r \left(1-\frac{c}{n}\right)^{n-r}
= \frac{c^r}{r!} \left[(1)\left(1-\frac{1}{n}\right)\left(1-\frac{2}{n}\right)\cdots\left(1-\frac{r-1}{n}\right)\right] \left(1-\frac{c}{n}\right)^{n} \left(1-\frac{c}{n}\right)^{-r}$ ------------ (1)
Letting $n \to \infty$ and $p \to 0$ in such a way that $np = c$ remains fixed, the terms
$\left(1-\frac{1}{n}\right), \left(1-\frac{2}{n}\right), \ldots, \left(1-\frac{r-1}{n}\right)$ all
approach 1, as does $\left(1-\frac{c}{n}\right)^{-r}$, and we know that $\left(1-\frac{c}{n}\right)^{n} \to e^{-c}$ as $n \to \infty$.
Thus the limiting form of equation (1) is $P(r) = \frac{c^r e^{-c}}{r!}$, which is the Poisson distribution.
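The limit can also be illustrated numerically: holding np = c fixed while n grows, the binomial probabilities approach the Poisson probability $c^r e^{-c}/r!$. The choices c = 3.0 and r = 2 below are arbitrary illustrative values.

```python
# Numerical illustration of the limit above: with np = c held fixed,
# the binomial pmf approaches the Poisson pmf c^r e^(-c) / r! as n grows.
# c = 3.0 and r = 2 are assumed illustrative values.
import math
from math import comb

c, r = 3.0, 2
poisson = c**r * math.exp(-c) / math.factorial(r)
for n in (10, 100, 1000, 10000):
    p = c / n
    binom = comb(n, r) * p**r * (1 - p)**(n - r)
    print(n, round(binom, 6), "-> Poisson:", round(poisson, 6))
```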
Some continuous distributions:
The uniform distribution: A random variable X is said to follow the continuous uniform distribution on an
interval [a, b] if its density function is constant over the entire range of X, i.e. $f(x) = k$, $a \le x \le b$.
We know that $\int_a^b f(x)\,dx = 1$
$\Rightarrow \int_a^b k\,dx = 1$
$\Rightarrow k(b - a) = 1$
$\Rightarrow k = \frac{1}{b-a}$.
Therefore $f(x) = \frac{1}{b-a}$, $a \le x \le b$.
Then $P(c \le X \le d) = \int_c^d \frac{dx}{b-a} = \frac{d-c}{b-a}$.
The exponential distribution: Here the random variable is defined as "the time between occurrences". It has the
density function $f(x) = \lambda e^{-\lambda x}$ for $x \ge 0$,
and $f(x) = 0$ otherwise, where $\lambda$ is a real positive constant.
Gamma distribution: The Gamma density function is
$f(t) = \frac{\lambda}{\Gamma(r)} (\lambda t)^{r-1} e^{-\lambda t}$ for $t > 0$,
$f(t) = 0$ otherwise.
The parameters are $r > 0$ and $\lambda > 0$;
the parameter $r$ is called the shape parameter,
and the parameter $\lambda$ is called the scale parameter.
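A minimal sketch of this density in Python, using math.gamma for $\Gamma(r)$; the parameters r = 3.0 and lam = 0.5 are assumed for illustration. With r = 1 the density reduces to the exponential density defined above.

```python
# A minimal sketch of the Gamma density above; r = 3.0 and lam = 0.5
# are assumed illustrative parameters.
import math

def gamma_pdf(t, r, lam):
    # f(t) = lam / Gamma(r) * (lam t)^(r-1) * e^(-lam t) for t > 0
    if t <= 0:
        return 0.0
    return lam / math.gamma(r) * (lam * t)**(r - 1) * math.exp(-lam * t)

r, lam = 3.0, 0.5
# Crude check that the density integrates to about 1 over (0, 60)
h = 0.001
total = sum(gamma_pdf(i * h, r, lam) for i in range(1, 60_000)) * h
print(round(total, 4))            # close to 1
print(gamma_pdf(2.0, 1.0, lam))   # r = 1 gives lam * e^(-2 lam): exponential
```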
The Weibull distribution: The important use of the Weibull distribution is that it gives a fairly accurate
approximation to the probability law of many random variables. Time to failure of electrical and mechanical
components is one of the applications of this distribution. The probability density function of the Weibull
distribution (with location parameter $\gamma$, scale parameter $\delta$ and shape parameter $\beta$) is
$f(x) = \frac{\beta}{\delta}\left(\frac{x-\gamma}{\delta}\right)^{\beta-1} \exp\left[-\left(\frac{x-\gamma}{\delta}\right)^{\beta}\right]$ for $x \ge \gamma$,
$f(x) = 0$ otherwise.
Introduction
Constructive time expended constitutes a small percentage of the total time spent by us on various
activities. Severe traffic congestions and bottlenecks eat away a major chunk of time while travelling. Due to a
boom in accounts and growing population, a visit to the bank or post office results in a lot of time wastage as a
huge number of customers are waiting to be serviced. Super markets are usually overcrowded which leads to a
delay in making day to day purchases. In general, customers get irate when there is delay in getting their
respective works completed. On the other hand, employees providing service to customers also dread this
situation as it may lead to loss of business. This situation is a direct result of the lack of an organized
service-providing mechanism. A detailed study of queueing systems is a rational way of providing a solution to this problem.
Such a study helps us understand the size of the queue, behaviour of the customers in the queue, system capacity,
arrival process, service availability, service process in the system. This analysis helps in providing valuable
inputs to the management to take remedial measures.
A queue is a waiting line, and queueing theory is the mathematical theory of waiting lines. The customers
arriving at a queue may be calls, messages, persons, machines, tasks, etc. We identify the unit demanding
service, whether it is human or otherwise, as a customer. The unit providing service is known as the server. For
example: (1) vehicles requiring service wait for their turn in a service centre; (2) patients arrive at a hospital for
treatment; (3) shoppers face long billing queues in supermarkets; (4) passengers expend a lot of time from the
moment they enter the airport, starting with baggage, security checks and boarding.
Queueing theory studies the arrival process into the system, the waiting time in the queue, the waiting time in
the system and the service process. In general we observe the following types of behaviour among customers in
a queue:
Balking of Queue: Some customers decide not to join the queue after observing its long
length or insufficient waiting space. This is called balking.
Reneging of Queue: This concerns impatient customers. After being in the queue for some time, some
customers become impatient and may leave the queue. This phenomenon is called reneging.
Jockeying of Queue: Jockeying is a phenomenon in which customers move from one queue to another
in the hope that they will receive quicker service in the new position.
History
In a telephone system we provide communication paths between pairs of customers on demand.
A permanent communication path between every two telephone sets would be expensive and impractical. So, to
build a communication path between a pair of customers, a common pool of circuits is provided, which a
telephone set uses whenever required and returns to the pool after the call is completed. Calls therefore
experience delays when the server is busy. To reduce the delay we have to provide sufficient equipment, and to
find how much equipment must be provided we have to analyse the queue at the pool. In
1908 the Copenhagen Telephone Company requested Agner K. Erlang to work on the holding times in a
telephone switch. Erlang's task can be formulated as follows: what fraction of the incoming calls is lost because
of busy lines at the telephone exchange? First we should know the inter-arrival and service time distributions.
After collecting data, Erlang verified that Poisson arrivals and exponentially distributed service times were
appropriate mathematical assumptions. He found the steady-state probability that an arriving call is lost and the
steady-state probability that an arriving customer has to wait. Assuming an arrival rate $\lambda$ and a service
rate $\mu$, he derived formulae for loss and delay.
(1) The probability that an arriving call is lost (known as the Erlang B formula, or loss formula), for $n$
channels and offered load $a = \lambda/\mu$:
$B(n, a) = \frac{a^n/n!}{\sum_{k=0}^{n} a^k/k!}$
(2) The probability that an arriving call has to wait (known as the Erlang C formula, or delay formula):
$C(n, a) = \frac{\frac{a^n}{n!}\,\frac{n}{n-a}}{\sum_{k=0}^{n-1} \frac{a^k}{k!} + \frac{a^n}{n!}\,\frac{n}{n-a}}$
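Both formulas are straightforward to compute; the sketch below uses the standard recursion for Erlang B (numerically stabler than summing factorials directly) and derives Erlang C from it. The values n = 5 channels and offered load a = 3.0 Erlangs are assumed for illustration.

```python
# A sketch of the Erlang B and C formulas above, using the standard
# recursion for Erlang B; n = 5 and a = 3.0 are assumed values.
def erlang_b(n, a):
    # Recursion: B(0, a) = 1; B(k, a) = a B(k-1, a) / (k + a B(k-1, a))
    b = 1.0
    for k in range(1, n + 1):
        b = a * b / (k + a * b)
    return b

def erlang_c(n, a):
    # C(n, a) = n B(n, a) / (n - a (1 - B(n, a))), valid for a < n
    b = erlang_b(n, a)
    return n * b / (n - a * (1 - b))

print(erlang_b(5, 3.0))   # probability that an arriving call is lost
print(erlang_c(5, 3.0))   # probability that an arriving call must wait
```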
Erlang's paper "On the rational determination of number of circuits" deals with the calculation of the optimum
number of channels so as to reduce the probability of loss in the system.
The whole theory started with a congestion problem in tele-traffic. The applications of queueing theory have
since spread to many areas, including not only telecommunications but also traffic control, hospitals, the
military, call centres, supermarkets, computer science, engineering, management science and many other areas.
Important concepts in Queueing theory
Little's law
One of the cornerstones of queueing theory is Little's law:
$N = \lambda T$
This formula applies to any system in equilibrium (steady state),
where $\lambda$ is the arrival rate,
$T$ is the average time a customer spends in the system, and
$N$ is the average number of customers in the system.
Little's law can also be applied to the queue itself:
$N_q = \lambda T_q$
where $\lambda$ is the arrival rate,
$T_q$ is the average time a customer spends in the queue, and
$N_q$ is the average number of customers in the queue.
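A small worked example under assumed numbers: if customers arrive at $\lambda = 3$ per minute and spend on average T = 5 minutes in the system, Little's law gives N = 15 customers in the system on average.

```python
# A worked example of Little's law; lam, T and T_q are assumed numbers,
# not data from the paper's survey.
lam = 3.0         # arrival rate (customers per minute), assumed
T = 5.0           # average time in the system (minutes), assumed
print(lam * T)    # N = 15.0 customers in the system on average

T_q = 3.0         # average waiting time in the queue (minutes), assumed
print(lam * T_q)  # N_q = 9.0 customers in the queue on average
```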
II. CLASSIFICATION OF QUEUING SYSTEMS
Input process
If the occurrences of arrivals and the offer of service strictly followed some schedule, a queue could be
avoided. In practice this is not possible for all systems. Therefore the best way to describe the input process is
by using random variables, which we can define as "the number of arrivals during a time interval" or "the time
interval between successive arrivals".
If the arrivals are in groups or bulk, then we take the size of the group as the random variable. In most queueing
models our aim is to find the relevant probability distribution for the number of customers in the queue or the
number of customers in the system, as determined by the assumed random variable.
Service Process
Random variables are used to describe the service process, defined as the "service time" or the
"number of servers" when necessary. Sometimes service may also be in bulk; for instance, passengers boarding
a vehicle or students attending a lesson.
Number of servers
A queueing system may have a single server, like a hair-styling salon, or multiple servers, like a hospital. For a
multiple-server system, the service may be through a series of servers or through c parallel servers. Withdrawing
money in a bank is an example of the former, where each customer must take a token and then move to the
cashier counter. A railway reservation office with c independent single channels, which can serve customers
simultaneously, is an example of a parallel-server system.
System capacity
Sometimes there is finite waiting space for the customers who enter the queueing system. Such
queueing systems are referred to as finite queueing systems.
Queue discipline
This is the rule followed by the server in accepting customers for service. The rules are:
FCFS (First come first served).
LCFS (Last come first served).
Random selection (RS).
Priority will be given to some customers.
General discipline (GD).
Kendall’s notation
A notation describing all of the above characteristics of a queueing model was first suggested by David G.
Kendall in 1953.
The notation consists of symbols separated by slashes:
A/B/X/Y/Z
Where A indicates the distribution of inter arrival times
B denotes the distribution of the service times
X is the capacity of the system
Y denotes number of sources
Z refers to the service discipline
Examples of queueing systems that can be defined with this convention are M/M/1
M/D/n
G/G/n
Where M stands for Markov
D stands for deterministic
G stands for general
Definition- state dependent service: The situation in which service depends on the number of customers
waiting is referred to as state dependent service.
A survey was conducted in relation to the operations of Andhra bank, JNTUH, Hyderabad. The bank
provides service to various types of customers on a daily basis. The mean arrival rate of customers differs from
week to week: in the first week the bank had 6 customers/minute; in the second week, 4.662
customers/minute; and in the third and fourth weeks, 3.666 customers/minute and 3 customers/minute respectively.
Since the service is state-dependent, the average time taken by an employee to complete one service
differs between weeks. From observations and data from the management of the bank, the average time to
serve one customer is listed below:
For first week ----- 2 minutes/customer
For second week ------ 2.5 minutes/customer
For third week ------ 2.5 minutes/customer
For fourth week ----- 3 minutes/customer
In this paper we consider the application of the Gamma distribution, which involves the Gamma function.
Gamma function: The Gamma function was first introduced by Leonhard Euler. It is the solution of the
following interpolation problem: "Find a smooth curve that connects the points (x, y) given by y = (x-1)!" It is
easy to interpolate the factorial function at non-negative integer values; the Gamma function provides a
formula that describes the resulting curve. It is defined by means of an integral from calculus.
The Gamma function is defined, for all complex numbers z with positive real part (and by extension for all
complex numbers except the non-positive integers), as
$\Gamma(z) = \int_0^{\infty} t^{z-1} e^{-t}\,dt$,
with $\Gamma(z) = (z-1)!$ when z is a positive integer.
Gamma distribution: The Gamma distribution is a general type of statistical distribution with two parameters,
$r$ and $\lambda$, and density function
$f(t) = \frac{\lambda}{\Gamma(r)} (\lambda t)^{r-1} e^{-\lambda t}$ for $t > 0$,
$f(t) = 0$ otherwise.
The parameter $r > 0$ is called the shape parameter,
and the parameter $\lambda > 0$ is called the scale parameter.
Since arrivals to the bank are time-oriented occurrences, they follow the Poisson probability distribution.
And since arrival times in a Poisson process have a Gamma distribution, we can apply the gamma probability
distribution to the arrivals at Andhra Bank. Considering the "time to complete the service" as the random
variable, we calculate the probability of completing the service within time t for the different weeks.
Calculating the probabilities by using the probability density function of the gamma distribution:
Graph of the pdf: [figure not reproduced]
Let us take time t = 5 minutes; the probability of completing the service within 5 minutes for the different
weeks is tabulated below.
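As a sketch of that computation, assume an integer shape r (so the Gamma reduces to an Erlang distribution, whose CDF has a closed form) and choose the rate $\lambda = r/\text{mean}$ so that each week's mean service time matches the values listed above; r = 2 here is an assumption, not the paper's value.

```python
# Sketch of the tabulated computation under an assumed integer shape r;
# the rate lam = r / mean makes each week's mean service time match the
# survey data above. For integer r,
# P(T <= t) = 1 - e^(-lam t) * sum_{k=0}^{r-1} (lam t)^k / k!.
import math

def gamma_cdf_integer_shape(t, r, lam):
    # Erlang (integer-shape Gamma) CDF evaluated at t
    s = sum((lam * t)**k / math.factorial(k) for k in range(r))
    return 1 - math.exp(-lam * t) * s

t, r = 5.0, 2                          # t = 5 minutes; r = 2 is an assumption
means = {"week 1": 2.0, "week 2": 2.5, "week 3": 2.5, "week 4": 3.0}
for week, mean in means.items():
    lam = r / mean                     # rate chosen so that r / lam = mean
    print(week, round(gamma_cdf_integer_shape(t, r, lam), 4))
```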