In this paper, we study the effects of extreme observations on two estimators of the finite
population total, both theoretically and by simulation. We compare the ratio estimator with the
local linear polynomial estimator of the finite population total across different finite populations.
Both the classical estimator and the nonparametric estimator based on the local linear polynomial
produce good results when the auxiliary and study variables are highly correlated. In the presence
of outlying observations, however, the local linear polynomial estimator performs better with
respect to design mean square error (MSE) in all the artificial populations generated.
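As a minimal sketch of the setting above (not the authors' code), the following simulates a synthetic population with a known auxiliary total and computes the classical ratio estimator of the population total; the local linear polynomial competitor is omitted for brevity, and all data and parameters are invented for illustration.

```python
import random

random.seed(42)

# Artificial population: auxiliary variable x and study variable y, highly correlated
N = 1000
x = [random.uniform(1, 10) for _ in range(N)]
y = [2 * xi + random.gauss(0, 0.5) for xi in x]

true_total = sum(y)
total_x = sum(x)          # auxiliary total, assumed known

# Simple random sample without replacement
n = 100
idx = random.sample(range(N), n)
xbar = sum(x[i] for i in idx) / n
ybar = sum(y[i] for i in idx) / n

ratio_estimate = total_x * (ybar / xbar)   # classical ratio estimator of the total
expansion_estimate = N * ybar              # baseline expansion estimator, for comparison

print(ratio_estimate, expansion_estimate, true_total)
```

With a strong linear x-y relationship, the ratio estimate typically lands much closer to the true total than the plain expansion estimate.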
DIFFERENTIAL OPERATORS AND STABILITY ANALYSIS OF THE WAGE FUNCTION - IJESM JOURNAL
This document discusses the use of differential operators to solve a wage equation and analyze the stability of the resulting wage function. The wage equation incorporates speculative parameters that dictate expectations. Depending on the values of the parameters, the wage function can exhibit either stability, converging to an equilibrium wage rate exponentially, or instability, fluctuating periodically with decreasing amplitude. Differential operators provide a simple method for solving the wage equation compared to other techniques. The stability of the wage function solution is then analyzed and interpreted.
This document describes various statistical validation methods used to analyze finite sample data, including measures of central tendency, dispersion, skewness, correlation, and regression. It also discusses different types of statistical tests like the t-test, F-test, and ANOVA that are used to test hypotheses and determine statistical significance. The document provides examples and formulas for calculating various statistical measures and performing tests on sample data sets.
Joint Contributions of Martin B. Wilk and Ram Gnanadesikan - Sunitha Flowerhill
Prepared by Sunitha Flowerhill
For Big Data Optimization, Fall 2017
Probability, Quantiles, Percentiles, Statistics, Early Data Science, Gamma Distribution, Analytics, Plotting Methods
Cointegration of Interest Rate: The Case of Albania - rahulmonikasharma
This document discusses various statistical methods for analyzing the long-term relationship between non-stationary time series, known as cointegration. It applies these methods to analyze the long-term relationship between interest rates on credit and deposits in Albania. The two-step Engle-Granger procedure finds evidence of cointegration between the two interest rate series. The error correction model also supports long-term cointegration. Overall, the study finds the relationship between interest rates on credit and deposits in Albania is stable and sustainable in the long run.
APPLICATION OF VARIABLE FUZZY SETS IN THE ANALYSIS OF SYNTHETIC DISASTER DEGR... - ijfls
This paper proposes a fuzzy multi-criteria decision-making method for disaster risk analysis. It applies α-cuts and fuzzy arithmetic operations to rank the fuzzy numbers describing linguistic variables. The article uses variable fuzzy sets to study the disaster-affected area of Nagapattinam district under north-east monsoon rainfall. Relative and synthetic disaster degrees are obtained for the places under study.
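An α-cut slices a fuzzy number at a given membership level, yielding an ordinary interval; for the common triangular fuzzy number this interval has a closed form. The sketch below uses invented rainfall values purely for illustration:

```python
def alpha_cut_triangular(a, b, c, alpha):
    """Interval obtained by cutting a triangular fuzzy number (a, b, c)
    at membership level alpha in (0, 1]."""
    return (a + alpha * (b - a), c - alpha * (c - b))

# "Heavy rainfall" modelled as a triangular fuzzy number (hypothetical scale, mm)
low, peak, high = 100.0, 150.0, 200.0
print(alpha_cut_triangular(low, peak, high, 0.5))   # (125.0, 175.0)
print(alpha_cut_triangular(low, peak, high, 1.0))   # collapses to the peak
```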
The effectiveness of various analytical formulas for estimating R² shrinkage in multiple regression analysis was investigated. Two categories of formulas were identified: estimators of the squared population multiple correlation coefficient (ρ²) and estimators of the squared population cross-validity coefficient (ρ²c). The authors compared the effectiveness of the analytical formulas for determining R² shrinkage against the squared population multiple correlation coefficient and the number of predictors; after examining all combinations among the variables, the combination with the maximum correlation was selected for computing both categories of formulas. The results indicated that, among the 6 analytical formulas designed to estimate the population ρ², the Olkin & Pratt formula-1 for six variables, followed by the Burket formula and Lord formula-2 among the 9 analytical formulas, were found to be most stable and satisfactory.
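To make the notion of shrinkage concrete, here is the widely used Wherry/Ezekiel-type adjustment (one of the simpler formulas in this family, not necessarily one of the specific formulas the study ranked), with illustrative numbers:

```python
def wherry_adjusted_r2(r2, n, k):
    """Wherry/Ezekiel-type shrinkage correction: estimates the squared
    population multiple correlation from a sample R^2 based on a sample
    of size n with k predictors."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# Hypothetical example: sample R^2 = 0.50 with 30 cases and 6 predictors
shrunken = wherry_adjusted_r2(0.50, 30, 6)
print(shrunken)
```

The adjusted value is noticeably below 0.50, reflecting how optimistic the raw sample R² is when many predictors are fit to few cases.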
This document discusses key concepts in statistics. It defines statistics as the science of making decisions and drawing conclusions from data with uncertainty. It also defines key terms like population, sample, parameter, variable, observation, and data. The document outlines different types of variables, including quantitative and qualitative variables. It also describes different scales of measurement used in statistics, from nominal to ratio scales.
This document discusses different types of statistics used in data analysis:
Descriptive statistics aim to quantitatively summarize a data set and are used to give an overall sense of the data being analyzed. An example is providing characteristics like average age in medical research studies.
Inferential statistics are used to make inferences about an unknown population based on a sample. There are different schools of thought on justifying statistical inference based on probability models.
Regression analysis models the relationship between a dependent variable and one or more independent variables. It estimates how the dependent variable changes with the independent variables and is widely used for prediction. Regression models involve unknown parameters, independent variables, and a dependent variable related by a linear or nonlinear function.
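The simplest instance of the regression model described above is a straight line fit by ordinary least squares; a self-contained sketch with invented data:

```python
def fit_simple_regression(x, y):
    """Ordinary least-squares fit of y = b0 + b1*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b1 = sxy / sxx          # slope: change in y per unit change in x
    b0 = my - b1 * mx       # intercept
    return b0, b1

# Hypothetical data roughly following y = 2x
x = [1, 2, 3, 4, 5]
y = [2.1, 4.0, 6.2, 7.9, 10.1]
b0, b1 = fit_simple_regression(x, y)
print(b0, b1)
```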
ORDINARY LEAST SQUARES REGRESSION OF ORDERED CATEGORICAL DATA- IN... - Beth Larrabee
- The document describes a simulation study that evaluated the performance of ordinary least squares regression (OLSLR) for analyzing ordered categorical response (OCR) variables.
- Across different frequency distributions and numbers of categories for the OCR, the empirical type I error rate for OLSLR was close to the nominal 0.05 level.
- Empirical power for OLSLR increased as the number of categories in the OCR increased, but this trend slowed for OCRs with 5 or more categories. For most scenarios, OLSLR power was similar to probit regression power.
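The type I error finding above can be reproduced in miniature: under the null, an ordered categorical response generated independently of the predictor should yield an OLS slope test rejecting near the nominal 5% level. This is a hedged sketch with an invented design, not the study's actual simulation settings:

```python
import random, math

random.seed(1)

def ols_slope_t(x, y):
    """t-statistic for the OLS slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b1 = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    b0 = my - b1 * mx
    sse = sum((yi - b0 - b1 * xi) ** 2 for xi, yi in zip(x, y))
    return b1 / math.sqrt(sse / (n - 2) / sxx)

n, sims, rejections = 100, 2000, 0
for _ in range(sims):
    x = [random.random() for _ in range(n)]
    y = [random.choice([1, 2, 3, 4, 5]) for _ in range(n)]  # 5-category OCR, independent of x
    if abs(ols_slope_t(x, y)) > 1.96:   # normal approximation to the t reference
        rejections += 1

print(rejections / sims)   # empirical type I error rate, expected near 0.05
```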
APPLICATION OF THE METHOD OF VARIATION OF PARAMETERS: MATHEMATICAL MODEL FOR ... - IJESM JOURNAL
In this paper, a second order wage equation is developed and solved by the method of variation of parameters. The subsequent wage function is then analyzed and interpreted for stability. Speculative parameters, which operate freely dictating employers’ expectations, are included in modeling this equation. The variation of these parameters causes both stability and instability of the wage function depending on circumstances. Where the wage function is exponential, asymptotic stability towards the equilibrium wage rate is observed but where it consists of both exponential and periodic factors, the time path shows periodic fluctuations with successive cycles giving smaller amplitudes until the ripples die naturally. It has been realized that where the wage rate is determined by free market forces of demand and supply, volatility in wage rate may be observed if not controlled. This may increase uncertainties and cause anxiety about investment and employment in the economy. The paper therefore proposes government intervention by creating a middle path in which wage rate is allowed to oscillate freely within a narrow band managed by employers in consultation with the workers under the watch of the government.
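The paper's exact wage equation is not reproduced above; a generic constant-coefficient second-order model illustrates the stability dichotomy it describes (all symbols here are illustrative, not the paper's notation):

```latex
% Illustrative second-order wage adjustment model:
\[
  a\,w''(t) + b\,w'(t) + c\,w(t) = d, \qquad a, c > 0,
  \qquad w^{*} = \frac{d}{c},
\]
% with characteristic roots
\[
  r_{1,2} = \frac{-b \pm \sqrt{b^{2} - 4ac}}{2a}.
\]
% Real negative roots give monotone exponential convergence to the
% equilibrium wage rate w*; complex roots with negative real part
% (b > 0, b^2 < 4ac) give damped oscillations about w*, with
% successive cycles of shrinking amplitude.
```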
This book provides a comprehensive overview of modern statistical methods aimed at overcoming issues that arise when standard statistical assumptions like normality and equal variance are violated. It introduces robust techniques for estimating location, testing hypotheses, computing confidence intervals, comparing groups, detecting outliers, and linear regression. The book is intended to bridge the gap between current robust method developments and practical application, offering an intuitive understanding of why and how standard techniques can mislead and the advantages of modern robust alternatives. It assumes a basic understanding of statistical concepts and methods.
This document lists 50 publications from international journals related to applied mathematics and mechanics. The publications cover a range of topics including numerical solutions of differential equations, groundwater flow modeling, fluid flow through porous media, calcium diffusion modeling, and more. Many of the publications involve collaboration between researchers at Saurashtra University.
Statistics is the science of collecting, organizing, analyzing, and drawing conclusions from data. Biostatistics applies statistical methods to biological topics like public health, clinical trials, genetics, and ecology. Descriptive statistics summarizes and presents data, while inferential statistics allows generalizing from samples to populations through hypothesis testing, determining relationships among variables, and making predictions. Key concepts include data, variables, populations, samples, measurement scales, and sampling methods. Common graphs for presenting data include histograms, bar charts, line graphs, and pie charts.
4. Biostatistics graphical representation histogram and polygon - Sudhakar Khot
The data may be represented textually or graphically. The graphical presentation includes the arrangement of data in 2D or 3D figures. Bar graph, histogram, line graph, pie graph are some of the examples of graphical presentations.
This presentation explains the characteristics of Histogram
This document discusses the application of biostatistics in a case study. It summarizes key statistical concepts used in the paper such as descriptive statistics like arithmetic mean and standard deviation to describe features of the data. Inferential statistics like the Spearman rank correlation, t-test are used to test hypotheses and draw inferences about population parameters from sample statistics. These statistical analyses help evaluate the objectives of preliminary data analysis which are to edit, describe and summarize key features of the data.
The document discusses mixed models, which contain both fixed and random effects. Fixed effects have all possible levels included in the study, while random effects are a random sample from the total population. The mixed model is represented as Y = Xβ + Zγ + ε, where β are fixed effects, X are fixed effect variables, Z are random effects, γ are random effect parameters, and ε is the error term. Mixed models can model both fixed and random effects, account for correlation in errors, and handle missing data. They provide correct standard errors compared to general linear models (GLMs). Model fitting involves likelihood ratio tests and information criteria to select the best fitting model.
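The structure Y = Xβ + Zγ + ε can be made concrete by simulating from it; the sketch below uses an invented design with three groups sharing random intercepts (illustrative only, and model fitting is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

# Y = X @ beta + Z @ gamma + eps: 3 groups of 5 observations each
n_groups, per_group = 3, 5
n = n_groups * per_group

X = np.column_stack([np.ones(n), rng.uniform(0, 1, n)])   # fixed effects: intercept + slope
beta = np.array([1.0, 2.0])                               # fixed-effect coefficients

# Z maps each observation to its group's random intercept
Z = np.kron(np.eye(n_groups), np.ones((per_group, 1)))
gamma = rng.normal(0, 0.5, n_groups)                      # random group effects
eps = rng.normal(0, 0.1, n)                               # residual error

Y = X @ beta + Z @ gamma + eps
print(Y.shape)   # one response per observation
```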
This document provides an introduction to a course on statistical methods in nursing. It outlines the general objectives of understanding the nature and definition of statistics, its brief historical development, distinguishing samples from populations, types of variables, and the importance of statistics in research. It includes a pre-test to assess students' basic knowledge of statistical concepts before beginning the lessons.
Application of Weighted Least Squares Regression in Forecasting - paperpublications3
Abstract: This work models the loss of properties from fire outbreaks in Ogun State using simple weighted least squares regression. The study covers secondary data on fire outbreaks and the monetary value of property losses across the twenty (20) Local Government Areas of Ogun State for the year 2010. The data were analyzed electronically using SPSS 21.0. The results reveal a very strong, statistically significant positive relationship between the number of fire outbreaks and the loss of properties. Fire outbreaks exert a significant influence on property loss, accounting for approximately 91.2% of the loss of properties in the state.
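Weighted least squares solves the normal equations with a weight attached to each observation. The sketch below uses hypothetical outbreak/loss figures (not the actual Ogun State data) and an assumed variance-proportional-to-x weighting:

```python
import numpy as np

# Hypothetical data: fire outbreaks (x) and property loss (y, arbitrary units)
x = np.array([5.0, 8.0, 12.0, 20.0, 30.0])
y = np.array([11.0, 15.0, 26.0, 41.0, 58.0])
w = 1.0 / x          # example weights: error variance assumed proportional to x

X = np.column_stack([np.ones_like(x), x])
W = np.diag(w)
# Weighted least squares: beta = (X'WX)^{-1} X'Wy
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print(beta)          # [intercept, slope]
```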
5. Biostatistics central tendency mean, median, mode for ungrouped data - Sudhakar Khot
This document discusses measures of central tendency including the mean, median, and mode. It provides definitions and formulas for calculating each measure. The mean is the average value and is calculated by summing all values and dividing by the total number of values. The median is the middle value when data is arranged in order. The mode is the most frequently occurring value in a data set. An example calculating the mean, median, and mode for a data set of lemon weights is provided and solved step-by-step.
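Python's standard library computes all three measures directly; the lemon weights below are hypothetical stand-ins, since the slide's actual data set is not reproduced here:

```python
import statistics

# Hypothetical lemon weights in grams
weights = [60, 65, 65, 70, 72, 75, 80]

print(statistics.mean(weights))    # sum of values / number of values
print(statistics.median(weights))  # middle value of the sorted data
print(statistics.mode(weights))    # most frequently occurring value
```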
This document provides an overview of descriptive statistics as taught in a statistics course (STS 102) at Crescent University, Nigeria. It covers topics like statistical data collection methods, presentation of data through tables and graphs, measures of central tendency and dispersion. The key objectives of descriptive statistics are to summarize and describe characteristics of data through measures, charts and diagrams. Inferential statistics is also introduced as a way to make inferences about populations based on samples.
STATISTICAL TOOLS USED IN ANALYTICAL CHEMISTRY - keerthana151
This document provides an overview of statistical concepts related to analytical chemistry. It defines key terms like error, bias, accuracy, and precision. It discusses measures of central tendency, statistical process control charts, and various statistical tests. It provides examples of calculating Kjeldahl nitrogen and describes different types of control charts and statistical tests like t-tests, F-tests, linear regression, and analysis of variance. It lists several references for further information on statistics topics.
This document provides an introduction to biostatistics. It discusses how statistics are important for precision in science and medicine. Biostatistics involves applying statistical tools to biological data from fields like medicine. Some key applications of biostatistics include defining normal ranges, comparing treatment effectiveness, and identifying disease associations. The document also outlines common statistical terms, data sources and types, methods for presenting data, measures of central tendency and variability.
This document discusses the different meanings and definitions of statistics. It explains that statistics has three different meanings: (1) plural sense referring to numerical facts and figures collected systematically, (2) singular sense referring to the science of collecting, analyzing, and presenting numerical data, and (3) plural of the word "statistic" referring to numerical quantities calculated from samples. The document also provides several definitions of statistics from different authors, describing it as the science of collecting, organizing, and interpreting quantitative data.
Influence of porosity and electrokinetic effects on flow through microchannels - IJESM JOURNAL
The influence of electrostatic potential and porosity on flow through microchannels is analysed. By solving the Navier-Stokes equations, the effects of porosity and zeta potential on the flow are examined under various operating conditions.
SEMI-PARAMETRIC ESTIMATION OF Px,y({(x,y)/x >y}) FOR THE POWER FUNCTION DISTR... - IJESM JOURNAL
In the context of reliability, the stress-strength model describes the life of a component that has a random strength X and is subjected to a random stress Y. The component functions satisfactorily whenever X > Y, and it fails at the instant the applied stress exceeds the strength; R = P(Y < X) is a measure of component reliability. In this paper, we obtain semi-parametric estimators of reliability under the stress-strength model for the power function distribution, using both complete and censored samples. The performance of the estimators is illustrated through a simulation study.
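For the power function distribution with CDF F(x) = x^α on (0, 1), independent strength and stress give the closed form R = α₁/(α₁ + α₂), which a quick Monte Carlo check (a sketch with invented shape parameters, not the paper's estimators) reproduces:

```python
import random

random.seed(0)

def power_function_sample(alpha):
    """Inverse-CDF draw from the power function distribution on (0, 1):
    F(x) = x**alpha, so X = U**(1/alpha)."""
    return random.random() ** (1.0 / alpha)

a_strength, a_stress = 3.0, 1.5      # illustrative shape parameters
n = 200_000
hits = sum(power_function_sample(a_stress) < power_function_sample(a_strength)
           for _ in range(n))
r_hat = hits / n
r_exact = a_strength / (a_strength + a_stress)   # closed form for P(Y < X)
print(r_hat, r_exact)
```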
An automaton is a mathematical model of computation, an abstract model of a digital computer. Classical automata are formal models of computing with values. Fuzzy automata generalize classical automata to situations where knowledge about the system's next state is vague or uncertain. Like classical automata, fuzzy automata can only process strings of input symbols; they are therefore still (abstract) devices for computing with values, although a degree of vagueness or uncertainty is involved in the computation. In this paper, we observe that the fuzzy automaton architecture is isomorphic to the fuzzy graph model, and we use fuzzy graphs to describe three basic models of fuzzy automata.
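A common way to run a fuzzy automaton is max-min composition of fuzzy transition matrices; the two-state machine and all membership degrees below are invented for illustration:

```python
def run_fuzzy_automaton(transitions, start, word):
    """Max-min composition: membership degree of each state after reading
    the word, starting from the fuzzy state vector `start`.
    transitions[symbol][i][j] is the degree of moving i -> j on `symbol`."""
    n = len(start)
    degree = list(start)
    for symbol in word:
        t = transitions[symbol]
        degree = [max(min(degree[i], t[i][j]) for i in range(n))
                  for j in range(n)]
    return degree

# Two-state fuzzy automaton over the alphabet {a, b} (illustrative degrees)
transitions = {
    "a": [[0.9, 0.3],
          [0.2, 0.8]],
    "b": [[0.1, 0.7],
          [0.6, 0.4]],
}
print(run_fuzzy_automaton(transitions, [1.0, 0.0], "ab"))
```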
Comparative study of Terminating Newton Iterations: in Solving ODEs - IJESM JOURNAL
This document summarizes a study on the strategies used to terminate Newton iterations in solving ordinary differential equations (ODEs) numerically. It discusses two main strategies currently used in the Matlab code ode15s: 1) a relative displacement test that terminates iterations when the difference between successive iterates is small, and 2) a test that terminates iterations when the estimated convergence rate is high enough. Numerical experiments on test problems reveal that the convergence rate test is often more stringent, requiring more iterations to satisfy its condition.
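The relative displacement test mentioned above is easy to sketch on a scalar root-finding problem; this is an illustration of the termination criterion's style, not the ode15s implementation itself:

```python
def newton_displacement(f, fprime, x0, rtol=1e-10, max_iter=50):
    """Newton iteration terminated by a relative displacement test:
    stop once |x_new - x| <= rtol * |x_new|, i.e. successive iterates
    are close relative to the current iterate's size."""
    x = x0
    for k in range(1, max_iter + 1):
        x_new = x - f(x) / fprime(x)
        if abs(x_new - x) <= rtol * abs(x_new):
            return x_new, k
        x = x_new
    raise RuntimeError("Newton iteration did not converge")

# Example: root of f(x) = x^2 - 2, starting from x0 = 1
root, iters = newton_displacement(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
print(root, iters)
```

A convergence-rate test would instead monitor the ratio of successive displacements and demand that the extrapolated remaining error be small, which, as the study notes, can require more iterations.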
This document discusses different types of statistics used in data analysis:
Descriptive statistics aim to quantitatively summarize a data set and are used to give an overall sense of the data being analyzed. An example is providing characteristics like average age in medical research studies.
Inferential statistics are used to make inferences about an unknown population based on a sample. There are different schools of thought on justifying statistical inference based on probability models.
Regression analysis models the relationship between a dependent variable and one or more independent variables. It estimates how the dependent variable changes with the independent variables and is widely used for prediction. Regression models involve unknown parameters, independent variables, and a dependent variable related by a linear or nonlinear function.
ORDINARY LEAST SQUARES REGRESSION OF ORDERED CATEGORICAL DATA- INBeth Larrabee
- The document describes a simulation study that evaluated the performance of ordinary least squares regression (OLSLR) for analyzing ordered categorical response (OCR) variables.
- Across different frequency distributions and numbers of categories for the OCR, the empirical type I error rate for OLSLR was close to the nominal 0.05 level.
- Empirical power for OLSLR increased as the number of categories in the OCR increased, but this trend slowed for OCRs with 5 or more categories. For most scenarios, OLSLR power was similar to probit regression power.
APPLICATION OF THE METHOD OF VARIATION OF PARAMETERS: MATHEMATICAL MODEL FOR ...IJESM JOURNAL
In this paper, a second order wage equation is developed and solved by the method of variation of parameters. The subsequent wage function is then analyzed and interpreted for stability. Speculative parameters, which operate freely dictating employers’ expectations, are included in modeling this equation. The variation of these parameters causes both stability and instability of the wage function depending on circumstances. Where the wage function is exponential, asymptotic stability towards the equilibrium wage rate is observed but where it consists of both exponential and periodic factors, the time path shows periodic fluctuations with successive cycles giving smaller amplitudes until the ripples die naturally. It has been realized that where the wage rate is determined by free market forces of demand and supply, volatility in wage rate may be observed if not controlled. This may increase uncertainties and cause anxiety about investment and employment in the economy. The paper therefore proposes government intervention by creating a middle path in which wage rate is allowed to oscillate freely within a narrow band managed by employers in consultation with the workers under the watch of the government.
This book provides a comprehensive overview of modern statistical methods aimed at overcoming issues that arise when standard statistical assumptions like normality and equal variance are violated. It introduces robust techniques for estimating location, testing hypotheses, computing confidence intervals, comparing groups, detecting outliers, and linear regression. The book is intended to bridge the gap between current robust method developments and practical application, offering an intuitive understanding of why and how standard techniques can mislead and the advantages of modern robust alternatives. It assumes a basic understanding of statistical concepts and methods.
This document lists 50 publications from international journals related to applied mathematics and mechanics. The publications cover a range of topics including numerical solutions of differential equations, groundwater flow modeling, fluid flow through porous media, calcium diffusion modeling, and more. Many of the publications involve collaboration between researchers at Saurashtra University.
Statistics is the science of collecting, organizing, analyzing, and drawing conclusions from data. Biostatistics applies statistical methods to biological topics like public health, clinical trials, genetics, and ecology. Descriptive statistics summarizes and presents data, while inferential statistics allows generalizing from samples to populations through hypothesis testing, determining relationships among variables, and making predictions. Key concepts include data, variables, populations, samples, measurement scales, and sampling methods. Common graphs for presenting data include histograms, bar charts, line graphs, and pie charts.
4. Biostatistics graphical representation histogram and polygonSudhakar Khot
The data may be represented textually or graphically. The graphical presentation includes the arrangement of data in 2D or 3D figures. Bar graph, histogram, line graph, pie graph are some of the examples of graphical presentations.
This presentation explains the characteristics of Histogram
This document discusses the application of biostatistics in a case study. It summarizes key statistical concepts used in the paper such as descriptive statistics like arithmetic mean and standard deviation to describe features of the data. Inferential statistics like the Spearman rank correlation, t-test are used to test hypotheses and draw inferences about population parameters from sample statistics. These statistical analyses help evaluate the objectives of preliminary data analysis which are to edit, describe and summarize key features of the data.
The document discusses mixed models, which contain both fixed and random effects. Fixed effects have all possible levels included in the study, while random effects are a random sample from the total population. The mixed model is represented as Y = Xβ + Zγ + ε, where β are fixed effects, X are fixed effect variables, Z are random effects, γ are random effect parameters, and ε is the error term. Mixed models can model both fixed and random effects, account for correlation in errors, and handle missing data. They provide correct standard errors compared to general linear models (GLMs). Model fitting involves likelihood ratio tests and information criteria to select the best fitting model.
This document provides an introduction to a course on statistical methods in nursing. It outlines the general objectives of understanding the nature and definition of statistics, its brief historical development, distinguishing samples from populations, types of variables, and the importance of statistics in research. It includes a pre-test to assess students' basic knowledge of statistical concepts before beginning the lessons.
Application of Weighted Least Squares Regression in Forecastingpaperpublications3
Abstract: This work models the loss of properties from fire outbreak in Ogun State using Simple Weighted Least Square Regression. The study covers (secondary) data on fire outbreak and monetary value of properties loss across the twenty (20) Local Government Areas of Ogun state for the year 2010. Data collected were analyzed electronically using SPSS 21.0. Results from the analysis reveal that there is a very strong positive relationship between the number of fire outbreak and the loss of properties; this relationship is significant. Fire outbreak exerts significant influence on loss of properties and it accounts for approximately 91.2% of the loss of properties in the state.
5. Biostatistics central tendency mean, median, mode for ungrouped dataSudhakar Khot
This document discusses measures of central tendency including the mean, median, and mode. It provides definitions and formulas for calculating each measure. The mean is the average value and is calculated by summing all values and dividing by the total number of values. The median is the middle value when data is arranged in order. The mode is the most frequently occurring value in a data set. An example calculating the mean, median, and mode for a data set of lemon weights is provided and solved step-by-step.
This document provides an overview of descriptive statistics as taught in a statistics course (STS 102) at Crescent University, Nigeria. It covers topics like statistical data collection methods, presentation of data through tables and graphs, measures of central tendency and dispersion. The key objectives of descriptive statistics are to summarize and describe characteristics of data through measures, charts and diagrams. Inferential statistics is also introduced as a way to make inferences about populations based on samples.
STATISTICAL TOOLS USED IN ANALYTICAL CHEMISTRYkeerthana151
This document provides an overview of statistical concepts related to analytical chemistry. It defines key terms like error, bias, accuracy, and precision. It discusses measures of central tendency, statistical process control charts, and various statistical tests. It provides examples of calculating Kjeldahl nitrogen and describes different types of control charts and statistical tests like t-tests, F-tests, linear regression, and analysis of variance. It lists several references for further information on statistics topics.
This document provides an introduction to biostatistics. It discusses how statistics are important for precision in science and medicine. Biostatistics involves applying statistical tools to biological data from fields like medicine. Some key applications of biostatistics include defining normal ranges, comparing treatment effectiveness, and identifying disease associations. The document also outlines common statistical terms, data sources and types, methods for presenting data, measures of central tendency and variability.
This document discusses the different meanings and definitions of statistics. It explains that statistics has three different meanings: (1) plural sense referring to numerical facts and figures collected systematically, (2) singular sense referring to the science of collecting, analyzing, and presenting numerical data, and (3) plural of the word "statistic" referring to numerical quantities calculated from samples. The document also provides several definitions of statistics from different authors, describing it as the science of collecting, organizing, and interpreting quantitative data.
Influence of porosity and electrokinetic effects on flow through microchannelsIJESM JOURNAL
Influence of electrostatic potential and porosity on flow through microchannels is analysed. Solving Navier Stokes equations effects of porosity and zeta potential on flow is analysed under various operating conditions.
SEMI-PARAMETRIC ESTIMATION OF Px,y({(x,y)/x >y}) FOR THE POWER FUNCTION DISTR...IJESM JOURNAL
The stress-strength model describes the life of a component which has a random strength X and is subjected to random stress Y, in the context of reliability. The component will function satisfactorily whenever X>Y and it fails at the instant the stress applied to it exceeds the strength. R=P(Y<X) is a measure of component reliability .In this paper, we obtain semi parametric estimators of the reliability under stress- strength model for the Power function distribution under complete and censored samples. We illustrate the performance of the estimators using a simulation study.
An automaton is a mathematical model of computing; it is an abstract model of a digital computer. Classical automata are formal models of computing with values. Fuzzy automata are generalizations of classical automata in which the knowledge about the system's next state is vague or uncertain. It is worth noting that, like classical automata, fuzzy automata can only process strings of input symbols. Therefore, such fuzzy automata are still (abstract) devices for computing with values, although a certain vagueness or uncertainty is involved in the process of computing the value. In this paper, we observe that the fuzzy automaton architecture is isomorphic to the fuzzy graph model, and using fuzzy graphs we describe the three basic models of fuzzy automata.
Comparative Study of Terminating Newton Iterations in Solving ODEs (IJESM JOURNAL)
This document summarizes a study on the strategies used to terminate Newton iterations in solving ordinary differential equations (ODEs) numerically. It discusses two main strategies currently used in the Matlab code ode15s: 1) a relative displacement test that terminates iterations when the difference between successive iterates is small, and 2) a test that terminates iterations when the estimated convergence rate is high enough. Numerical experiments on test problems reveal that the convergence rate test is often more stringent, requiring more iterations to satisfy its condition.
Finite Element Approach to the Solution of Fourth Order Beam Equation (IJESM JOURNAL)
The finite element method is a class of mathematical tools which approximates solutions to initial and boundary value problems. Using finite elements, basis functions and stiffness matrices, the partial differential equation is rendered into a system of ordinary differential equations, from which approximate solutions of the partial differential equation are obtained. The ordinary differential equations are then numerically integrated.
We present a finite element approach to solving the fourth order linear beam equation u_tt + c^2 u_xxxx = f(x, t), which arises in model studies of building structures in wave theory. In physical applications of waves in building structures, the coefficient c^2 has the meaning of flexural rigidity per linear mass density, and f(x, t) is the external forcing term. In this paper, we give a solution to the beam equation with c^2 = 139 and f(x, t) = 100.
Multiple Linear Regression Model with Two Parameter Doubly Truncated New Symmetric Distribution (theijes)
This document presents a multiple linear regression model with errors that follow a two-parameter doubly truncated new symmetric distribution. The model extends traditional linear regression by allowing for non-Gaussian distributed errors that are finite and bounded. Properties of the doubly truncated new symmetric distribution are derived, including its probability density function and characteristic function. Maximum likelihood and ordinary least squares methods are used to estimate the model parameters. A simulation study compares the proposed model to existing models that assume Gaussian or new symmetric distributed errors.
- Analysis of variance (ANOVA) can be used to test if there are significant differences between the means of three or more populations. It tests the null hypothesis that all population means are equal.
- Key terms in ANOVA include response variable, factor, treatment, and level. A factor is the independent variable whose levels make up the treatments being compared.
- ANOVA partitions total variation in data into variations due to treatments and random error. If the treatment variation is large compared to error variation, the null hypothesis of equal means is rejected.
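The partition described in the points above can be checked numerically. The sketch below uses three small made-up samples (not data from any of the documents summarized here); it splits the total sum of squares into between-treatment and within-treatment (error) parts and forms the F statistic from their mean squares.

```python
# Hand-rolled one-way ANOVA on three illustrative (made-up) samples,
# showing how total variation splits into treatment + error parts.
groups = [
    [23, 25, 21, 24],   # treatment A
    [30, 28, 31, 29],   # treatment B
    [22, 20, 24, 23],   # treatment C
]

all_obs = [x for g in groups for x in g]
grand_mean = sum(all_obs) / len(all_obs)

# Between-treatments sum of squares (SSB)
ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
# Within-treatments (error) sum of squares (SSW)
ssw = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
# Total sum of squares; ANOVA's partition says SST = SSB + SSW
sst = sum((x - grand_mean) ** 2 for x in all_obs)

df_between = len(groups) - 1
df_within = len(all_obs) - len(groups)
f_stat = (ssb / df_between) / (ssw / df_within)

print(round(ssb, 2), round(ssw, 2), round(f_stat, 2))  # 123.5 22.5 24.7
```

A large F (here the treatment mean square is many times the error mean square) is the evidence used to reject the null hypothesis of equal means.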
This document provides an introduction to statistics and biostatistics in healthcare. It defines statistics and biostatistics, outlines the basic steps of statistical work, and describes different types of variables and methods for collecting data. The document also discusses different types of descriptive and inferential statistics, including measures of central tendency, dispersion, frequency, t-tests, ANOVA, regression, and different types of plots/graphs. It explains how statistics is used in healthcare for areas like disease burden assessment, intervention effectiveness, cost considerations, evaluation frameworks, health care utilization, resource allocation, needs assessment, quality improvement, and product development.
Science involves researching and trying to explain the natural world through direct observation or using tools. The scientific method includes stating the problem, researching existing knowledge, developing a hypothesis, designing experiments to test the hypothesis, collecting and analyzing data, organizing the results, and sharing findings with others. A hypothesis or theory must be testable and able to be modified as new evidence is found. While science seeks to understand nature, it does not determine how its findings should be applied - that role belongs to society.
Marginal Regression for a Bivariate Response with a Diabetes Mellitus Study (theijes)
In this paper, we develop a bivariate response model for a diabetes mellitus study. Diabetes mellitus affects a large number of people of all social conditions throughout the world, and continues to grow despite advances in the past few years in virtually every field of diabetes research and in patient care for improved treatment. It is sometimes accompanied by symptoms of severe thirst, profuse urination, weight loss and stupor. We tested the model in SPSS software on 200 samples, using logistic regression to estimate the relationship between the response probability and whether a diabetic patient had B.P. or not.
This document discusses a study that compares the performance of different statistical learning methods (LASSO, classification trees, random forests, support vector machines) across different sample sizes using a dataset on high school dropout rates. The study finds that the statistical learning methods generally perform better than traditional regression at predicting dropout across different sample sizes. It also finds that the prediction models and errors produced differ between learning methods and across sample sizes for each method. The document outlines how each statistical learning method works in order to help researchers apply these advanced techniques to their own work.
This document defines common terminology used in biostatistics, including:
- Statistics which involves collecting, analyzing, and interpreting numerical data to draw conclusions. Biostatistics applies statistical methods to biological and health data.
- Attributes, variables, data, descriptive statistics, and inferential statistics which are all key concepts in statistical analysis. Attributes describe characteristics, variables can vary, data are observations, and descriptive and inferential statistics analyze and draw conclusions from data.
- Other terms defined are parameters, population, sample, discrete/continuous variables, dependent/independent variables which describe characteristics of data, and confounding variables which can interfere with relationships between variables.
Data Analysis (Dat.docx, healdkathaleen)
Data Analysis
Tammie Witcher
Columbia Southern University
Data Analysis: Descriptive Statistics and Assumption Testing
Details of how data was collected and analyzed are presented here. The research that led to the achievement of Sun Coast objectives was done using quantitative research methods, since they offer detailed insights pertaining to the study. Research design is the specific type of study that one conducts and is usually consistent with one's philosophical worldview and the methodological approach the researcher chooses.
Correlation: Descriptive Statistics and Assumption Testing
Frequency distribution table
Histogram.
Descriptive statistics table.
Measurement scale. Causal-comparative research methods were used, sometimes combined with descriptive statistics (Creswell & Creswell, 2018). The former were used to find the relationship between dependent and independent variables after the occurrence of any action in Sun Coast.
Measure of central tendency. The measure of central tendency majored on the mode even though both mean and median were employed for the frequency table to justify various aspects tested in the research.
Evaluation. Sun Coast’s leadership and other business objectives could render descriptive statistics significant since the researchers could use the past figures to analyze the current ones and make a sound forecast of future organizational performance.
Simple Regression: Descriptive Statistics and Assumption Testing
Frequency distribution table.
Histogram.
Descriptive statistics table.
Measurement scale. Regression analysis procedure would be appropriate for RQ3 since the variable, DB levels of work would be predicted before placing employees on-site for future contracts. There is no independent sample among those provided by this RQ.
Measure of central tendency. The measure of central tendency majored on the mode even though both mean and median were employed for the frequency table to justify various aspects tested in the research.
Evaluation. DB levels of work would be predicted before placing employees on-site for future contracts. There is no independent sample among those provided by this RQ.
Multiple Regression: Descriptive Statistics and Assumption Testing
Frequency distribution table.
Histogram.
Descriptive statistics table.
Measurement scale. The measurement for this case applied the regression procedure to test different hypotheses, since the interest is whether a relationship exists between an independent variable (IV) and a dependent variable (DV). Correlation will indicate if there is a relationship between PM size (IV) and employee health (DV), and the magnitude of that impact if there is one.
Measure of central tendency. The measure of central tendency majored on the mode even though both mean and median were also used.
Evaluation. The outcome involved dividing populations in Sun Coa ...
ON THE CUBIC EQUATION WITH FIVE UNKNOWNS (IJESM JOURNAL)
The non-homogeneous cubic equation with five unknowns, x^3 + y^3 + z^3 + w^3 = 12t^2, is analyzed for its patterns of non-zero integral solutions. Six different patterns of non-zero distinct integer solutions are obtained. A few interesting properties relating the solutions to special number patterns, namely Polygonal numbers, Centered Polygonal numbers, Pyramidal numbers, Stella Octangular numbers, Star numbers and the Pentatope number, are exhibited.
This document discusses how to estimate multilevel models using SPSS, Stata, SAS, and R. It begins by defining key terminology used in multilevel modeling, distinguishing between fixed and random effects. It then compares model building notation commonly used in social science applications to the matrix notation found in software documentation. The document aims to clarify these concepts and demonstrate the syntax for estimating multilevel models and centering variables in each software package.
The document provides information about statistics and related concepts:
1. It defines statistics and discusses its importance in various fields like agriculture, economics, and administration.
2. It outlines the characteristics of a satisfactory average and describes various measures of central tendency including arithmetic mean, median, and mode.
3. It discusses the steps involved in constructing a frequency distribution table from raw data for both grouped and ungrouped data.
This document discusses key concepts in sampling and sample size determination. It defines population, parameter, sample, and statistic. A target population refers to the entire group a researcher wishes to generalize to, while an accessible population is the specific study population. Parameter describes a numeric characteristic of the entire population, while statistic describes a numeric characteristic of a sample. The document also outlines factors that influence sample size determination, such as population homogeneity and desired precision. It provides examples of sample size calculations using Slovin's formula and Calmorin's formula.
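Slovin's formula mentioned above is simple enough to sketch directly. In the snippet below the population size and margin of error are illustrative values, not figures from the document; the formula is n = N / (1 + N * e^2), rounded up to a whole respondent.

```python
import math

def slovin(N, e):
    """Slovin's formula for sample size: n = N / (1 + N * e**2)."""
    return math.ceil(N / (1 + N * e * e))

# Hypothetical population of 1,000 with a 5% margin of error
print(slovin(1000, 0.05))  # 286
```

Note that the formula assumes a 50/50 population proportion and simple random sampling; Calmorin's formula, also mentioned above, incorporates additional terms and is not shown here.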
IDENTIFICATION OF OUTLIERS IN OXAZOLINES AND OXAZOLES HIGH DIMENSION MOLECULAR DESCRIPTOR DATA (IJDKP)
This document summarizes an algorithm called Principal Component Outlier Detection (PrCmpOut) for identifying outliers in high-dimensional molecular descriptor datasets. PrCmpOut uses principal component analysis to transform the data into a lower-dimensional space, where it can more efficiently detect outliers using robust estimators of location and covariance. The properties of PrCmpOut are analyzed and compared to other robust outlier detection methods through simulation studies using a dataset of oxazoline and oxazole molecular descriptors. Numerical results show PrCmpOut performs well at outlier detection in high-dimensional data.
Forecasting the commercial Chisawasawa (Lethrinops spp.) fishery in Lake Malaŵi (IJESM JOURNAL)
This document summarizes a study that developed time series models to forecast commercial catches of Chisawasawa fish in Lake Malawi. Researchers used data from 1976 to 2010 to identify the best ARIMA model for forecasting annual commercial catches. The ARIMA (1,1) model was selected as it had the lowest normalized Bayesian information criterion value. Forecasts predicted that commercial catches would increase to 1788 tons by 2020, though the confidence intervals included zero, indicating the fishery may have collapsed. The study recommends urgent attention for this fishery and developing similar models for other fisheries in Lake Malawi.
On Pairs of M-Gonal Numbers with Unit Difference (IJESM JOURNAL)
We obtain the ranks of m-gonal numbers such that the difference between two m-gonal numbers is unity. The recurrence relations satisfied by the ranks of each m-gonal number are also presented.
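The paper's own method is not reproduced in the abstract, but the idea of m-gonal numbers with unit difference can be illustrated by a brute-force search using the standard polygonal-number formula P(m, n) = ((m - 2)n^2 - (m - 4)n)/2. The function names and the choice of comparing two polygonal families are assumptions for illustration, not the paper's construction.

```python
def polygonal(m, n):
    """n-th m-gonal number via the standard formula."""
    return ((m - 2) * n * n - (m - 4) * n) // 2

def unit_diff_ranks(m1, m2, limit):
    """Ranks (a, b) with P(m1, a) - P(m2, b) = 1, searched by brute force."""
    table = {polygonal(m2, b): b for b in range(1, limit)}
    return [(a, table[polygonal(m1, a) - 1])
            for a in range(1, limit)
            if polygonal(m1, a) - 1 in table]

# e.g. the triangular number T(4) = 10 and the square 3^2 = 9 differ by one
print((4, 3) in unit_diff_ranks(3, 4, 200))  # True
```

The recurrence relations for such ranks, which the paper derives, would generate these pairs without any search.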
ON HOMOGENEOUS TERNARY QUADRATIC DIOPHANTINE EQUATION (IJESM JOURNAL)
The ternary quadratic homogeneous equation representing a homogeneous cone, given by x^2 + y^2 + 3xy = 16z^2, is analyzed for its non-zero distinct integer points. Three different patterns of integer points satisfying the cone under consideration are obtained. A few interesting relations between the solutions and special number patterns, namely Polygonal numbers, Pyramidal numbers, Centered Polygonal numbers, Centered Pyramidal numbers, Pronic numbers and Star numbers, are presented.
HOMOGENEOUS BI-QUADRATIC WITH FIVE UNKNOWNS (IJESM JOURNAL)
We obtain non-trivial integral solutions of the homogeneous bi-quadratic equation with five unknowns x^4 - y^4 = 26(z^2 - w^2)T^2. A few interesting relations among the solutions are presented for each pattern.
STUDY OF RECESSIONAL PATTERN OF GLACIERS OF DHAULIGANGA VALLEY, UTTARAKHAND, INDIA (IJESM JOURNAL)
The glacier study is important in the sense that it has a very direct relation with climate change. Any change in the climate can be read through glacier response. A number of Himalayan glaciers are reported to be shrinking. The retreat of Chipa glacier and Jhulang glacier in Dhauliganga valley was studied by interpreting time series optical satellite images obtained from Landsat, ASTER and IRS LISS III sensors. The change of terminus position was measured and retreat was monitored with respect to the terminus position in a topographical map (1:50,000) of 1962. The satellite data of 1989 and from 2000 to 2004 and 2012 were used for monitoring the retreat of the glacier and the result was compared with the field measured values.
The purpose of this paper is to evaluate the contributions of Total Productive Maintenance (TPM) initiatives in reducing equipment breakdowns, increasing equipment reliability and improving productivity. This results in increased equipment utilization and life, reduced work stoppages and machine slowdowns, closer adherence to production and delivery schedules, as well as increased employee morale. The Total Productive Maintenance (TPM) concept addresses these goals. The aim of TPM is to keep the plant and equipment at their highest productive level through the cooperation of all areas of the organization. TPM is a partnership between the maintenance and production organizations to improve product quality, reduce waste, reduce manufacturing cost and increase equipment availability.
Vocal Translation For Muteness People Using Speech Synthesizer (IJESM JOURNAL)
The research performed has enabled a mute person to speak without surgery. An electrode placed on the neck picks up the vibration from the person's babbling voice, and a special speech synthesizer is implemented to produce vowels for him. It is possible for the disabled person to produce vowels by thinking of them, using the speech synthesizer. In the future, this breakthrough may help erase the term "speech disability".
Green supply chain management in Indian Electronics & Telecommunication Industry (IJESM JOURNAL)
The study investigates the Green Supply Chain Management practices adopted by the Electronics & Telecommunication Industry in India. The study focuses on the impact of environmental collaboration in the supply chain on manufacturing and environmental performance. This paper uses inductive and qualitative approaches to explore the salient factors that simultaneously enhance the "greening" of the supply chain and maximize customer reach while maintaining the efficiency of the supply chain system of the Electronics & Telecommunication Industry. A survey was conducted with key informants across many divisions of the Electronics & Telecommunication Industry to investigate how well environmental performance and customer reach in the supply chain are synchronized with top management's commitment to environmental responsiveness and maximizing customer orientation. The responses to the survey were statistically analyzed and a relationship model was constructed with market orientation as the dependent variable and, as independent variables, environmental policies, supplier policies, commitment to human capital and diversity, and sustainability. The paper proposes to measure the performance of the corporation with respect to greening the supply chain, maximizing the reach of consumers and operational efficiency, with a view to re-engineering the existing supply chain. The key indicators identified were environmental policies, supplier policies, sustainability, market orientation and commitment to human capital and diversity.
BER Performance Analysis of Open and Closed Loop Power Control in LTE (IJESM JOURNAL)
The power control (PC) policy in a Long Term Evolution (LTE) network is an important issue; the interference a cell user causes to neighbouring cells must be considered to avoid disturbing nearby cells. In this paper, two uplink PC schemes, closed loop power control (CLPC) and open loop power control (OLPC), are modelled in order to investigate the effect of a mobile at the cell edge on another cell, and to show how to adjust the user power according to two path losses. The algorithms were simulated in MATLAB. The open loop technique assumes that the strongest interference is caused by a mobile to a neighbouring cell, while in the closed loop technique the power control components are adjusted continuously. The effects of CLPC and OLPC are shown in terms of throughput, path loss, power spectral density (PSD) and bit error rate (BER). Results show that CLPC outperforms OLPC in terms of throughput, PSD and path loss, while they perform similarly in terms of BER.
Rainfall Trends and Variability in Tamil Nadu (1983 - 2012): Indicator of Climate Change (IJESM JOURNAL)
The rainfall trend of the past is essential for understanding the climate variability of a region, and it is a very significant research topic in developing countries. Rainfall variability is also an obligatory factor for the climate of semi-arid and tropical regions. The number of rainy days and the rainfall intensity are vital features for comprehending the climate vulnerability of a region. To determine the nature of climate variability, this paper deals with the rainfall trends of Tamil Nadu over the past 30 years, investigated using spatial, temporal and statistical techniques. Previous works have also revealed rainfall variability across the world. The results show the spatial and temporal variability across Tamil Nadu and the climate change projection in the study area.
The Political, Legal & Technological Environment in Global Scenario (IJESM JOURNAL)
The environment that international managers face is changing rapidly. The past is proving to be a poor indicator of what will happen in the future. Changes are not only more common now but also more significant than ever before, and these dramatic forces of change are creating new challenges. Although there are many dimensions in this new environment, most relevant to international management would be the economic environment that was covered in the research and the cultural environment. Also important are the political, legal and regulatory, and technological dimensions of the environment. The objective of this research is to examine how the political, legal and regulatory, and technological environments have changed in recent years. Some major trends in each that will help dictate the world in which international managers will compete also are presented.
SUPPLIER SELECTION AND EVALUATION – AN INTEGRATED APPROACH OF QFD & AHP (IJESM JOURNAL)
In the current scenario, strong competitive pressure forces many organizations to make their products and services cheaper, faster and better than their rivals' for their valued customers. Managers have come to comprehend that they cannot do this alone without suitable vendors. Supply chain management enables the flows of material, information and funds in an association consisting of customers, suppliers, manufacturers and distributors, which begins with raw materials, continues through internal operations and completes with the distribution of finished goods. In a continually changing world, selection of an appropriate vendor is a key task in supply chain management, and selection of the right vendor is an extremely useful function of the purchasing department. This paper proposes a methodology that integrates the Analytical Hierarchy Process (AHP) for supplier selection and evaluation with Quality Function Deployment (QFD) analysis to enhance the effectiveness of outsourcing decisions. A selection that combines subjective factors, objective factors and the attitude of the decision maker determines the best supplier in the supply chain management system. The proposed integrated model can be used for supplier selection involving several quantitative and qualitative factors, and also for determining the optimum order quantity. The proposed method is a group decision making approach which follows the traditional approaches of supplier selection.
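As a rough illustration of the AHP side of such an integrated approach (not the paper's actual model), the sketch below derives priority weights from a made-up pairwise comparison matrix using the common geometric-mean approximation to Saaty's eigenvector method; the comparison values are purely illustrative.

```python
import math

# A[i][j] = how strongly supplier i is preferred over supplier j
# (Saaty 1-9 scale); the matrix is reciprocal: A[j][i] = 1 / A[i][j].
A = [
    [1,     3,   5],
    [1 / 3, 1,   2],
    [1 / 5, 1 / 2, 1],
]

n = len(A)
# Geometric mean of each row, then normalize to get the priority vector
gm = [math.prod(row) ** (1 / n) for row in A]
weights = [g / sum(gm) for g in gm]

print([round(w, 3) for w in weights])  # highest weight -> preferred supplier
```

In the integrated QFD-AHP scheme the paper describes, such weights would feed the ranking of suppliers against both quantitative and qualitative criteria.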
FURTHER CHARACTERIZATIONS AND SOME APPLICATIONS OF UPPER AND LOWER WEAKLY QUASI CONTINUOUS FUZZY MULTIFUNCTIONS (IJESM JOURNAL)
In this paper we characterize upper and lower weakly quasi continuous fuzzy multifunctions [3] by a new type of convergence of a net in a topological space, and also characterize lower weakly quasi continuous fuzzy multifunctions by a newly defined convergence of a fuzzy net. Further, a new concept of regularity in a topological space is introduced and characterized, and using this regularity several applications of upper weakly quasi continuous fuzzy multifunctions are shown.
INVARIANCE OF SEPARATION AXIOMS IN ISOTONIC SPACES WITH RESPECT TO PERFECT MAPPINGS (IJESM JOURNAL)
The behavior of separation axioms under perfect mappings has been studied in the realm of topological spaces. In this paper, we extend the characterization of perfect mappings to isotonic spaces and then use this class of continuous functions to investigate the behavior of separation axioms. The hierarchy of separation axioms that is familiar from topological spaces generalizes to spaces with an isotone and expansive closure function. Neither additivity nor idempotence of the closure function needs to be assumed.
MATHEMATICAL MODELLING OF POWER OUTPUT IN A WIND ENERGY CONVERSION SYSTEM (IJESM JOURNAL)
The development of wind energy systems has enabled efficient production and use of wind energy. Three widely used control schemes for wind energy systems are pitch control, rotor resistance control and vector control. A traditional wind energy system consists of a stall regulated or pitch controlled turbine connected to a synchronous generator through a gearbox. The synchronous generator operates at fixed speed, and one of the earliest rotor control schemes was rotor resistance control, in which the speed of an induction machine is controlled by the external resistance in the rotor circuit. The drawback of the above methods is the inability of the wind turbine to capture energy at low wind speeds. This paper develops a model which maximizes wind energy output. The model assesses the effects of the friction coefficient and the height of wind above ground (given by the height of the turbine from the ground, i.e. the hub height) on power output. The study considers an already existing model, that is, the turbine model, into which the study incorporates height and friction coefficient. The three control methods used in wind energy systems are compared for a change in the input wind velocity and a constant desired power output reference, and evaluated based on the response time and the magnitude of change in the power output relative to the desired power output, and also compared by simulation. The results of this study may be useful in aiding the efficient production of electricity in a wind energy conversion system.
INTRODUCING AN INTEGRATING FACTOR IN STUDYING THE WAGE EQUATION (IJESM JOURNAL)
In this paper a first order wage equation is solved by the method of integrating factor. The resulting wage function is then analyzed and interpreted for stability. The function may initially stand off the equilibrium wage rate, but in the long run it asymptotically stabilizes in an intertemporal sense. It is observed that use of an integrating factor in solving the wage equation is just as effective as the Laplace transforms demonstrated in [6], but with the advantage of being simple, with limited algebra.
ANALYTICAL STUDY OF WAVE MOTION OF A FALLING LIQUID FILM PAST A VERTICAL PLATE (IJESM JOURNAL)
In this article, the existence of gravity-capillary waves travelling down the surface of a falling liquid film past a vertical plate is considered. Kapitza's scheme of finding an approximate expression for the velocity u when the film surface assumes an arbitrary shape y = h(x, t), which changes with time, is emphasized. The expressions for dimensionless wavelength, dimensionless wave number and Weber number are obtained and computed for an admissible range of the wave celerity. The streamline pattern is also studied and presented through graphs.
Ab Initio Study of Pressure Induced Structural, Magnetic and Electronic Properties of Plutonium Pnictides (IJESM JOURNAL)
We have investigated the pressure induced structural and electronic properties of the plutonium pnictides PuY (Y = P, As, Sb). The total energy as a function of volume is obtained by means of the self-consistent tight binding linear muffin-tin orbital (TB-LMTO) method within the local spin density approximation (LSDA). From the present study, with the help of (spin polarized) total energy calculations, it is found that PuP, PuAs and PuSb are stable in the NaCl-type structure under ambient pressure. The structural stability of PuP, PuAs and PuSb changes under the application of pressure. We predict a structural phase transition from the NaCl-type (B1 phase) to the CsCl-type (B2 phase) structure for these Pu pnictides in the pressure range of 20.8 - 42.0 GPa. We also calculate the lattice parameter, bulk modulus, band structure and density of states. From the energy band diagram it is observed that all three compounds exhibit metallic behaviour. The calculated equilibrium lattice parameters and bulk moduli are in good agreement with available experimental data.
SURVEY OF ENERGY EFFICIENT HIGH PERFORMANCE LOW POWER ROUTER FOR NETWORK ON CHIP (IJESM JOURNAL)
This document summarizes research on efficient router designs for networks-on-chip (NOCs). It surveys several router architectures that aim to improve performance and energy efficiency, including a flexible router that handles requests to busy buffers, a hybrid two-layer router that supports both packet-switched and circuit-switched communications, and a clockless router called MANGO that provides guaranteed services. Fault tolerance is also addressed through simulation-based analysis of transient faults in NOC routers. The document surveys these efficient router designs for high-performance NOCs while balancing the tradeoff between area, power, and performance.
In this paper we present the various elementary traversal approaches for mining frequent patterns to find association rules. We start with a formal definition of association rules and the basic algorithm. We then discuss association rule mining algorithms from several perspectives, such as the breadth first approach, the depth first approach and the hybrid approach. Frequent pattern mining has been a focused theme in data mining research for over a decade. Abundant literature has been dedicated to this research and tremendous progress has been made, ranging from efficient and scalable algorithms for frequent itemset mining in transaction databases to numerous research frontiers, such as sequential pattern mining, structured pattern mining, correlation mining, associative classification, and frequent pattern-based clustering.
Synthesis and Biological Activity Study of Some Heterocycles Derived from Dibenzalacetone (IJESM JOURNAL)
Condensation of benzaldehyde with acetone in ethanolic alkaline solution leads to the formation of dibenzalacetone, which, when treated with selected nucleophiles, undergoes Michael addition to give a variety of heterocyclic compounds. These compounds have been characterized by physical and spectral methods, and they have also been screened for their antibacterial activities. Key words: Benzaldehyde, Dibenzalacetone, Heterocyclic compounds, Antibacterial activity.
TIME DIVISION MULTIPLEXING TECHNIQUE FOR COMMUNICATION SYSTEM (HODECEDSIET)
Time Division Multiplexing (TDM) is a method of transmitting multiple signals over a single communication channel by dividing the signal into many segments, each having a very short duration of time. These time slots are then allocated to different data streams, allowing multiple signals to share the same transmission medium efficiently. TDM is widely used in telecommunications and data communication systems.
### How TDM Works
1. **Time Slots Allocation**: The core principle of TDM is to assign distinct time slots to each signal. During each time slot, the respective signal is transmitted, and then the process repeats cyclically. For example, if there are four signals to be transmitted, the TDM cycle will divide time into four slots, each assigned to one signal.
2. **Synchronization**: Synchronization is crucial in TDM systems to ensure that the signals are correctly aligned with their respective time slots. Both the transmitter and receiver must be synchronized to avoid any overlap or loss of data. This synchronization is typically maintained by a clock signal that ensures time slots are accurately aligned.
3. **Frame Structure**: TDM data is organized into frames, where each frame consists of a set of time slots. Each frame is repeated at regular intervals, ensuring continuous transmission of data streams. The frame structure helps in managing the data streams and maintaining the synchronization between the transmitter and receiver.
4. **Multiplexer and Demultiplexer**: At the transmitting end, a multiplexer combines multiple input signals into a single composite signal by assigning each signal to a specific time slot. At the receiving end, a demultiplexer separates the composite signal back into individual signals based on their respective time slots.
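The multiplexer/demultiplexer round trip described above can be sketched in a few lines of Python. This is a minimal toy model, not real signal processing: the streams, frame layout, and function names are illustrative only.

```python
# Minimal sketch of synchronous TDM: the multiplexer interleaves one sample
# per source into fixed-size frames, and the demultiplexer recovers each
# stream by reading its assigned slot from every frame.

def multiplex(streams):
    """Combine equal-length streams into frames, one slot per stream."""
    return list(zip(*streams))  # frame i holds sample i of every source

def demultiplex(frames, num_streams):
    """Recover each stream from its fixed slot position in every frame."""
    return [[frame[slot] for frame in frames] for slot in range(num_streams)]

voice = ["v0", "v1", "v2"]
data  = ["d0", "d1", "d2"]
video = ["x0", "x1", "x2"]

frames = multiplex([voice, data, video])      # 3 frames of 3 slots each
recovered = demultiplex(frames, 3)
assert recovered == [voice, data, video]      # lossless round trip
```

Note that the demultiplexer relies only on slot position, which is why the synchronization described in step 2 is essential: if the receiver's notion of where a frame starts drifts, every stream is corrupted.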
### Types of TDM
1. **Synchronous TDM**: In synchronous TDM, time slots are pre-assigned to each signal, regardless of whether the signal has data to transmit or not. This can lead to inefficiencies if some time slots remain empty due to the absence of data.
2. **Asynchronous TDM (or Statistical TDM)**: Asynchronous TDM addresses the inefficiencies of synchronous TDM by allocating time slots dynamically based on the presence of data. Time slots are assigned only when there is data to transmit, which optimizes the use of the communication channel.
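The difference between the two schemes can be shown with a hypothetical slot allocator (the source names and queued data are invented for illustration): synchronous TDM reserves a slot per source whether or not data is queued, while statistical TDM assigns slots only to sources with pending data and tags each slot with its source.

```python
# Illustrative comparison of slot usage in one frame (toy example data).
# Each source's queue holds the data it wants to send in this frame.
queues = {"A": ["a1"], "B": [], "C": ["c1"], "D": []}

# Synchronous TDM: every source gets a slot; empty queues waste capacity.
sync_frame = [(src, q[0] if q else None) for src, q in queues.items()]

# Statistical TDM: slots go only to active sources, so each slot must
# carry a source tag for the receiver to route the data correctly.
stat_frame = [(src, q[0]) for src, q in queues.items() if q]

print(sync_frame)  # 4 slots, 2 of them carrying nothing
print(stat_frame)  # 2 slots, all carrying data
```

The trade-off visible here is the usual one: statistical TDM uses the channel better, but pays for it with per-slot addressing overhead and more complex scheduling.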
### Applications of TDM
- **Telecommunications**: TDM is extensively used in telecommunication systems, such as in T1 and E1 lines, where multiple telephone calls are transmitted over a single line by assigning each call to a specific time slot.
- **Digital Audio and Video Broadcasting**: TDM is used in broadcasting systems to transmit multiple audio or video streams over a single channel, ensuring efficient use of bandwidth.
- **Computer Networks**: TDM is used in network protocols and systems to manage the transmission of data from multiple sources over a single network medium.
### Advantages of TDM
- **Efficient Use of Bandwidth**: TDM allows multiple signals to share a single transmission medium, making efficient use of the available channel capacity.
Advanced control scheme of doubly fed induction generator for wind turbine us... (IJECEIAES)
This paper describes a speed control scheme for feeding electrical energy onto an electricity network using the doubly fed induction generator (DFIG) for wind power conversion systems. First, a doubly fed induction generator model was constructed. A control law is then formulated to govern the flow of energy between the stator of the DFIG and the grid using three types of controllers: proportional-integral (PI), sliding mode controller (SMC), and second-order sliding mode controller (SOSMC). Their results are compared in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations. MATLAB/Simulink was used to conduct the simulations. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the proposed control system.
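As a loose illustration of the simplest of the three controllers, here is a generic discrete PI tracking loop in Python. This is not the paper's DFIG control law: the gains, setpoint, and the crude first-order plant are all invented for the sketch.

```python
# Generic discrete PI controller tracking a power reference.
# Gains, setpoint, and plant model are illustrative, not from the paper.
def pi_step(error, integral, kp=2.0, ki=0.5, dt=0.01):
    """One PI update: returns (control output, updated integral state)."""
    integral += error * dt
    return kp * error + ki * integral, integral

setpoint = 1.0            # per-unit power reference
output, integral = 0.0, 0.0
for _ in range(3000):
    error = setpoint - output
    u, integral = pi_step(error, integral)
    output += 0.01 * (u - output)   # crude first-order plant response

assert abs(setpoint - output) < 0.05  # settles near the reference
```

The integral term is what removes the steady-state tracking error here; the SMC and SOSMC variants studied in the paper trade this simple structure for stronger robustness to parameter changes and disturbances.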
Harnessing WebAssembly for Real-time Stateless Streaming Pipelines (Christina Lin)
Traditionally, dealing with real-time data pipelines has involved significant overhead, even for straightforward tasks like data transformation or masking. However, in this talk, we’ll venture into the dynamic realm of WebAssembly (WASM) and discover how it can revolutionize the creation of stateless streaming pipelines within a Kafka (Redpanda) broker. These pipelines are adept at managing low-latency, high-data-volume scenarios.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODEL (gerogepatton)
As digital technology becomes more deeply embedded in power systems, protecting the communication networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3) is a multi-tiered application layer protocol extensively used in Supervisory Control and Data Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control. Because the interconnection of these networks makes them vulnerable to a variety of cyberattacks, robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation. To address this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion detection in smart grids. The proposed approach combines a Convolutional Neural Network (CNN) with the Long Short-Term Memory (LSTM) algorithm. We employed a recent intrusion detection dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to train and test our model. Our experiments show that the CNN-LSTM method detects smart grid intrusions substantially better than other deep learning algorithms used for classification. In addition, the proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection accuracy rate of 99.50%.
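The evaluation metrics named above follow the standard confusion-matrix definitions. A small self-contained sketch in Python (the labels below are invented toy data, not results from the DNP3 experiments):

```python
# Standard binary-classification metrics from a confusion matrix.
# The toy labels below are invented, not the paper's experimental data.
def metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

y_true = [1, 1, 0, 0, 1, 0, 1, 0]   # 1 = attack, 0 = benign traffic
y_pred = [1, 0, 0, 0, 1, 1, 1, 0]
acc, prec, rec, f1 = metrics(y_true, y_pred)
```

For intrusion detection, recall (the fraction of real attacks caught) is typically the metric that matters most, since a missed attack is costlier than a false alarm; reporting all four, as the paper does, guards against a model that inflates accuracy on imbalanced traffic.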
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), however, RCA pavement has received fewer comprehensive studies and sustainability assessments.
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw... (IJECEIAES)
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to precisely delineate tumor boundaries from magnetic resonance imaging (MRI) scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The model is rigorously trained and evaluated, exhibiting remarkable performance metrics, including an impressive global accuracy of 99.286%, a high class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of the proposed model. These findings underscore the model's competence in precise brain tumor localization, underscoring its potential to revolutionize medical image analysis and enhance healthcare outcomes. This research paves the way for future exploration and optimization of advanced CNN models in medical imaging, emphasizing addressing false positives and resource efficiency.
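The mean IoU reported above follows the usual per-class intersection-over-union definition. A minimal pure-Python sketch on invented toy masks (this is not the study's evaluation pipeline):

```python
# Per-class intersection-over-union on flattened label masks.
# The tiny masks below are invented for illustration only.
def iou_per_class(truth, pred, num_classes):
    ious = []
    for c in range(num_classes):
        inter = sum(1 for t, p in zip(truth, pred) if t == c and p == c)
        union = sum(1 for t, p in zip(truth, pred) if t == c or p == c)
        ious.append(inter / union if union else 0.0)
    return ious

truth = [0, 0, 1, 1, 1, 0]   # 0 = background, 1 = tumor
pred  = [0, 1, 1, 1, 0, 0]
ious = iou_per_class(truth, pred, 2)
mean_iou = sum(ious) / len(ious)
```

Averaging the per-class scores, as mean IoU does, prevents a huge background class from hiding poor tumor overlap, which is why the study's weighted IoU (98.620%) is so much higher than its mean IoU (79.900%).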
Introduction: e-waste – definition – sources of e-waste – hazardous substances in e-waste – effects of e-waste on environment and human health – need for e-waste management – e-waste handling rules – waste minimization techniques for managing e-waste – recycling of e-waste – disposal and treatment methods of e-waste – mechanism of extraction of precious metals from leaching solution – global scenario of e-waste – e-waste in India – case studies.
Literature Review Basics and Understanding Reference Management.pptx (Dr Ramhari Poudyal)
A three-day training on academic research, focusing on analytical tools, held at United Technical College with support from the University Grants Commission, Nepal, 24-26 May 2024.
A SYSTEMATIC RISK ASSESSMENT APPROACH FOR SECURING THE SMART IRRIGATION SYSTEMS (IJNSA Journal)
The smart irrigation system represents an innovative approach to optimizing water usage in agricultural and landscaping practices. The integration of cutting-edge technologies, including sensors, actuators, and data analysis, empowers this system to provide accurate monitoring and control of irrigation processes by leveraging real-time environmental conditions. The main objective of a smart irrigation system is to optimize water efficiency, minimize expenses, and foster the adoption of sustainable water management methods. This paper conducts a systematic risk assessment by exploring the key components/assets and their functionalities in the smart irrigation system. The crucial role of sensors in gathering data on soil moisture, weather patterns, and plant well-being is emphasized in this system. These sensors enable intelligent decision-making in irrigation scheduling and water distribution, leading to enhanced water efficiency and sustainable water management practices. Actuators enable automated control of irrigation devices, ensuring precise and targeted water delivery to plants. Additionally, the paper addresses the potential threats and vulnerabilities associated with smart irrigation systems. It discusses limitations of the system, such as power constraints and computational capabilities, and calculates the potential security risks. The paper suggests possible risk treatment methods for effective, secure system operation. In conclusion, the paper emphasizes the significant benefits of implementing smart irrigation systems, including improved water conservation, increased crop yield, and reduced environmental impact. Additionally, based on the security analysis conducted, the paper recommends the implementation of countermeasures and security approaches to address vulnerabilities and ensure the integrity and reliability of the system.
By incorporating these measures, smart irrigation technology can revolutionize water management practices in agriculture, promoting sustainability, resource efficiency, and safeguarding against potential security threats.
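Risk in such assessments is commonly scored as likelihood times impact per threat. A toy sketch of that calculation in Python follows; the threat entries and scores are invented examples, not the paper's actual risk register:

```python
# Toy qualitative risk scoring: risk = likelihood x impact on 1-5 scales.
# Threats and scores below are invented, not the paper's findings.
threats = {
    "sensor data spoofing":       {"likelihood": 4, "impact": 3},
    "actuator command injection": {"likelihood": 2, "impact": 5},
    "battery-drain DoS":          {"likelihood": 3, "impact": 2},
}

def risk_score(threat):
    return threat["likelihood"] * threat["impact"]

# Rank threats so that mitigation effort targets the highest risks first.
ranked = sorted(threats, key=lambda name: risk_score(threats[name]),
                reverse=True)
for name in ranked:
    print(f"{name}: {risk_score(threats[name])}")
```

Ranking by the combined score, rather than by likelihood or impact alone, is what lets a resource-constrained system (limited power and computation, as the paper notes) prioritize which countermeasures to deploy first.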