Comparison of Max100, SWARA and Pairwise Weight Elicitation Methods (IJERA Editor)
Decision making is part of every aspect of life and is realised through each action taken. Finding correct and satisfactory solutions to problems is very important for individuals, institutions and organizations, and Multiple Criteria Decision Making (MCDM) techniques have been developed for this purpose. Earlier studies show that the weight elicitation methods used in solving MCDM problems play an important role in defining the importance of criteria and in obtaining the best and most satisfying results for decision makers. The aim of the paper is to compare the range variability between the criteria for the Max100, Stepwise Weight Assessment Ratio Analysis (SWARA) and Pairwise Comparison weight elicitation methods and to suggest conditions for using each method. This is the first time SWARA has been compared with the Pairwise Comparison and Max100 methods, which makes this study distinctive. The results show that the variability of the Pairwise Comparison method is higher than that of the Max100 and SWARA methods. In addition, Max100 is found to be the easiest method to use, while the Pairwise Comparison method's way of scoring is judged the most reliable. In the light of the results obtained from the methods, some conditions of usage are suggested.
Leave one out cross validated Hybrid Model of Genetic Algorithm and Naïve Bay... (IJERA Editor)
This paper presents a new approach to selecting a reduced number of features in databases. Every database has a given number of features, but some of these features can be redundant and can be harmful, as well as confusing the classification process. The proposed method first applies a binary-coded genetic algorithm to select a small subset of features. The importance of these features is judged by applying the Naïve Bayes (NB) method of classification. The best reduced subset of features, the one with high classification accuracy on the given databases, is adopted. The classification accuracy obtained by the proposed method is compared with that reported recently in publications on eight databases. The proposed method performs satisfactorily on these databases and achieves higher classification accuracy with a smaller number of features.
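A minimal sketch of the wrapper idea described above (not the authors' implementation), assuming scikit-learn is available: the fitness of a binary feature mask is the leave-one-out cross-validated Naïve Bayes accuracy on the selected columns, and a simple generational GA searches over masks. The GA operators (truncation selection, one-point crossover, bit-flip mutation) are placeholder choices, not details taken from the paper.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import LeaveOneOut, cross_val_score

def fitness(mask, X, y):
    # Leave-one-out cross-validated NB accuracy on the selected features
    if not mask.any():
        return 0.0
    scores = cross_val_score(GaussianNB(), X[:, mask], y, cv=LeaveOneOut())
    return scores.mean()

def ga_select(X, y, pop_size=20, generations=10, p_mut=0.05, seed=None):
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    pop = rng.random((pop_size, n)) < 0.5            # random binary feature masks
    for _ in range(generations):
        fit = np.array([fitness(ind, X, y) for ind in pop])
        order = np.argsort(fit)[::-1]
        parents = pop[order[: pop_size // 2]]        # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)                 # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(n) < p_mut           # bit-flip mutation
            children.append(child)
        pop = np.vstack([parents, np.array(children)])
    fit = np.array([fitness(ind, X, y) for ind in pop])
    return pop[fit.argmax()]                         # best reduced feature mask
```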
The goal of this project is to find the best tool for predicting the life expectancy of people with Hepatitis B, a worldwide disease with a high mortality rate. Machine learning methods applied to this problem by different researchers have been reviewed, including classification models, logistic regression, the Recursive Feature Elimination algorithm, cirrhosis mortality models, Extreme Gradient Boosting (XGBoost), Random Forest and Decision Trees. Some algorithms and models showed promising results whereas others did not perform as well. Area Under the ROC Curve (AUROC) analysis was used to assess the models: the PSO model had the lowest AUROC value, while the ADT model had the highest accuracy; XGBoost showed adequate predictive performance, and the remaining models showed good calibration.
The presentation compares three SEM tools, (1) the SAS CALIS procedure, (2) R's lavaan package, and (3) Mplus version 8.0, using the MIDUS II dataset.
Vinayaka : A Semi-Supervised Projected Clustering Method Using Differential E... (ijseajournal)
Differential Evolution (DE) is an algorithm for evolutionary optimization. Clustering problems have been solved using DE-based clustering methods, but these methods may fail to find clusters hidden in subspaces of high-dimensional datasets. Subspace and projected clustering methods have been proposed in the literature to find clusters that are present in subspaces of a dataset. In this paper we propose VINAYAKA, a semi-supervised projected clustering method based on DE. In this method DE optimizes a hybrid cluster validation index, in which the Subspace Clustering Quality Estimate (SCQE) index is used for internal cluster validation and Gini index gain is used for external cluster validation. The proposed method is applied to the Wisconsin breast cancer dataset.
On Confidence Intervals Construction for Measurement System Capability Indica... (IRJESJOURNAL)
Abstract: There are many criteria that have been proposed to determine the capability of a measurement system, all based on estimates of variance components. Some of them are the Precision to Tolerance Ratio, the Signal to Noise Ratio and the probabilities of misclassification. For most of these indicators there are no exact confidence intervals, since the exact distributions of the point estimators are not known. In such situations, two approaches are widely used to obtain approximate confidence intervals: the Modified Large Samples (MLS) methods initially proposed by Graybill and Wang, and the construction of Generalized Confidence Intervals (GCI) introduced by Weerahandi. In this work we focus on the construction of confidence intervals by the generalized approach in the context of gauge repeatability and reproducibility studies. Since GCI are obtained by simulation procedures, we analyze the effect of the number of simulations on the variability of the confidence limits, as well as the effect of the size of the experiment designed to collect data on the precision of the estimates. Both studies allowed us to derive some practical implementation guidelines for the use of the GCI approach. We finally present a real case study in which this technique was applied to evaluate the capability of a destructive measurement system.
This presentation is on a recommender system for question paper prediction using machine learning techniques. We conducted a literature survey and implemented the system using the same techniques.
A Two Stage Estimator of Instrumental Variable Quantile Regression for Panel ... (ijtsrd)
This paper proposes a two-stage instrumental variable quantile regression (2S-IVQR) estimator for the time-invariant effects in a panel data model. In the first stage, we introduce dummy variables to represent the time-invariant effects and use quantile regression to estimate the effects of individual covariates; the advantage of the first stage is that it reduces the calculations and the number of estimated parameters. In the second stage, we adopt an instrumental variables approach and the 2SLS method. In addition, we present a proof of the 2S-IVQR estimator's large-sample properties. A Monte Carlo simulation study shows that the bias and RMSE of our estimator decrease as the sample size increases, and that our estimator has lower bias and RMSE than the other two estimators. Tao Li, "A Two-Stage Estimator of Instrumental Variable Quantile Regression for Panel Data with Time-Invariant Effects", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume 5, Issue 6, October 2021. URL: https://www.ijtsrd.com/papers/ijtsrd47716.pdf Paper URL: https://www.ijtsrd.com/other-scientific-research-area/other/47716/a-twostage-estimator-of-instrumental-variable-quantile-regression-for-panel-data-with-timeinvariant-effects/tao-li
PRIORITIZING THE BANKING SERVICE QUALITY OF DIFFERENT BRANCHES USING FACTOR A... (ijmvsc)
In recent years, India's service industry has been developing rapidly. The objective of the study is to explore the dimensions of customer-perceived service quality in the context of the Indian banking industry. In order to categorize customer needs into quality dimensions, Factor Analysis (FA) has been carried out on customer responses obtained through a questionnaire survey. The Analytic Hierarchy Process (AHP) is employed to determine the weights of the banking service quality dimensions; the priority structure of the quality dimensions gives bank management an idea of how to allocate resources effectively to achieve greater customer satisfaction. The Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) is used to obtain the final ranking of the different branches.
Statistical modelling is of prime importance in every sphere of data analysis. This paper reviews the justification for fitting a linear model to collected data. Inappropriateness of the fitted model may arise for two reasons: (1) a wrong choice of the analytical form, or (2) the adverse effects of outliers and/or influential observations. The aim is to identify outliers using the deletion technique. The result of deletion diagnostics is extended to the exchangeable model, some results on checking the analytical form of the model are reviewed, and the technique is illustrated through an example.
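A hedged sketch of the deletion idea in its simplest form (ordinary least squares refit with each case left out; illustrative only, not the exchangeable-model diagnostics developed in the paper): large shifts in the fitted coefficients when a case is deleted flag that case as influential.

```python
import numpy as np

def deletion_influence(x, y):
    # Refit a simple linear regression leaving out each observation in turn
    X1 = np.column_stack([np.ones(len(y)), x])          # design matrix with intercept
    beta_full, *_ = np.linalg.lstsq(X1, y, rcond=None)
    shifts = []
    for i in range(len(y)):
        keep = np.arange(len(y)) != i
        beta_i, *_ = np.linalg.lstsq(X1[keep], y[keep], rcond=None)
        shifts.append(np.linalg.norm(beta_full - beta_i))  # coefficient shift
    return np.array(shifts)                              # one influence score per case

rng = np.random.default_rng(0)
x = rng.normal(size=30)
y = 2 + 3 * x + rng.normal(scale=0.5, size=30)
y[5] += 10                                               # plant an outlier
print(deletion_influence(x, y).argmax())                 # likely flags case 5
```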
Classification accuracy analyses using Shannon's Entropy (IJERA Editor)
There are many methods for determining classification accuracy. In this paper, the significance of the entropy of training signatures in classification is shown. The entropy of the training signatures of a raw digital image represents the heterogeneity of the brightness values of the pixels in different bands. This implies that an image comprising a homogeneous land use/land cover (lu/lc) category will be associated with nearly the same reflectance values, resulting in a very low entropy value. On the other hand, an image characterized by diverse lu/lc categories will consist of widely differing reflectance values, so the entropy of such an image will be relatively high. This concept leads to an analysis of classification accuracy. Although entropy has been used many times in remote sensing (RS) and GIS, its use in the determination of classification accuracy is a new approach.
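An illustrative sketch (not the paper's code) of how the Shannon entropy of one band of a training signature captures this homogeneity/heterogeneity contrast; the synthetic pixel values below are arbitrary.

```python
import numpy as np

def band_entropy(pixels, bins=256):
    # Histogram the brightness values and compute Shannon entropy in bits
    counts, _ = np.histogram(pixels, bins=bins, range=(0, bins))
    p = counts / counts.sum()
    p = p[p > 0]                        # drop empty bins (0*log0 treated as 0)
    return -(p * np.log2(p)).sum()

homogeneous = np.full(1000, 120) + np.random.randint(-2, 3, 1000)  # one lu/lc class
diverse = np.random.randint(0, 256, 1000)                          # mixed classes
print(band_entropy(homogeneous), band_entropy(diverse))            # low vs high entropy
```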
Comparative Analysis of Face Recognition Algorithms for Medical Application (AM Publications)
Biometric-based techniques have emerged for recognizing and authenticating individuals. In the field of face recognition, recognition of faces after plastic surgery is still a little-explored area, so the use of face recognition for surgically altered faces introduces a new challenge for designing future face recognition systems. Face recognition after plastic surgery can lead to rejection of genuine users or acceptance of impostors: altering facial geometry and texture increases the intra-class variability between the pre- and post-surgery images of the same individual, so matching post-surgery images with pre-surgery images becomes a difficult task for automatic face recognition algorithms. This paper tests two popular face recognition algorithms, PCA and LDA, on a plastic surgery database and compares the algorithms on recognition rate for better performance. Finally, conclusions are drawn from the results.
The Analytic Hierarchy Process (AHP) has been a useful methodology for multi-criteria decision making environments, with substantial applications in recent years. However, the weakness of the traditional AHP method lies in its use of subjective, judgement-based assessment and a standardized scale for constructing the pairwise comparison matrix. The paper proposes a Condorcet Voting Theory based AHP method for solving multi-criteria decision making problems, in which AHP is combined with a Condorcet-theory-based preferential voting technique followed by a quantitative ratio method for framing the comparison matrix, instead of the standard importance scale of the traditional AHP approach. The consistency ratio (CR) is calculated for both approaches to determine and compare their consistency. The results reveal the Condorcet-AHP method to be superior, generating a lower consistency ratio and a more accurate ranking of the criteria for solving MCDM problems.
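As a hedged illustration of the consistency check mentioned above (standard eigenvector-based AHP, not the paper's Condorcet variant): the priority weights are the normalized principal eigenvector of the pairwise comparison matrix, and the consistency ratio compares the consistency index against Saaty's random index. The example matrix is made up.

```python
import numpy as np

RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}   # Saaty's random index values

def ahp_weights(A):
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)                 # principal eigenvalue
    w = np.abs(vecs[:, k].real)
    w = w / w.sum()                          # normalized priority weights
    n = A.shape[0]
    ci = (vals.real[k] - n) / (n - 1)        # consistency index
    cr = ci / RI[n]                          # consistency ratio
    return w, cr

# Example 3x3 pairwise comparison matrix on Saaty's 1-9 scale
A = np.array([[1, 3, 5],
              [1/3, 1, 2],
              [1/5, 1/2, 1]], float)
w, cr = ahp_weights(A)
print(w, cr)   # CR < 0.1 is conventionally taken as acceptably consistent
```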
Karah Arriaga
Re:Module 6 DQ 2
Quantitative studies involve the numerical analysis of variables within a research study. In Quantitative studies, variables can be mathematically and statistically analyzed to explain certain theories and their correlations (Yilmaz, 2013). A correlational design is important in quantitative program evaluations because the variables presented will be measured on a scale. It would be appropriate to calculate the correlation coefficients to get a measurement of the linear association between the variables in the study. The statistical technique used in a study depends on several factors such as the number of variables being studied, the types of variables, and the nature of the overall evaluation. For example, if there are two or more variables that are quantitative and evaluating a relationship or prediction, then a multiple regression could be used. A multiple regression is similar to a linear analysis and it is used to predict the values of a variable based on two or more other variables in the study (Norusis, 2010).
References
Norusis, M. J. (2010). PASW Statistics 18 Guide to Data Analysis. Upper Saddle River, NJ: Prentice Hall.
Yilmaz, K. (2013). Comparison of quantitative and qualitative research traditions: epistemological, theoretical, and methodological differences. European Journal of Education, 48(2), 311-325. doi:10.1111/ejed.12014
Multimodal authentication is one of the prime concepts in current real-world applications, and various approaches have been proposed for it. In this paper, an intuitive strategy is proposed as a framework for providing a more secure key in the biometric security setting. First, features are extracted from the chosen biometric patterns through PCA via SVD; key components are then extracted using the LU factorization technique, selected with different key sizes, and combined using a convolution kernel method (the exponential Kronecker product, eKP) in a Context-Sensitive Exponent Associative Memory (CSEAM) model. Verification proceeds in a similar way and is assessed with the MSE measure. This model gives better outcomes when compared with SVD factorization [1] for feature selection. The process is computed for different key sizes and the results are presented.
ABSTRACT: This paper critically examines a broad view of Structural Equation Modeling (SEM) with a view to pointing out how researchers can employ this model in future research, with specific focus on several traditional multivariate procedures such as factor analysis, discriminant analysis and path analysis. The study employed a descriptive survey and historical research design, and the data were analysed via descriptive statistics, correlation coefficients and reliability measures. The study concludes that novice researchers must take care with the assumptions and concepts of Structural Equation Modeling while building a model to check a proposed hypothesis. SEM is an evolving technique that is expanding into new fields and providing new insights to researchers for conducting longitudinal investigations.
Adjusting primitives for graph : SHORT REPORT / NOTES (Subhajit Sahu)
Graph algorithms, like PageRank ... Compressed Sparse Row (CSR) is an adjacency-list based graph representation that is ...
Multiply with different modes (map)
1. Performance of sequential execution based vs OpenMP based vector multiply.
2. Comparing various launch configs for CUDA based vector multiply.
Sum with different storage types (reduce)
1. Performance of vector element sum using float vs bfloat16 as the storage type.
Sum with different modes (reduce)
1. Performance of sequential execution based vs OpenMP based vector element sum.
2. Performance of memcpy vs in-place based CUDA based vector element sum.
3. Comparing various launch configs for CUDA based vector element sum (memcpy).
4. Comparing various launch configs for CUDA based vector element sum (in-place).
Sum with in-place strategies of CUDA mode (reduce)
1. Comparing various launch configs for CUDA based vector element sum (in-place).
Similar to Calibration approach for parameter estimation.pptx:
Analysis insight about a Flyball dog competition team's performance (roli9797)
Insights from my analysis of a Flyball dog competition team's performance last year. Find more: https://github.com/rolandnagy-ds/flyball_race_analysis/tree/main
Techniques to optimize the pagerank algorithm usually fall in two categories. One is to try reducing the work per iteration, and the other is to try reducing the number of iterations. These goals are often at odds with one another. Skipping computation on vertices which have already converged has the potential to save iteration time. Skipping in-identical vertices, with the same in-links, helps reduce duplicate computations and thus could help reduce iteration time. Road networks often have chains which can be short-circuited before pagerank computation to improve performance. Final ranks of chain nodes can be easily calculated. This could reduce both the iteration time, and the number of iterations. If a graph has no dangling nodes, pagerank of each strongly connected component can be computed in topological order. This could help reduce the iteration time, no. of iterations, and also enable multi-iteration concurrency in pagerank computation. The combination of all of the above methods is the STICD algorithm. [sticd] For dynamic graphs, unchanged components whose ranks are unaffected can be skipped altogether.
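As an illustration of the first of these ideas, here is a hedged Python sketch (not the report's implementation) of power-iteration PageRank that freezes vertices whose rank has stopped changing; the toy graph, tolerance and damping factor are arbitrary choices for the example, and every vertex is assumed to appear as a key with no dangling nodes.

```python
def pagerank_skip_converged(graph, damping=0.85, tol=1e-10, max_iter=100):
    # graph: dict mapping each vertex to its list of out-neighbours
    n = len(graph)
    rank = {v: 1.0 / n for v in graph}
    active = set(graph)                         # vertices still being updated
    inbound = {v: [] for v in graph}            # in-neighbour lists for pull-style updates
    for u, outs in graph.items():
        for v in outs:
            inbound[v].append(u)
    for _ in range(max_iter):
        new = {}
        for v in active:
            s = sum(rank[u] / len(graph[u]) for u in inbound[v])
            new[v] = (1 - damping) / n + damping * s
        # freeze vertices whose rank changed less than the tolerance
        converged = {v for v in active if abs(new[v] - rank[v]) < tol}
        rank.update(new)
        active -= converged
        if not active:
            break
    return rank

g = {"a": ["b"], "b": ["c"], "c": ["a", "b"]}
print(pagerank_skip_converged(g))
```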
Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ... (Subhajit Sahu)
Abstract — Levelwise PageRank is an alternative method of PageRank computation which decomposes the input graph into a directed acyclic block-graph of strongly connected components, and processes them in topological order, one level at a time. This enables calculation for ranks in a distributed fashion without per-iteration communication, unlike the standard method where all vertices are processed in each iteration. It however comes with a precondition of the absence of dead ends in the input graph. Here, the native non-distributed performance of Levelwise PageRank was compared against Monolithic PageRank on a CPU as well as a GPU. To ensure a fair comparison, Monolithic PageRank was also performed on a graph where vertices were split by components. Results indicate that Levelwise PageRank is about as fast as Monolithic PageRank on the CPU, but quite a bit slower on the GPU. Slowdown on the GPU is likely caused by a large submission of small workloads, and expected to be non-issue when the computation is performed on massive graphs.
Adjusting OpenMP PageRank : SHORT REPORT / NOTES (Subhajit Sahu)
For massive graphs that fit in RAM, but not in GPU memory, it is possible to take advantage of a shared-memory system with multiple CPUs, each with multiple cores, to accelerate PageRank computation. If the NUMA architecture of the system is properly taken into account with good vertex partitioning, the speedup can be significant. To take steps in this direction, experiments are conducted to implement PageRank in OpenMP using two different approaches, uniform and hybrid. The uniform approach runs all primitives required for PageRank in OpenMP mode (with multiple threads). On the other hand, the hybrid approach runs certain primitives in sequential mode (i.e., sumAt, multiply).
STATATHON: Unleashing the Power of Statistics in a 48-Hour Knowledge Extravag... (sameer shah)
"Join us for STATATHON, a dynamic 2-day event dedicated to exploring statistical knowledge and its real-world applications. From theory to practice, participants engage in intensive learning sessions, workshops, and challenges, fostering a deeper understanding of statistical methodologies and their significance in various fields."
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Data and AI
Round table discussion of vector databases, unstructured data, ai, big data, real-time, robots and Milvus.
A lively discussion with NJ Gen AI Meetup Lead, Prasad and Procure.FYI's Co-Found
ViewShift: Hassle-free Dynamic Policy Enforcement for Every Data Lake (Walaa Eldin Moustafa)
Dynamic policy enforcement is becoming an increasingly important topic in today’s world where data privacy and compliance is a top priority for companies, individuals, and regulators alike. In these slides, we discuss how LinkedIn implements a powerful dynamic policy enforcement engine, called ViewShift, and integrates it within its data lake. We show the query engine architecture and how catalog implementations can automatically route table resolutions to compliance-enforcing SQL views. Such views have a set of very interesting properties: (1) They are auto-generated from declarative data annotations. (2) They respect user-level consent and preferences (3) They are context-aware, encoding a different set of transformations for different use cases (4) They are portable; while the SQL logic is only implemented in one SQL dialect, it is accessible in all engines.
#SQL #Views #Privacy #Compliance #DataLake
Calibration approach for parameter estimation.pptx
1. SOME IMPROVED METHODS OF ESTIMATION FOR FINITE POPULATION PARAMETERS
Presentation (For JRF to SRF Upgradation)
Under the Supervision of: Dr. M. K. Chaudhary (Associate Professor)
Submitted by: Basant Kumar Ray (Research Scholar)
Department of Statistics, Institute of Science, Banaras Hindu University, Varanasi – 221005
2. Introduction
The auxiliary information can be used in many ways to obtain an improved estimator of an unknown finite population parameter in survey sampling. Some well-known techniques that use auxiliary information are the ratio method of estimation, the product method of estimation, the regression method of estimation, combined ratio-type estimation, unbiased ratio-type estimation, etc. These techniques incorporate auxiliary information to improve the precision of the estimation procedure. In our study, we have used the calibration approach to incorporate the auxiliary information and find improved estimators for the finite population parameters (such as the mean, total, variance, etc.).
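For concreteness, the classical forms of three of these estimators of the population mean (standard textbook formulas, not results specific to this study) are

$$
\bar{y}_R = \bar{y}\,\frac{\bar{X}}{\bar{x}}, \qquad
\bar{y}_P = \bar{y}\,\frac{\bar{x}}{\bar{X}}, \qquad
\bar{y}_{lr} = \bar{y} + b\,(\bar{X} - \bar{x}),
$$

where $\bar{y}$ and $\bar{x}$ are the sample means of the study and auxiliary variables, $\bar{X}$ is the known population mean of the auxiliary variable, and $b$ is the sample regression coefficient of $y$ on $x$.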
3. What is the Calibration Approach?
The calibration estimation approach is a reweighting technique that incorporates auxiliary information in order to find improved estimators for the finite population parameters.
The first question is "how to incorporate the auxiliary variable into the estimation process." The answer to this question is "by using some constraints based on the auxiliary variable."
The calibration approach produces a set of optimum weights, and these calibrated weights are required to be as close as possible to the design weights. This first requirement of the calibration approach, that the calibrated weights must be as close as possible to the design weights, is satisfied by taking a distance function (such as the chi-square type distance, the modified chi-square type distance, the Hellinger type distance, etc.).
4. This distance function gives the distance between the calibrated weights and the design weights. In our study, we have considered only the chi-square type distance function, so as to obtain the expression for the calibrated weights in explicit form and to reduce the complexity of the computation. For our purpose, we minimize the appropriate distance function mathematically with respect to the calibrated weights.
The calibrated weights are such that they reproduce the exactly known population characteristics (such as the mean, total, variance, etc.) when applied to the sample values of the auxiliary variables.
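A worked illustration of this minimization in its simplest single-constraint form (the standard chi-square calibration of Deville and Särndal (1992); the estimators proposed later in this study use stratum-level constraints, so this is only for orientation):

$$
\min_{w_i}\; \sum_{i \in s} \frac{(w_i - d_i)^2}{d_i q_i}
\quad \text{subject to} \quad \sum_{i \in s} w_i x_i = X,
$$

where $d_i = 1/\pi_i$ are the design weights, $q_i$ are known positive constants, $x_i$ is the auxiliary variable and $X$ is its known population total. Minimizing with a Lagrange multiplier gives

$$
w_i = d_i + \frac{d_i q_i x_i}{\sum_{j \in s} d_j q_j x_j^{\,2}}\left(X - \sum_{j \in s} d_j x_j\right),
$$

so the calibrated weights reproduce $X$ exactly, and the calibration estimator $\hat{Y}_{cal} = \sum_{i \in s} w_i y_i$ coincides with the familiar generalized regression (GREG) estimator of the population total.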
The other interesting question is, "Why are we using the calibration approach and not other methods?" The answer is "because of its benefits." Some of its merits are listed below.
5. Some Merits of the Calibration Approach:
The calibrated weights reproduce the exactly known population characteristics (such as the mean, total, variance, etc.) when applied to the sample values of the auxiliary variable. Since the calibrated weights give an exact estimate of the population characteristics for the auxiliary variables, and the auxiliary variables are strongly correlated with the study variable, the approach should also work well for the study variable.
Traditional methods use response models in the non-response estimation process; there is no need for a response model in the calibration approach.
Traditional methods are sometimes complex, time-consuming and inconvenient. The calibration approach is a simple, effective and convenient method for the estimation of finite population parameters.
6. The calibration approach involves easy computation of the calibrated weights.
Statistical organizations in some countries have developed software to compute the calibrated weights, such as GES (Statistics Canada's generalized estimation software) and CLAN (Statistics Sweden).
Some Demerits of the Calibration Approach:
Sometimes we get negative calibrated weights with the chi-square type distance function; in some situations such negative weights have no meaningful interpretation.
Solutions do not always exist for some distance functions, although solutions always exist for the chi-square distance function and the modified minimum entropy distance function.
For some distance functions, we cannot obtain a linear or closed-form expression for the calibrated weights, and expressions of this type are not easy to treat further.
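A hedged numerical sketch of that computation (illustrative data and design weights, not taken from the study), implementing the closed-form chi-square calibrated weights shown earlier and showing both the calibration property and the possibility of negative weights noted above:

```python
import numpy as np

def calibrate_chisq(d, x, X_total, q=None):
    """Weights w minimizing sum((w-d)^2/(d*q)) subject to sum(w*x) = X_total."""
    q = np.ones_like(d) if q is None else q
    lam = (X_total - np.sum(d * x)) / np.sum(d * q * x ** 2)
    return d + d * q * x * lam

d = np.array([10.0, 10.0, 12.5, 12.5, 8.0])   # design weights (1/inclusion prob.), made up
x = np.array([3.0, 5.0, 2.0, 7.0, 4.0])       # auxiliary variable on the sample
y = np.array([12.0, 21.0, 9.0, 30.0, 15.0])   # study variable on the sample
X_total = 230.0                                # assumed known population total of x

w = calibrate_chisq(d, x, X_total)
print(np.sum(w * x))    # reproduces X_total exactly (the calibration property)
print(np.sum(w * y))    # calibration (GREG-type) estimate of the y-total
# Note: with this distance function some w can come out negative,
# which is one of the demerits listed above.
```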
7. Literature Review
The first attempt at the calibration estimation approach was made by Deville and Särndal (1992) in survey sampling. Singh et al. (1998) proposed a calibration estimator for the population mean in stratified random sampling using one constraint based on a single auxiliary variable. Tracy et al. (2003) proposed a calibration estimator for the population mean in stratified random sampling using two constraints based on a single auxiliary variable. Rao et al. (2012) used the calibration approach to find a calibration estimator of the population mean in stratified random sampling using two constraints based on two auxiliary variables.
8. Nidhi et al. (2017) proposed calibration estimators of the population mean in stratified random sampling and stratified double sampling. Other authors who have worked on the calibration approach are Estevao and Särndal (2006), Kim (2007), Särndal (2007), Kim and Park (2010), Koyuncu and Kadilar (2014), Clement and Enang (2015), Koyuncu and Kadilar (2016), etc.
Several attempts have been made by researchers to deal with the problem of non-response; the first effort was made by Hansen and Hurwitz (1946). Qasim (2014) used the calibration approach to estimate the population total in the presence of non-response in simple random sampling and Pareto sampling. Other authors who have worked on the non-response problem include Khare and Srivastava (1993), Lundström and Särndal (1999), Chang and Kott (2007), Chaudhary et al. (2014), Andersson and Särndal (2016), Chaudhary et al. (2020), etc.
9. Objectives of the Present Study:
The objectives of the present study are:
To propose efficient calibration estimators for the finite population mean in stratified sampling and stratified double sampling, motivated by Nidhi et al. (2017). For this purpose, we have used the chi-square type distance function and two calibration constraints for each stratum. An empirical study has also been performed to check the efficiency of the proposed calibration estimators.
To propose efficient calibration estimators for the finite population mean in stratified sampling in the presence of non-response, motivated by Dykes et al. (2015) and Qasim (2014). For this purpose, we have used the chi-square type distance function and some calibration constraints. Empirical studies have also been performed to check the efficiency of the proposed calibration estimators.
10. To propose calibration estimators for the finite population mean in stratified sampling by improving the efficiency of the Hansen and Hurwitz (1946) estimator when some units do not respond. Empirical studies have also been performed to check the efficiency of the proposed calibration estimators.
To propose generalized calibration estimators for the finite population mean in stratified sampling and stratified double sampling. We have used the chi-square type distance function and n calibration constraints in this study.
To propose efficient calibration estimators for the finite population variance in stratified sampling using different calibration constraints and the chi-square type distance function. Empirical studies have also been performed to check the efficiency of the proposed calibration estimators. (A single-constraint stratified sketch of this calibration construction is given below for orientation.)
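For orientation only, the simplest stratified counterpart of the construction used in these objectives, with a single calibration constraint (following Singh et al. (1998); the proposed estimators use two constraints per stratum and are not reproduced here), replaces the usual stratum weights $W_h = N_h/N$ by calibrated weights $\Omega_h$:

$$
\min_{\Omega_h}\; \sum_{h=1}^{L} \frac{(\Omega_h - W_h)^2}{W_h q_h}
\quad \text{subject to} \quad \sum_{h=1}^{L} \Omega_h \bar{x}_h = \bar{X},
$$

which gives

$$
\Omega_h = W_h + \frac{W_h q_h \bar{x}_h}{\sum_{g=1}^{L} W_g q_g \bar{x}_g^{\,2}}\left(\bar{X} - \sum_{g=1}^{L} W_g \bar{x}_g\right),
\qquad
\hat{\bar{Y}}_{cal} = \sum_{h=1}^{L} \Omega_h \bar{y}_h,
$$

where $\bar{x}_h$ and $\bar{y}_h$ are the stratum sample means and $\bar{X}$ is the known population mean of the auxiliary variable.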
11. Research Paper Accepted for Publication:
Accepted a research paper entitled "A Calibration Based Approach on Estimation of Mean of a Stratified Population in the Presence of Non-Response" in the journal Communications in Statistics – Theory and Methods.
Published Research Papers:
Chaudhary, M. K., Prajapati, A., & Ray, B. K. (2021). Some convex-type classes of estimators of finite population mean under random non-response. Journal of Information and Optimization Sciences, 42(8), 1951-1965.
Chaudhary, M. K., Dutta, T., & Ray, B. K. (2021). A New Calibration Estimator of Population Mean in Two-Stage Sampling Design using Population Level Auxiliary Information. International Journal of Statistics and Reliability Engineering, 8(1), 112-120.
12. Conferences Attended:
Presented a research paper entitled "Calibration Approach for Estimating the Mean of a Stratified Population in the Presence of Non-response" at the conference RASTA-2021, held at the Department of Statistics, Institute of Science, Banaras Hindu University, during December 15-17, 2021.
Presented a research paper entitled "Calibration approach for estimating the population mean under stratified single/two-phase sampling scheme" at the conference MMA-2022, held at the Department of Mathematics, Institute of Science, Banaras Hindu University, during 29-30 January 2022.
13. References
[1] Andersson, P. G., & Särndal, C. E. (2016). Calibration for non-response treatment using auxiliary information at different levels. In The Fifth International Conference on Establishment Surveys (ICES-V), Geneva, Switzerland, June 20-23, 2016.
[2] Chang, T., & Kott, P. S. (2007). Using Calibration Weighting to Adjust for Nonresponse Under a Plausible Model (with full appendices) (No. 1496-2016-130586).
[3] Chaudhary, M. K., Kumar, A., Vishwakarma, G. K., & Kadilar, C. (2020). Family of combined-type estimators for population mean using stratified two-phase sampling scheme under non-response. Journal of Statistics and Management Systems, 1-14.
[4] Chaudhary, M. K., Prajapati, A., & Singh, R. (2014). Two-phase sampling in estimation of population mean in the presence of non-response. Infinite Study.
14. [5] Dykes, L., Singh, S., Sedory, S. A., & Louis, V. (2015). Calibrated estimators of population mean for a mail survey design. Communications in Statistics - Theory and Methods, 44(16), 3403-3427.
[6] Khare, B. B., & Srivastava, S. (1993). Estimation of population mean using auxiliary character in presence of non-response. National Academy Science Letters, 16, 111-111.
[7] Kim, J. M., Sungur, E. A., & Heo, T. Y. (2007). Calibration approach estimators in stratified sampling. Statistics & Probability Letters, 77(1), 99-103.
[8] Koyuncu, N., & Kadilar, C. (2014). A new calibration estimator in stratified double sampling. Hacettepe Journal of Mathematics and Statistics, 43(2), 1-9.
[9] Lundström, S., & Särndal, C. E. (1999). Calibration as a standard method for treatment of nonresponse. Journal of Official Statistics, 15(2), 305.
15. [10] Nidhi, Sisodia, B. V. S., Singh, S., & Singh, S. K. (2017). Calibration approach estimation of the mean in stratified sampling and stratified double sampling. Communications in Statistics - Theory and Methods, 46(10), 4932-4942.
[11] Qasim, M. (2014). Calibration Estimation under Nonresponse based on Simple Random Sampling Vs Pareto Sampling.
[12] Reddy, M. K., Rao, K. R., & Boiroju, N. K. (2010). Comparison of ratio estimators using Monte Carlo simulation. International Journal of Agriculture and Statistical Sciences, 6(2), 517-527.
[13] Särndal, C. E. (2007). The calibration approach in survey theory and practice. Survey Methodology, 33(2), 99-119.
[14] Särndal, C. E., & Lundström, S. (2005). Estimation in Surveys with Nonresponse. John Wiley & Sons.
[15] Singh, S., Horn, S., & Yu, F. (1998). Estimation of variance of general regression estimator: higher level calibration approach. Survey Methodology, 48, 41-50.
[16] Singh, S. (2003). Advanced Sampling Theory with Applications: How Michael "Selected" Amy (Vol. 2). Springer Science & Business Media.