ANALYSIS OF BANK PRODUCTIVITY USING PANEL CAUSALITY TEST

Bhadrappa Haralayya
Post Doctoral Fellowship Research Scholar,
Srinivas University, Mangalore, India.
bhadrappabhavimani@gmail.com
ORCID iD: 0000-0003-3214-7261

P. S. Aithal
Professor, College of Management and Commerce,
Srinivas University, Mangalore, India.
psaithal@gmail.com
ORCID iD: 0000-0002-4691-8736

Journal of Huazhong University of Science and Technology, ISSN 1671-4512, Vol. 50, Issue 6
ABSTRACT
To identify the sources of variation in TFP growth across scheduled commercial banks (SCBs) in India, and the factors that can contribute to the overall improvement, growth, and performance of the banking sector in India, the present study carried out a causality analysis. To examine this relationship, the present contribution estimates the causal relationship of the TFP indices with financial indicators such as business per branch, business per employee, NIM/TA, profit, and ROA. It is rather difficult to ascertain labour productivity directly; therefore, labour productivity free of other influential factors, such as the cost of services rendered by the banks, can be computed as the ratios of business per employee and business per branch. The business of commercial banks is computed as the aggregate of deposits plus credit. The initiation of financial reforms and the entry of new private and foreign sector banks in India have given impetus to the expansion of business per employee and business per branch, with the aim of building a competitive and sound environment in the banking industry in India. Intense competition prompted banks to extend their activities into unbanked territories and contributed to the rationalization of branches by several banks. Such strategies fostered new business practices, such as the sharing of ATMs, so as to create a cost-effective, labour-intensive, and profitable banking system in India.
Keywords: Panel Causality Test, Panel Granger Causality Test, Cross-Section Dependency Test, Panel
Unit Root Test.
1. INTRODUCTION

However, NIM, computed as the difference between total interest earned and total interest expended, normalized by assets, indicates the deployment of funds to generate income from operations. A lower ratio characterizes a profitable banking system. The increase in competition in the Indian banking sector has exerted downward pressure on the spread and thereby helped banks improve their level of productivity over the period. This relationship between productivity and NIM becomes essential from a macroeconomic perspective, given that a decline in the level of productivity is a precursor to slowing economic growth and increased pressure on banks. ROA gives a signal of the level of profit generated per unit of assets by the banks in India, and it has been assumed that a higher value of this ratio indicates higher profitability and, consequently, higher productivity. More specifically, the profit earned by the banks at the individual level, under different ownerships and after taxes, means that an improvement in the ratio will help the banks reduce their intermediation cost and thereby increase their level of productivity in a cost-effective way. Thus, these financial indicators motivate estimating their causal association with the level of productivity change across banks in India over the period of study. The indicators used in this study can be written out as follows.
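The paper states these definitions only in words; formalizing them with the standard textbook forms consistent with the text above (the exact denominators are an assumption on our part):

\[
\text{Business per employee} = \frac{\text{Deposits} + \text{Credit}}{\text{Number of employees}}, \qquad
\text{Business per branch} = \frac{\text{Deposits} + \text{Credit}}{\text{Number of branches}},
\]

\[
\frac{\text{NIM}}{\text{TA}} = \frac{\text{Total interest earned} - \text{Total interest expended}}{\text{Total assets}}, \qquad
\text{ROA} = \frac{\text{Net profit after tax}}{\text{Total assets}}.
\]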
2. PANEL CAUSALITY TEST

To test the causal relationship between the performance indicators and the TFP score, pair-wise Dumitrescu-Hurlin panel causality test statistics have been estimated after checking for unit roots. This approach was initiated by the study of Dumitrescu and Hurlin, and it assumes all coefficients to be different across cross-sections. The test statistics can easily be computed by simply running the standard Granger causality regressions, introduced by Granger, for each cross-section individually. In the panel data setting, the commonly used least squares regression can take a number of different forms, depending upon the assumptions made about the structure of the panel data; a representative bivariate form is

\[
y_{i,t} = \alpha_i + \sum_{k=1}^{K} \gamma_i^{(k)}\, y_{i,t-k} + \sum_{k=1}^{K} \beta_i^{(k)}\, x_{i,t-k} + \varepsilon_{i,t},
\]

where t = 1, 2, ..., T is the time period dimension of the panel and i = 1, 2, ..., N is the cross-sectional dimension. As stated earlier, there are alternative approaches to running causality tests in panel data models. Therefore, in the present study, the approach proposed by Hurlin and Venet (2011), Hurlin (2014a), and Hurlin (2014b), which treats the autoregressive coefficients and regression coefficient slopes as constant over time, has been adopted.
The different forms of the panel causality test differ in the assumptions made about the homogeneity of the coefficients across cross-sections. The first is to treat the panel data as one large stacked set of data and then perform the Granger causality test in the standard way, with the exception of not allowing data from one cross-section to enter the lagged values of data from the next cross-section. This method assumes that all coefficients are the same across all cross-sections. A second approach, adopted by Dumitrescu and Hurlin (2012), takes the extreme opposite position, assuming all coefficients to be different across cross-sections. The test is calculated by simply running standard Granger causality regressions for each cross-section individually. The next step is to take the average of the individual test statistics, which is termed the Wbar statistic. The standardized version of this statistic, appropriately weighted in unbalanced panels, follows a standard normal distribution and is termed the Zbar statistic, as formalized below. The pairwise Dumitrescu-Hurlin panel causality tests may indicate which of the hypotheses are generally consistent or inconsistent with the data.
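The averaging and standardization just described take the following standard form in the Dumitrescu-Hurlin framework (reconstructed from the published test, not reproduced from this paper). With \(W_{i,T}\) denoting the individual Wald statistic for the null that the K lags of x do not cause y in cross-section i,

\[
\bar{W} = \frac{1}{N}\sum_{i=1}^{N} W_{i,T}, \qquad
\bar{Z} = \sqrt{\frac{N}{2K}}\,\bigl(\bar{W} - K\bigr) \;\xrightarrow{d}\; \mathcal{N}(0,1) \quad \text{as } T \to \infty.
\]

Under the null of homogeneous non-causality, the Zbar statistic is compared against standard normal critical values.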
3. PANEL GRANGER CAUSALITY TEST

The majority of the studies consulted to identify the relationships among financial variables have focused on the capital market indicators of different countries. In the banking efficiency literature, the relationship between the efficiency of banks and management quality, loan quality, bank capital, and competition has been investigated. The present study is an endeavour to re-establish the relationship between the TFP score and the financial indicators, following Holtz-Eakin; a sketch of the per-cross-section testing procedure appears below.
4.CROSS-SECTION DEPENDENCY TEST
There are different sets of cross-section dependence tests for testing the null hypothesis of zero dependence across the panel decision-making units. These tests are applicable to stationary and unit-root dynamic heterogeneous panels with structural breaks and are designed for small T (time periods) and a large sample (N) across cross-sections. The tests include the LM test, the CD test statistic, Friedman's test and Frees' test. Among these, Friedman's test statistic, a non-parametric test based on Spearman's rank correlation coefficient, has been used to estimate the cross-sectional dependence for the estimates in the present study. Friedman's test statistic, based on the average Spearman's correlation, is given as:
FR = (T − 1)[(N − 1)Rave + 1], where Rave = {2 / [N(N − 1)]} Σi<j r̂ij is the average of the pairwise Spearman rank correlation estimates of the residuals. A large value of Rave indicates the presence of non-zero cross-sectional correlations. Friedman's test statistic follows an asymptotic χ2 distribution with T − 1 degrees of freedom for fixed T as N grows large.
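As a sketch of how this statistic can be computed, the snippet below applies the formula above to a T × N matrix of estimated residuals (rows indexed by time, columns by bank). The matrix layout and the function name friedman_cd are illustrative assumptions; it also assumes N > 2 so that scipy's spearmanr returns a full correlation matrix.

# Friedman's (1937) cross-section dependence statistic:
# FR = (T - 1) * [(N - 1) * Rave + 1], asymptotically chi-square with T - 1 df.
import numpy as np
from scipy import stats

def friedman_cd(residuals):
    """residuals: (T, N) array of estimated residuals, one column per bank."""
    T, N = residuals.shape
    rho, _ = stats.spearmanr(residuals)          # N x N Spearman correlation matrix
    iu = np.triu_indices(N, k=1)                 # indices of the distinct pairs i < j
    r_ave = 2.0 / (N * (N - 1)) * rho[iu].sum()  # average pairwise rank correlation
    fr = (T - 1) * ((N - 1) * r_ave + 1)
    pval = 1 - stats.chi2.cdf(fr, df=T - 1)      # asymptotic chi-square, T - 1 df
    return r_ave, fr, pval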
5.PANEL UNIT ROOT TEST
In order to check the stationarity of the data set, the present study uses panel unit root tests rather than the simple Augmented Dickey-Fuller (ADF) test statistic. Panel unit root tests are similar, but not identical, to unit root tests carried out on time series data. For testing a unit root in panel data, two assumptions can be made: either the persistence parameters are common across cross-sections (ρi = ρ for all i, where the ρi are the autoregressive coefficients and i = 1, 2, …, N indexes the cross-section units or series), or the ρi vary freely across cross-sections. The present study therefore uses individual panel unit root tests, namely Im, Pesaran and Shin (IPS), Fisher-ADF and Fisher-Phillips-Perron (PP), rather than a common unit root test such as the Levin, Lin and Chu (LLC) test statistic. The common unit root assumption means that the tests are estimated assuming a common autoregressive structure for all of the series incorporated in the panel, whereas the individual unit root process allows for different autoregressive coefficients in each series involved in the panel. IPS begins by specifying a separate ADF regression for each cross-section:
Δyit = αi yi,t−1 + Σj=1,…,pi βij Δyi,t−j + εit,  i = 1, 2, …, N; t = 1, 2, …, T.
The null hypothesis for this equation can be written as
H0: αi = 0, for all i,
whereas the alternative hypothesis can be written as
H1: αi < 0, for at least one i.
After estimating the separate ADF regressions, the average of the t-statistics for αi from the individual ADF regressions is adjusted to obtain the desired test statistic. Since the data set covers banks at the individual level over the period of time, effects on the operations and other activities arise for each bank individually, and it need not be the case that banks in one cross-section affect banks in the other cross-sections over the period. Hence, the appropriate unit root test model for the present study is the individual test statistic. In addition, the IPS test is applied at the individual level because selecting an individual test type gives better control over the computational method and provides additional detail on the test results. Another important consideration concerns the lag values. Hence, for the group or pool unit root tests, automatic lag selection has been incorporated, based on an information criterion for the number of lag-difference terms and the Andrews or Newey-West method for bandwidth selection. The null hypothesis for the IPS, ADF and PP tests in the present study is that the data series of the various determinants, namely business per branch, business per employee, the ratio of net interest margin to total assets, profit per employee, profit, return on assets and the dTFP score, have a unit root.
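As an illustration of the first step of the IPS procedure, the sketch below runs a separate ADF regression for each bank's series via statsmodels and averages the individual t-statistics into the t-bar statistic. Converting t-bar into the standardized IPS W-statistic requires the simulated moments E[t] and Var[t] tabulated by Im, Pesaran and Shin, which are omitted here; the dictionary layout and function name are illustrative assumptions.

# IPS-style t-bar: separate ADF regression per bank, then average the t-stats.
import numpy as np
from statsmodels.tsa.stattools import adfuller

def ips_tbar(series_by_bank, max_lags=3):
    """series_by_bank: dict {bank: 1-D array of one determinant, e.g. ROA}."""
    t_stats = {}
    for bank, y in series_by_bank.items():
        # adfuller picks the number of lag-difference terms by AIC, mirroring
        # the automatic lag selection described above; element 0 is the t-stat.
        t_stats[bank] = adfuller(y, maxlag=max_lags, autolag='AIC')[0]
    tbar = float(np.mean(list(t_stats.values())))  # average of individual t-stats
    return tbar, t_stats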
Figure 1.1: Productivity-Profitability Matrix for Public Sector Banks
Figure 1.2: Productivity-Profitability Matrix for Private Sector Banks
Figure 1.3: Productivity-Profitability Matrix for Foreign Sector Banks
In order to determine whether there is a causal relationship between productivity change and the banking performance indicators, pair-wise Dumitrescu-Hurlin panel causality tests have been employed. Before proceeding to the test, however, it is necessary to examine the cross-sectional dependence (CD) and the stationarity of the data used in the present exercise. If the data appear to be non-stationary, the usual asymptotic test statistics for panel causality are invalid. It therefore becomes essential to bring such a data set into stationary form and to establish its level of stationarity before proceeding further.
The cross-section dependence test proposed by Friedman (1937) has been used to test the null hypothesis of zero dependence across the panel decision-making units. It is particularly required where T (the time span) is small and N (the cross-sectional sample size) is large. It should further be kept in mind that tests for cross-sectional dependence can fail to reject the null hypothesis in the presence of dynamic panel data with zero mean in the cross-sectional dimension. Friedman's test, a non-parametric test based on Spearman's rank correlation coefficient, has been used to estimate cross-sectional dependence. The results in Table 1.1 suggest that the hypothesis of cross-section dependence is rejected by the Friedman test statistic: for the bank aggregate there is no cross-sectional dependence, so a shock to one bank in a cross-section cannot be transmitted to another. The results thus reveal that the χ2 statistic is not statistically significant, which leads to acceptance of the null hypothesis of zero dependence. Hence, there does not appear to be any cross-sectional dependence in the panel data across the indicators used in the proposed model.
Further, to check the stationarity of the data set, the present analysis uses panel unit root tests rather than the simple ADF test statistic. The analysis includes the individual panel unit root tests of Im, Pesaran and Shin, Fisher-ADF and Fisher-PP. The common unit root assumption implies that the tests are estimated assuming a common autoregressive structure for all of the series incorporated in the panel, while the individual unit root process allows for different autoregressive coefficients in each series involved in the panel. Since the data set covers banks at the individual level over the period, effects on the operations and other activities of banks arise individually, and it need not be the case that banks in a single cross-section affect the banks in the others over the period. Hence, the suitable unit root test model for the present analysis is the individual test statistic. Likewise, the Im, Pesaran and Shin test is applied at the individual level because selecting an individual test type permits better control over the computational technique and gives additional detail on the test outcomes. Another important consideration concerns the lag values. For the group or pool unit root tests, automatic lag selection has been used, based on an information criterion for the number of lag-difference terms and the Andrews or Newey-West method for bandwidth selection. Accordingly, the lag values used in the statistics were based on default values.
Table 1.1: Cross-Sectional Dependence Test for Panel Data
Note: Friedman's test statistic showed an asymptotic χ2 distribution with T − 1 degrees of freedom. Source: Authors' estimations.
The null hypothesis of the panel unit root test is that the variables involved in the statistics have a unit root. The test statistics for each of the variables in the sample are shown in Table 1.2. The test results confirm stationarity for the whole data set, thereby rejecting the null hypothesis of a unit root. In other words, the Im, Pesaran and Shin W-statistic and the ADF-Fisher and PP-Fisher chi-square statistics, being below the critical values at the 1, 5 and 10 percent significance levels, confirm the rejection of the null hypothesis and hence the absence of a unit root. The results indicate that all the variables incorporated in the model, namely BUS/BRANCH, BUS/EMP, dTFP, NIM/TA, PROFIT and ROA, are stationary at level and can therefore be used for further validation of the relationships. The results further confirm that there is no significant trend in their time movement either. The probabilities for the Fisher tests are computed using an asymptotic chi-square distribution, and all other tests assume asymptotic normality.
Now, having examined the unit roots, the analysis proceeds to the estimates concerning the causality test. To test the causal relationship between dTFP and the various profitability indicators, pair-wise Dumitrescu-Hurlin panel causality test statistics have been estimated. The approach, initiated by the study of Dumitrescu-Hurlin (2012), allows all coefficients to differ across cross-sections. The test statistic can easily be computed by simply running standard Granger causality regressions, as introduced in Granger (1969), for each cross-section individually. In the panel data setting, the commonly used least squares regressions can take a number of different forms, depending on the assumptions made about the structure of the panel data. Since Granger causality is computed by running bi-variate regressions, there are a number of different approaches to testing for Granger causality in a panel setting.
The different forms of the panel causality test differ in the assumptions made about the homogeneity of the coefficients across cross-sections, and the two approaches are again highlighted here. The first approach treats the data as one large stacked data set and performs the Granger causality test while not letting data from one cross-section enter the lagged values of data from the next cross-section; it thus assumes all coefficients are the same across every cross-section. The second approach, adopted by Dumitrescu-Hurlin (2012), makes the opposite extreme assumption, allowing all coefficients to differ across cross-sections. The present analysis uses the second approach to conduct the causality test, since this test is computed by simply running standard Granger causality regressions for each cross-section separately.
To confirm the statistical validity of the analysis, the average of the individual test statistics, termed the Wbar statistic, must be considered. The standardized version of this statistic, appropriately weighted in unbalanced panels, follows a standard normal distribution and is measured by the Zbar statistic. To check the robustness of the results while estimating the causal relationship among the variables, the present analysis estimated the values at a one-period lag, which is arbitrary in practice. From Table 1.3 it can be concluded that, for the pair-wise Dumitrescu-Hurlin test, the null hypotheses that business per branch does not homogeneously cause dTFP and that dTFP does not homogeneously cause business per branch; that business per employee does not homogeneously cause dTFP and that dTFP does not homogeneously cause business per employee; that NIM/TA does not homogeneously cause dTFP and that dTFP does not homogeneously cause NIM/TA; that profit does not homogeneously cause dTFP and that dTFP does not homogeneously cause profit; and that ROA does not homogeneously cause dTFP and that dTFP does not homogeneously cause ROA are rejected in all of the cases.
To make the analysis more robust, the estimates have also been calculated at higher lags, and the results reveal a statistically significant, bi-directional relationship between these financial indicators at the 1 percent level of significance, except for ROA and dTFP, which display a significant bi-directional relationship at the 5 percent level. The bi-directional relationship between the TFP score and the various indicators of bank performance is thus quite evident.
The results suggest that higher productivity indicates a strong economy and induces feel-good sentiment among the scheduled commercial banks in India that provide different kinds of services to their customers. Further, the results in Table 1.3 show positive coefficients for all the profitability indicators, thereby indicating that dTFP scores positively cause the productivity indicators and vice versa. The significance of the statistics at the first and second lags of profitability growth seems to suggest that the change in the level of productivity is significantly affected by the prior years' profitability scores and by the indicators of productivity growth. The results in Table 1.3 suggest that the overall development and growth of the banking sector leads to improvement in the overall business and profitability of banks. Conversely, profitability, business per employee and improvement in the net interest margin lead to improvement in the level of productivity change and its respective components for the banking sector in India. Moreover, the bi-directional relationship of business per employee, business per branch and NIM/TA with dTFP depicts a statistically strong relationship, as estimated from the given statistics.
Table 1.2: Summary Statistics of Panel Unit Root Test
Note: Probabilities for the Fisher tests are computed using an asymptotic chi-square distribution; all other tests assume asymptotic normality. Automatic lag length selection based on SIC: 0 to 3. Newey-West automatic bandwidth selection with Bartlett kernel.
A careful comparison of the obtained panel Granger causality results with the W and Zbar statistics allows the present study to infer that the causality running from the indicators to total factor productivity, and vice versa, is clearly positive, highlighting the sound influence of the respective indicators on total factor productivity change in the banking sector of India. Overall, the results suggest that the causality running from the profitability indicators to the dTFP scores is relatively strong. The results therefore imply that there is a need to accelerate the productivity associated with the major indicators of the banking industry in India in order to overcome the regress faced by the industry over the period. To conclude the discussion, the study reveals that productivity indicators such as business per branch, business per employee, profit and return on assets depend on the level of profitability during the period of study, as the statistics appear to be on the higher side.
Table 1.3: Pair-Wise Dumitrescu-Hurlin Panel Causality Test

Null Hypothesis                                        Lags: 1                        Lags: 2
                                                      W-Stat   Zbar-Stat   Prob     W-Stat   Zbar-Stat   Prob
Business/Branch does not homogeneously cause dTFP      1.831      2.563   0.010      3.442      2.224   0.026
dTFP does not homogeneously cause Business/Branch     55.995    216.340   0.000     56.583    127.616   0.000
Bus/Employee does not homogeneously cause dTFP         3.546      9.331   0.000      5.536      7.164   0.000
dTFP does not homogeneously cause Bus/Employee        78.391    304.681   0.000     73.502    167.540   0.000
NIM/TA does not homogeneously cause dTFP               2.977      7.084   0.000      5.528      7.146   0.000
dTFP does not homogeneously cause NIM/TA             162.249    635.630   0.000    208.453    485.978   0.000
Profit does not homogeneously cause dTFP               1.929      2.950   0.003      3.102      2.420   0.021
dTFP does not homogeneously cause Profit               6.517     21.054   0.000      7.774     12.444   0.000
ROA does not homogeneously cause dTFP                  2.568      2.525   0.072      3.371      2.057   0.039
dTFP does not homogeneously cause ROA                199.302    781.822   0.000      6.221      8.780   0.000
The following hypotheses have been used in the present study to empirically examine the relationships; a sketch of how these pairwise tests can be organised in code is given after the list:
H0: Business per Branch does not homogeneously cause dTFP
H0: dTFP does not homogeneously cause Business per Branch
H0: Business per Employee does not homogeneously cause dTFP
H0: dTFP does not homogeneously cause Business per Employee
H0: NIM/TA does not homogeneously cause dTFP
H0: dTFP does not homogeneously cause NIM/TA
H0: Profit per Employee does not homogeneously cause dTFP
H0: dTFP does not homogeneously cause Profit per Employee
H0: ROA does not homogeneously cause dTFP
H0: dTFP does not homogeneously cause ROA
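A hypothetical driver in the spirit of Table 1.3 is sketched below: each hypothesis is tested in both directions at lags 1 and 2 by reusing the dh_causality sketch given earlier. The synthetic random data merely stand in for the bank panel, which is not reproduced here, so the printed numbers are not those of Table 1.3.

# Hypothetical driver: test each null hypothesis of Table 1.3 in both
# directions at lags 1 and 2, reusing the dh_causality sketch defined earlier.
import numpy as np

rng = np.random.default_rng(42)
banks, T = [f"bank{i}" for i in range(12)], 24
for name in ["Business/Branch", "Bus/Employee", "NIM/TA", "Profit", "ROA"]:
    # Synthetic stand-in: column 0 = dTFP, column 1 = the indicator.
    data = {b: rng.normal(size=(T, 2)) for b in banks}
    for lag in (1, 2):
        pairs = [(f"{name} -x-> dTFP", {b: d for b, d in data.items()}),
                 # Reverse direction: swap the columns so dTFP is the cause.
                 (f"dTFP -x-> {name}", {b: d[:, ::-1] for b, d in data.items()})]
        for label, panel in pairs:
            wbar, zbar, p = dh_causality(panel, lags=lag)
            print(f"{label:26s} lags={lag}: Wbar={wbar:8.3f} Zbar={zbar:8.3f} p={p:.3f}")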
6.CONCLUSION
The positive relationship between the productivity score and the various indicators of bank performance is quite evident. The results imply that higher productivity indicates a sound economy and induces feel-good sentiment among the commercial banks in India that provide different kinds of services to their customers. The positive and bi-directional relationship between the productivity score and the performance indicators, confirmed by the Dumitrescu-Hurlin panel causality test, indicates a strong economy and induces a feeling of soundness among commercial banks in India while they provide services to their customers. The banks operating near the unrestricted frontier are not operating at the most productive scale size (MPSS) and therefore need to address their scale size in order to operate at the optimal scale of production. Consequently, on average, dRISE indicates that, holding the input and output mix fixed and allowing the level to vary, banks are operating below the MPSS even after changing their scale of operation. Finally, the banks in India need to concentrate on improving their scale size and overcoming their significant scale inefficiencies so as to operate at the optimal efficient frontier.