This document summarizes an approach to instruction-level parallelism using prediction by partial matching (PPM) branch prediction. It proposes a hybrid PPM-based branch predictor that uses both local and global branch histories, with the two predictions combined by a neural network. Key aspects of the implementation include:
1. Local- and global-history PPM predictors whose predictions are combined with a neural network (a minimal PPM sketch follows this list).
2. Enhancements to the basic PPM approach, such as program-counter tagging, efficient history encoding using run-length encoding, pattern-bias tracking, and dynamic pattern-length selection.
3. Details of the global-history PPM predictor, including the use of tables and linked lists to store patterns of different lengths and to handle collisions.
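To make the PPM mechanism concrete, here is a minimal, illustrative sketch of a single PPM predictor: it predicts a branch from the longest matching suffix of the outcome history and backs off to shorter contexts. The table layout, maximum order, and update rule are simplifying assumptions on my part, not the paper's implementation.

```python
from collections import defaultdict

class PPMPredictor:
    """Toy PPM branch predictor: predict from the longest matching
    history context, backing off to shorter contexts (order m .. 1)."""

    def __init__(self, max_order=8):
        self.max_order = max_order
        self.history = []                          # recent outcomes, True = taken
        self.counts = defaultdict(lambda: [0, 0])  # ctx -> [not_taken, taken]

    def predict(self):
        # Longest context first; back off when a context has no statistics.
        for order in range(min(self.max_order, len(self.history)), 0, -1):
            ctx = tuple(self.history[-order:])
            nt, t = self.counts[ctx]
            if nt + t > 0:
                return t >= nt
        return True                                # cold start: predict taken

    def update(self, taken):
        # Credit every context length, then age the history window.
        for order in range(1, min(self.max_order, len(self.history)) + 1):
            ctx = tuple(self.history[-order:])
            self.counts[ctx][1 if taken else 0] += 1
        self.history = (self.history + [taken])[-self.max_order:]

p = PPMPredictor()
hits = 0
pattern = [True, True, False] * 40                 # a repeating branch behavior
for outcome in pattern:
    hits += p.predict() == outcome
    p.update(outcome)
print(f"accuracy on repeating pattern: {hits / len(pattern):.2f}")
```

A local-history variant keys the same structure on per-branch histories; the hybrid described above would feed both predictors' outputs to a small neural combiner.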
A FLOATING POINT DIVISION UNIT BASED ON TAYLOR-SERIES EXPANSION ALGORITHM AND... (csandit)
Floating-point division, though an infrequent operation in the traditional sense, is indispensable in a range of non-traditional applications such as K-Means clustering and QR decomposition, to name a few. In such applications, hardware support for floating-point division would boost the performance of the entire system. In this paper, we present a novel architecture for a floating-point division unit based on the Taylor-series expansion algorithm. We show that the Iterative Logarithmic Multiplier is very well suited for use as part of this architecture. We propose an implementation of the powering unit that can calculate an odd power and an even power of a number simultaneously, while adding little hardware overhead compared to the Iterative Logarithmic Multiplier.
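As a rough illustration of the Taylor-series division technique in general (not this paper's specific unit), the divisor can be split as Y = Y_h + Y_l, with 1/Y_h taken from a small table and the quotient refined by the truncated expansion 1/(1+q) ≈ 1 - q + q² - q³ for q = Y_l/Y_h; the even and odd powers of q are exactly what a dual powering unit could evaluate in parallel. A hedged numerical sketch:

```python
def taylor_divide(x, y, terms=4):
    """Approximate x / y with a truncated Taylor expansion of 1/y.

    Split y = y_h + y_l, where y_h plays the role of a truncated
    mantissa whose reciprocal a lookup table would supply. Then
    1/y = (1/y_h) * (1 - q + q**2 - q**3 + ...), with q = y_l / y_h.
    """
    y_h = round(y, 2)            # software stand-in for the table lookup
    y_l = y - y_h
    q = y_l / y_h
    recip = sum((-q) ** k for k in range(terms)) / y_h
    return x * recip

approx = taylor_divide(1.0, 1.2345)
print(approx, 1.0 / 1.2345)      # agreement improves with more terms
```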
CORRELATION OF EIGENVECTOR CENTRALITY TO OTHER CENTRALITY MEASURES: RANDOM, S... (csandit)
In this paper, we thoroughly investigate correlations of eigenvector centrality to five centrality measures: degree centrality, betweenness centrality, clustering coefficient centrality, closeness centrality, and farness centrality, across various types of network (random, small-world, and real-world). For each network, we compute those six centrality measures, from which the correlation coefficient is determined. Our analysis suggests that degree centrality and eigenvector centrality are highly correlated, regardless of the type of network. Furthermore, eigenvector centrality also correlates highly with betweenness on random and real-world networks; however, the relationship is inconsistent on small-world networks, probably owing to their power-law distribution. Finally, it is also revealed that eigenvector centrality is distinct from clustering coefficient centrality, closeness centrality, and farness centrality in all tested cases. The findings in this paper could lead to further correlation analysis of multiple centrality measures in the near future.
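The experiment is easy to reproduce in miniature with networkx and numpy: the sketch below computes the six measures on a random graph and the Pearson correlation of each against eigenvector centrality. Treating farness as each node's total shortest-path distance is my assumption of the intended definition.

```python
import networkx as nx
import numpy as np

G = nx.erdos_renyi_graph(n=200, p=0.05, seed=42)   # random network

measures = {
    "degree":      nx.degree_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
    "clustering":  nx.clustering(G),
    "closeness":   nx.closeness_centrality(G),
    # farness: total distance to all reachable nodes (assumed definition)
    "farness":     {v: sum(nx.single_source_shortest_path_length(G, v).values())
                    for v in G},
    "eigenvector": nx.eigenvector_centrality(G, max_iter=1000),
}

eig = np.array([measures["eigenvector"][v] for v in G])
for name in ("degree", "betweenness", "clustering", "closeness", "farness"):
    vec = np.array([measures[name][v] for v in G])
    r = np.corrcoef(eig, vec)[0, 1]                # Pearson correlation
    print(f"eigenvector vs {name:12s} r = {r:+.3f}")
```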
Results of Fitted Neural Network Models on Malaysian Aggregate Dataset (journalBEEI)
This results-oriented paper presents the best results of the fitted BPNN-NAR and BPNN-NARMA models on the MCCI Aggregate dataset with respect to different error measures. It discusses the performance of the fitted forecasting models by each set of input lags and error lags used, by the different numbers of hidden nodes used, and by the combination of both; the consistency of the error measures across the fitted models; and the overall best fitted forecasting models for the Malaysian aggregate cost indices dataset.
An improved graph drawing algorithm for email networks (Zakaria Boulouard)
This document proposes an improved graph drawing algorithm for email networks. It first formulates the graph drawing problem as a minimization problem to optimize for aesthetic criteria like evenly distributing vertices and minimizing edge lengths. It then describes a genetic algorithm approach to solve this optimization problem. Specifically, it improves the algorithm by taking into account the small-world properties of email networks, like placing highly connected vertices in the center and ignoring long-range repulsive forces. The results show this approach draws graphs in a more intuitive and aesthetic way while also improving runtime over traditional force-directed algorithms.
2014_IJCCC_Handwritten Documents Text Line Segmentation based on Information ... (Mihai Tanase)
This document summarizes a research paper on segmenting text lines in handwritten documents. It proposes using an "information energy" approach, where each pixel is assigned an energy value based on its information content. Pixels with high variation, like those in text, have high energy, while uniform pixels like spaces between lines have low energy. An algorithm computes the energy map while accounting for varying text direction. It then identifies low-energy pixels separating lines. The method is tested on 500 documents, showing good results. Future work could evaluate the algorithm on larger datasets and optimize its computational complexity.
Brown’s Weighted Exponential Moving Average Implementation in Forex Forecasting (TELKOMNIKA JOURNAL)
In 2016, a time series forecasting technique was introduced that combines the weighting-factor calculation found in the weighted moving average with Brown’s double exponential smoothing procedure. The technique, known as Brown’s weighted exponential moving average (B-WEMA), is a new variant of the double exponential smoothing method that applies the exponential filter twice. In this research, we implement the new method to forecast foreign exchange (forex) data, including the EUR/USD, AUD/USD, GBP/USD, USD/JPY, and EUR/JPY pairs. The forecasting results using B-WEMA are then compared with other conventional and hybrid moving average methods, such as the weighted moving average (WMA), exponential moving average (EMA), and Brown’s double exponential smoothing (B-DES). The comparison shows that B-WEMA has a better accuracy level than the other forecasting methods used in this research.
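A hedged sketch of how such a method could work: for a period of n, the largest weighted-moving-average weight, n / (n(n+1)/2), reduces to 2/(n+1), which can serve as the smoothing factor of an exponential filter applied twice, Brown-style, with Brown's usual level and trend estimates. This reading of B-WEMA is my own reconstruction, not the authors' published code.

```python
def b_wema(series, n=5, horizon=1):
    """Brown-style double smoothing with a WMA-derived factor (sketch).

    alpha = n / (n*(n+1)/2) = 2/(n+1): the largest WMA weight,
    used here as the exponential smoothing constant (assumed reading).
    """
    alpha = 2.0 / (n + 1)
    s1 = s2 = series[0]
    for x in series[1:]:
        s1 = alpha * x + (1 - alpha) * s1      # first exponential pass
        s2 = alpha * s1 + (1 - alpha) * s2     # second pass over s1
    level = 2 * s1 - s2                        # Brown's level estimate
    trend = alpha / (1 - alpha) * (s1 - s2)    # Brown's trend estimate
    return level + trend * horizon

rates = [1.071, 1.073, 1.070, 1.076, 1.080, 1.083, 1.081, 1.085]
print("next-step forecast:", round(b_wema(rates), 4))
```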
A Novel Low Complexity Histogram Algorithm for High Performance Image Process... (IRJET Journal)
This document proposes four new low complexity histogram generation algorithms to reduce the implementation complexity of histogram generators used in image processing applications. The existing histogram generator architectures require a large number of registers and counters due to 256 possible pixel intensity levels. The proposed algorithms reduce this complexity by approximating pixel values that differ by 1-4 levels to a single value. This is done by truncating 1-4 least significant bits of pixel values. Simulation results show the proposed algorithms generate identical histograms while significantly reducing the number of required registers and counters, lowering implementation complexity, power consumption, and delay compared to conventional designs.
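The bit-truncation idea is easy to demonstrate in software: dropping k least-significant bits maps 2^k neighbouring intensity levels onto one bin, so the 256-level histogram needs only 256/2^k counters, which is where the hardware saving comes from. A small numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # toy 8-bit image

def truncated_histogram(image, k):
    """Histogram after dropping k least-significant bits per pixel:
    pixel values that differ only in those bits share one bin."""
    bins = 256 >> k                      # 128, 64, 32, 16 for k = 1..4
    return np.bincount((image >> k).ravel(), minlength=bins)

for k in range(1, 5):
    h = truncated_histogram(img, k)
    print(f"k={k}: {h.size} bins, total pixels = {h.sum()}")
```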
IRJET- Evidence Chain for Missing Data Imputation: Survey (IRJET Journal)
This document summarizes research on techniques for imputing missing data. It begins with an abstract describing a new method, the Missing value Imputation Algorithm based on an Evidence Chain (MIAEC), which provides accurate imputation at increasing rates of missing data by building a chain of evidence. It then reviews several existing imputation techniques, including mean, regression, likelihood-based, and nearest-neighbor approaches. The document proposes extending MIAEC with MapReduce for large-scale data processing. Key advantages of MIAEC include utilizing all relevant evidence to estimate missing values and the ability to process large datasets.
This document summarizes a research paper that analyzes code performance and system simulation for high-speed communication links. It classifies error mechanisms in these systems into three regimes: large-noise, worst-case dominant, and large-set dominant. It develops new methods for simulating coded high-speed links tailored to each regime. These include a more accurate framework for classifying codes, efficient algorithms for computing error probabilities, and guidelines for biasing simulation parameters to capture behavior at low error rates.
IRJET-An Effective Strategy for Defense & Medical Pictures Security by Singul... (IRJET Journal)
The document proposes an effective strategy for encrypting medical and defense images using singular value decomposition (SVD) and fractional Fourier transform (FrFT). It discusses generating shares of an image by applying FrFT to the SVD components, specifically the S matrix of singular values. Multiple shares are created using a sharing matrix. The strategy aims to securely transmit images over unreliable networks. Quantitative analysis shows the approach encrypts images quickly, with high sensitivity to changes and resistance to differential attacks, making it effective for encrypting sensitive images.
This document presents a method for detecting Devnagari text in scene images. The method uses two main characteristics of Devnagari text: uniform stroke width and the presence of a headline with vertical strokes below it. Candidate text regions are identified using distance transforms to verify uniform stroke width. A probabilistic Hough transform is then used to detect horizontal lines in each region, which are analyzed to identify headlines indicating Devnagari text. The method was tested on 10,000 images and achieved a precision of 0.7994 and recall of 0.778 for Devnagari text detection, representing an improvement over previous work. Some limitations are noted and future work is proposed to address them through machine learning approaches.
IRJET- An Improved 2LSB Steganography Technique for Data Transmission (IRJET Journal)
This document proposes an improved 2LSB steganography technique for hiding data in digital images. The technique embeds message bits randomly into the least significant bit planes of pixels in an RGB image. It uses the two least significant bits of the red channel to indicate whether even or odd parity bits of the message will be embedded in the green and blue channels. The random embedding of bits and parity checks makes the hidden message difficult to detect. Experimental results show the technique can hide message bits in 65-74% of pixels while maintaining good image quality with PSNR values over 30dB. The technique aims to provide higher data hiding capacity and security compared to standard 2LSB steganography.
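For reference, here is a sketch of plain 2LSB embedding, the baseline that the paper improves on with parity checks and random placement: two message bits overwrite the two least-significant bits of each channel byte, changing a pixel value by at most 3.

```python
def embed_2lsb(pixels, message_bits):
    """Write two message bits into the 2 LSBs of each byte (sketch)."""
    out, i = [], 0
    for byte in pixels:
        if i + 2 <= len(message_bits):
            two = (message_bits[i] << 1) | message_bits[i + 1]
            out.append((byte & 0b11111100) | two)
            i += 2
        else:
            out.append(byte)
    return out

def extract_2lsb(pixels, n_bits):
    bits = []
    for byte in pixels:
        bits += [(byte >> 1) & 1, byte & 1]
    return bits[:n_bits]

cover = [137, 201, 64, 18, 250, 99]            # toy channel bytes
msg = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_2lsb(cover, msg)
assert extract_2lsb(stego, len(msg)) == msg
print(cover, "->", stego)
```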
An Efficient Algorithm to Calculate The Connectivity of Hyper-Rings Distribut... (ijitcs)
The aim of this paper is to develop a software module to test the connectivity of various odd-sized HRs and to answer an open question: whether the node connectivity of an odd-sized HR is equal to its degree. We attempt to answer this question by explicitly testing the node connectivities of various odd-sized HRs. In this paper, we also study the properties, constructions, and connectivity of hyper-rings. We usually use a graph to represent the architecture of an interconnection network, where nodes represent processors and edges represent communication links between pairs of processors. Although the number of edges in a hyper-ring is roughly twice that of a hypercube with the same number of nodes, the diameter of the hyper-ring is shorter and its connectivity larger than those of the corresponding hypercube. These properties make hyper-rings desirable interconnection networks. This paper discusses reliability in hyper-rings: one of the major goals in network design is to find the best way to increase the system's reliability, and the reliability of a distributed system depends on the reliabilities of its communication links and computer elements.
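The connectivity test is straightforward to reproduce with networkx, assuming the common hyper-ring definition in which node i of an N-node HR is adjacent to (i ± 2^j) mod N for every power of two below N; the definition is my assumption, while nx.node_connectivity performs the exact computation.

```python
import networkx as nx

def hyper_ring(n):
    """Hyper-ring on n nodes: i is adjacent to (i + 2^j) mod n
    for every 2^j < n (assumed definition)."""
    G = nx.Graph()
    G.add_nodes_from(range(n))
    step = 1
    while step < n:
        G.add_edges_from((i, (i + step) % n) for i in range(n))
        step *= 2
    return G

for n in (5, 7, 9, 11, 13, 15):                # odd-sized HRs
    G = hyper_ring(n)
    degree = min(d for _, d in G.degree())
    kappa = nx.node_connectivity(G)            # exact node connectivity
    print(f"HR({n}): min degree = {degree}, connectivity = {kappa}")
```

If the printed connectivity equals the minimum degree for every odd N tried, that is evidence in favor of the conjecture behind the open question.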
IRJET- Study of Topological Analogies of Perfect Difference Network and Compl... (IRJET Journal)
This document analyzes the topological properties of complete graphs and perfect difference networks (PDNs). It shows that a complete graph can be derived from a system of perfect difference sets (PDSs) by taking the union of the PDN and the graph of missing links. Specifically:
1) The total number of missing links in a complete graph derived from a PDS system of order n is (n² - n)/2.
2) The total number of links in a complete graph is equal to the total number of links in the corresponding PDN plus the missing links.
3) A complete graph can be constructed by combining the PDN derived from a PDS with the graph of the missing links.
Centrality Prediction in Mobile Social Networks (IJERA Editor)
By analyzing evolving centrality roles using time-dependent graphs, researchers may predict future centrality values. This may prove invaluable in designing efficient routing and energy-saving strategies, and may have profound implications for evolving social behavior in dynamic social networks. In this paper, we propose a new method to predict the centrality values of nodes in a dynamic environment. The proposed method is based on calculating the correlation between current and past centrality measures for each node, which is used to form a composite vector representing the given state of centralities. The performance of the proposed method is evaluated through simulated predictions on data sets from real mobile networks. Results indicate that a significantly low prediction error rate occurs with a suitable implementation of the proposed method.
IRJET- Text based Deep Learning for Stock Prediction (IRJET Journal)
This document presents a proposed system for text-based deep learning stock prediction and interpretation. It discusses using a neural network model to extract relevant predictive factors from news texts and financial tweets that influence stock prices. An interactive visualization interface is explored to effectively communicate the interpreted model predictions to users. The system aims to help with stock market investment and analysis tasks. It evaluates the approach on two case studies predicting stock prices from online news and tweets, finding the proposed neural network architecture outperforms other models.
An enhanced difference pair mapping steganography method to improve embedding... (eSAT Publishing House)
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
TOWARDS REDUCTION OF DATA FLOW IN A DISTRIBUTED NETWORK USING PRINCIPAL COMPO... (cscpconf)
For performing distributed data mining, two approaches are possible: first, data from several sources are copied to a data warehouse and mining algorithms are applied there; secondly, mining can be performed at the local sites and the results aggregated. When the number of features is high, a lot of bandwidth is consumed in transferring datasets to a centralized location, so dimensionality reduction can be done at the local sites. In dimensionality reduction, an encoding is applied to the data to obtain a compressed form. The reduced features obtained at the local sites are then aggregated, and data mining algorithms are applied to them. There are several methods of performing dimensionality reduction; two of the most important are Discrete Wavelet Transforms (DWT) and Principal Component Analysis (PCA). Here, a detailed study is done on how PCA can be useful in reducing data flow across a distributed network.
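A toy sketch of the second approach under stated assumptions: each site projects its local data onto a few principal components and ships only the reduced features to the aggregator. Real distributed PCA must also reconcile the sites' differing components; that step is elided here.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# Two local sites, each with 1000 samples of 50 correlated features.
sites = [rng.standard_normal((1000, 5)) @ rng.standard_normal((5, 50))
         for _ in range(2)]

reduced = []
for data in sites:
    pca = PCA(n_components=5)          # keep 5 components per site
    reduced.append(pca.fit_transform(data))
    kept = pca.explained_variance_ratio_.sum()
    print(f"site sends {reduced[-1].shape}, variance kept: {kept:.1%}")

# The aggregator mines the concatenated low-dimensional features.
combined = np.vstack(reduced)
print("aggregated shape for mining:", combined.shape)   # (2000, 5)
```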
In this project, we leverage centrality models to extract the importance of nodes in a network graph under several chosen topologies. The aim is to scrutinize and analyze the centralities in different network topologies. The three types of centrality used in this project are betweenness, closeness, and eigenvector centrality. We show the results of this comparison in the experimental results and extend them to real-world problems, illustrating the centrality measurements with visualization plots.
Advanced compression technique for random data (ijitjournal)
This paper proposes a technique to compress any data irrespective of its type. Compressing random data in particular has always proved to be a difficult task: with very few patterns or regularities within the data, a point is quickly reached where no more of the data can be represented within a given number of bits. The proposed technique allows compression irrespective of the pattern or logic within the data, and it can be applied again to already-compressed data without any change in the achievable compression ratio. Since data cannot be compressed beyond a limit, the technique instead represents the data as a position in a computationally easy number series that extends to infinity, provided the series has high enough deviation among its digits. Only a few markers linked to the position are saved, rather than the original data; the system then uses these markers to recover the position and derive the data from it. The procedure is, however, computationally intensive and as of now raises questions of data corruption, but with more computing power, efficient algorithms, and proper data-integrity checks it could provide very high compression ratios in the future.
Simulated annealing for location area planning in cellular networks (graphhoc)
LA planning in cellular networks is useful for minimizing location management cost in GSM networks. The size of an LA can be optimized to create a balance between the LA update rate and the expected paging rate within the LA. To obtain optimal results for LA planning in a cellular network, a simulated annealing algorithm is used; it gives optimal results in acceptable run-time.
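A generic simulated-annealing skeleton of the kind an LA planner would plug into; the cost function below is a stand-in (a real model would weigh the LA update rate against the expected paging rate), and the move operator and geometric cooling schedule are illustrative assumptions.

```python
import math
import random

random.seed(7)

def cost(assignment):
    """Stand-in cost: penalize both many LAs (paging load) and many
    boundary crossings (update load). A real model replaces this."""
    n_las = len(set(assignment))
    crossings = sum(a != b for a, b in zip(assignment, assignment[1:]))
    return 3 * n_las + 2 * crossings

cells, n_las = 12, 4
state = [random.randrange(n_las) for _ in range(cells)]
best, temp = list(state), 10.0

while temp > 0.01:
    cand = list(state)
    cand[random.randrange(cells)] = random.randrange(n_las)  # move one cell
    delta = cost(cand) - cost(state)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        state = cand                     # accept downhill or, rarely, uphill
        if cost(state) < cost(best):
            best = list(state)
    temp *= 0.995                        # geometric cooling

print("best assignment:", best, "cost:", cost(best))
```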
Exploring optimizations for dynamic PageRank algorithm based on GPU : V4 (Subhajit Sahu)
This is my comprehensive viva report, version 4, written while doing research work under Prof. Dip Banerjee and Prof. Kishore Kothapalli.
A graph is a generic data structure and a superset of lists and trees. Binary search on a sorted list can be interpreted as a balanced binary tree search. Database tables can be thought of as indexed lists, and table joins represent relations between columns; this can be modeled with graphs instead. Assignment of registers to variables (by a compiler) and assignment of available channels to radio transmitters are also graph problems. Finding the shortest path between two points and sorting web pages in order of importance are graph problems as well. Neural networks are graphs too, and interactions between messenger molecules in the body or between people on social media can likewise be modeled as graphs.
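For orientation, the static PageRank that dynamic GPU variants incrementally update is a power iteration; a minimal dense sketch (dangling nodes get a uniform jump, teleport factor d = 0.85):

```python
import numpy as np

def pagerank(adj, d=0.85, tol=1e-10):
    """Power-iteration PageRank on a dense adjacency matrix (sketch)."""
    n = adj.shape[0]
    out = adj.sum(axis=1)
    # Column-stochastic transition matrix; dangling rows jump uniformly.
    P = np.where(out[:, None] > 0,
                 adj / np.maximum(out, 1)[:, None], 1.0 / n).T
    r = np.full(n, 1.0 / n)
    while True:
        r_next = d * P @ r + (1 - d) / n
        if np.abs(r_next - r).sum() < tol:
            return r_next
        r = r_next

adj = np.array([[0, 1, 1, 0],
                [0, 0, 1, 0],
                [1, 0, 0, 0],
                [0, 0, 1, 0]], dtype=float)
print(pagerank(adj).round(4))                  # ranks sum to 1
```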
IRJET- Error Reduction in Data Prediction using Least Square Regression Method (IRJET Journal)
This document proposes a modification to the least squares regression method to reduce errors in data prediction. It divides the original data set into three parts, uses the first part to make predictions with least squares regression and fits those predictions to the second part of the data to minimize errors. It then validates the model on the third part of data and compares errors to the original least squares method. The proposed method shows reduced errors in prediction based on mean absolute error, mean relative error and root mean square error metrics in most test ranges of the validation data.
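A sketch of one plausible reading of the procedure: fit least squares on the first part, learn a secondary linear correction that maps the first model's predictions onto the second part, and validate both models on the third part. The exact correction form is not specified in the summary, so the linear re-fit below is an assumption.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 300)
y = 2.5 * x + 4 + rng.normal(0, 1.5, x.size)        # synthetic data

thirds = np.array_split(np.arange(x.size), 3)       # parts 1, 2, 3
(x1, y1), (x2, y2), (x3, y3) = ((x[i], y[i]) for i in thirds)

a, b = np.polyfit(x1, y1, 1)                        # LSR on part 1
# Secondary fit: map part-1 predictions onto part-2 targets (assumed form).
c, d = np.polyfit(a * x2 + b, y2, 1)

def corrected(xs):
    return c * (a * xs + b) + d

for name, pred in [("plain LSR", a * x3 + b), ("corrected", corrected(x3))]:
    mae = np.abs(pred - y3).mean()
    rmse = np.sqrt(((pred - y3) ** 2).mean())
    print(f"{name:10s} MAE = {mae:.3f}, RMSE = {rmse:.3f}")
```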
This document proposes an efficient approach for processing subgraph matching queries with set similarity (SMS2 queries) in large graph databases. The approach uses a "filter-and-refine" framework with offline indexing and online query processing. In the filtering phase, it builds an inverted lattice index of frequent element set patterns and encodes vertices as signatures. It then applies set similarity and structure-based pruning techniques. In the refinement phase, it uses a dominating set-based subgraph matching algorithm to find matching subgraphs guided by a dominating set selection method. Experimental results show the proposed approach outperforms state-of-the-art methods by an order of magnitude.
This document proposes an adaptive steganography technique for hiding secret data in digital images. The technique uses variable bit length embedding based on wavelet coefficients of the cover image. A logistic map is used to generate a secret key, which determines the RGB color planes and blocks used for data hiding. Wavelet coefficients are classified into ranges, and the number of bits hidden corresponds to the coefficient value range. Extraction involves selecting the same coefficients based on the key and calculating the hidden bits. The technique aims to improve security, capacity and imperceptibility compared to existing constant bit length methods. Evaluation shows the proposed method provides good security since variable bits are hidden randomly using the secret key.
Query processing strategies in distributed database (ShreerajKhatiwada)
This document discusses query processing strategies in distributed databases. It describes three steps to query processing: parsing and translation, optimizing the query, and evaluating the query. For distributed databases, query processing has four stages: query decomposition, data localization, global optimization, and local optimization. Distributed query optimization aims to find efficient execution strategies by considering access methods, join criteria, and transmission costs. Different optimization algorithms are used depending on whether minimizing response time or total time is the goal.
Big 5 ASEAN capital markets forecasting using WEMA method (TELKOMNIKA JOURNAL)
ASEAN, through the ASEAN Economic Community (AEC) 2020 treaty, has proposed financial integration via capital-market integration in order to achieve comprehensive ASEAN economic integration. The need for proper prediction of ASEAN capital markets therefore becomes a major issue. In this study, we took the big 5 ASEAN capital markets, i.e. the Straits Times Index (STI), Kuala Lumpur Stock Exchange (KLSE), Stock Exchange of Thailand (SET), Jakarta Stock Exchange (JKSE), and Philippine Stock Exchange (PSE), to be forecasted using the WEMA method. The Weighted Exponential Moving Average (WEMA) is a hybrid moving average method which combines the weighting-factor calculation of the Weighted Moving Average (WMA) with the procedure of the Exponential Moving Average (EMA). WEMA has been successfully implemented for forecasting discrete time series data, but it had never been used to forecast ASEAN capital markets. In this study, we implement the WEMA method with a brute-force approach to scaling-factor tuning on the big 5 ASEAN capital markets. The experimental results show that WEMA successfully forecasts all of these exchanges; by the forecast error measurements, it performs best on the PSE dataset and worst on the SET dataset among the datasets considered in this study.
SVD BASED LATENT SEMANTIC INDEXING WITH USE OF THE GPU COMPUTATIONS (ijscmcj)
The purpose of this article is to determine the usefulness of Graphics Processing Unit (GPU) calculations for implementing the Latent Semantic Indexing (LSI) reduction of the term-by-document matrix. The reduction of the matrix is based on the SVD (Singular Value Decomposition). The high computational complexity of the SVD, O(n³), makes the reduction of a large indexing structure a difficult task. This article compares the time complexity and accuracy of the algorithms implemented in two different environments: the first is associated with the CPU and MATLAB R2011a, and the second with graphics processors and the CULA library. The calculations were carried out on generally available benchmark matrices, which were combined to obtain a resulting matrix of large size. For both environments, computations were performed on double- and single-precision data.
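The LSI reduction itself is compact: keep the k largest singular triplets of the term-by-document matrix, A ≈ U_k Σ_k V_kᵀ, and compare documents in the k-dimensional space. A small numpy sketch (the GPU version would swap this SVD for a CULA routine):

```python
import numpy as np

# Toy term-by-document matrix: rows = terms, columns = documents.
A = np.array([[2, 0, 1, 0],
              [1, 1, 0, 0],
              [0, 2, 0, 1],
              [0, 0, 1, 2],
              [1, 0, 2, 1]], dtype=float)

k = 2                                        # reduced rank
U, s, Vt = np.linalg.svd(A, full_matrices=False)
docs_k = (np.diag(s[:k]) @ Vt[:k]).T         # documents in k-dim LSI space

# Cosine similarity between documents in the reduced space.
norm = np.linalg.norm(docs_k, axis=1, keepdims=True)
sim = (docs_k / norm) @ (docs_k / norm).T
print(np.round(sim, 3))
```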
An Optimized Parallel Algorithm for Longest Common Subsequence Using Openmp –... (IRJET Journal)
This document summarizes research on developing parallel algorithms to speed up solving the longest common subsequence (LCS) problem. LCS is commonly used for sequence comparison in bioinformatics. The traditional sequential dynamic programming algorithm has complexity O(mn) for sequences of lengths m and n. The document reviews parallel algorithms developed using tools like OpenMP and GPU frameworks like CUDA to reduce computation time, and proposes the authors' own optimized parallel algorithm for multi-core CPUs using OpenMP.
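The O(mn) dynamic program that the parallel versions start from is shown below; cells on the same anti-diagonal are mutually independent, which is precisely the parallelism OpenMP and CUDA implementations exploit. A sequential Python sketch:

```python
def lcs_length(a, b):
    """Classic O(mn) LCS dynamic program. dp[i][j] = LCS length of
    a[:i] and b[:j]; each anti-diagonal of dp could be filled in parallel."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

print(lcs_length("GATTACA", "GCATGCU"))   # -> 4
```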
Particle Swarm Optimization in the fine-tuning of Fuzzy Software Cost Estimat... (Waqas Tariq)
Software cost estimation deals with the financial and strategic planning of software projects, and controlling the expensive investment of software development effectively is of paramount importance. The limitation of algorithmic effort-prediction models is their inability to cope with the uncertainties and imprecision surrounding software projects at the early development stage. More recently, attention has turned to a variety of machine learning methods, and soft computing in particular, to predict software development effort. Fuzzy logic is one such technique that can cope with uncertainties. In the present paper, a Particle Swarm Optimization Algorithm (PSOA) is presented to fine-tune the fuzzy estimates for the development of software projects. The efficacy of the developed models is tested on 10 NASA software projects, 18 NASA projects, and the COCOMO 81 database on the basis of various criteria for the assessment of software cost estimation models. Comparing all the models, it is found that the developed models provide better estimates.
Parallel random projection using R high performance computing for planted mot... (TELKOMNIKA JOURNAL)
Motif discovery in DNA sequences is one of the most important issues in bioinformatics, so algorithms for dealing with the problem accurately and quickly have always been a goal of bioinformatics research. This study modifies the random projection algorithm to be implemented on R high-performance computing (i.e., the R package pbdMPI). Several steps are needed to achieve this objective: preprocessing the data, splitting the data according to the number of batches, modifying and implementing random projection in the pbdMPI package, and then aggregating the results. To validate the proposed approach, experiments were conducted on several benchmark datasets, with sensitivity analysis on the number of cores and batches. Experimental results show that the computational cost can be reduced: computation with 6 cores is around 34 times faster than standalone mode. Thus, the proposed approach can be used for motif discovery effectively and efficiently.
Regression, Theil’s and MLP forecasting models of stock index (iaemedu)
This document compares different forecasting models for daily stock prices: linear regression, Theil's incomplete method, and multilayer perceptron (MLP). Principal component analysis was used to reduce the input variables to a single component. Linear regression and Theil's method had similar error rates that were lower than MLP based on MAE, MAPE, and SMAPE. The linear regression and Theil's method models had R-squared values near 1, indicating close fit to the data. Overall, the linear and Theil's models provided more accurate short-term forecasts of daily stock prices than the MLP based on error and fit metrics.
Regression, Theil’s and MLP forecasting models of stock index (iaemedu)
This document compares three models for forecasting daily stock prices: linear regression, Theil's incomplete method, and multilayer perceptron (MLP). Principal component analysis was used to reduce four input variables (high, low, open prices) into one principal component, which was then used to predict closing prices. Linear regression and Theil's method produced similar results, with slightly lower error than MLP based on MAE, MAPE, and SMAPE. The linear regression and Theil's models had a near perfect R-squared of 0.9977, while MLP was 0.9974. Overall, the simple linear and Theil's models performed best at forecasting closing prices based on this single stock index data.
Regression, Theil’s and MLP forecasting models of stock index (IAEME Publication)
This document compares different forecasting models for daily stock prices: linear regression, Theil's incomplete method, and multilayer perceptron (MLP). Principal component analysis was used to reduce 4 stock price variables to 1 principal component, which was then used to predict closing prices. Linear regression and Theil's method produced similar results, with MAE around 110 and R-squared over 0.99. MLP had slightly higher error at 118 MAE. Overall, linear regression and Theil's method provided the best forecasts of closing stock prices based on this analysis of models and error metrics.
Quality Prediction in Fingerprint Compression (IJTET Journal)
A new algorithm for fingerprint compression based on sparse representation is introduced. First, a dictionary is constructed for sparse combinations of a set of fingerprint patches. Dictionaries can be designed either by selecting one from a prespecified set or by adapting a dictionary to a set of training signals; in this paper, we use the K-SVD algorithm to construct the dictionary. After computation of the dictionary, the image is quantized, filtered, and encoded. The resultant image may be of three qualities: Good, Bad, or Ugly (the GBU problem). In this paper, we overcome the GBU problem by predicting the quality of the image.
The paper presents a nature-inspired algorithm that mimics the big bang theory of evolution. The algorithm is simple with regard to its number of parameters. Embedded systems are powered by batteries, and extending battery operating time by reducing power consumption is vital; embedded systems consume power while accessing memory during operation. An efficient method for power management is proposed in this work. The proposed method reduces the energy consumption in memories by 76% up to 98% compared to other methods reported in the literature.
The document presents a nature-inspired Big Bang-Big Crunch (BB-BC) algorithm for minimizing power consumption in embedded systems. The BB-BC algorithm mimics the big bang theory of evolution, with an initial random solution-generation phase followed by iterative convergence to a center of mass. The algorithm is shown to reduce energy consumption in system memory by 76-98% compared to Tabu Search, with better accuracy and convergence, providing an effective method for power management in memory-intensive embedded systems.
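A hedged one-dimensional sketch of the BB-BC loop: the Big Bang phase scatters candidates around the current center, the Big Crunch phase contracts them to a fitness-weighted center of mass, and the scatter radius shrinks with the iteration count. The toy objective and parameter choices are illustrative, not the paper's memory-energy model.

```python
import random

random.seed(1)

def fitness(x):
    return (x - 3.2) ** 2          # toy objective: minimum at x = 3.2

pop_size, iters, radius = 30, 60, 10.0
center = random.uniform(-10, 10)

for k in range(1, iters + 1):
    # Big Bang: scatter candidates around the current center.
    pop = [center + radius / k * random.gauss(0, 1) for _ in range(pop_size)]
    # Big Crunch: fitness-weighted center of mass (weights = 1/cost).
    weights = [1.0 / (fitness(x) + 1e-12) for x in pop]
    center = sum(w * x for w, x in zip(weights, pop)) / sum(weights)

print(f"estimated minimum: x = {center:.4f}")
```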
A Comparative study of K-SVD and WSQ Algorithms in Fingerprint Compression Te...IRJET Journal
This document compares the K-SVD and WSQ algorithms for fingerprint compression. It provides an overview of both algorithms, including how they work, their advantages, and disadvantages. It also presents results of compressing different sized fingerprint images using each algorithm, showing that K-SVD consistently achieved smaller file sizes than WSQ. The document concludes that K-SVD is superior to WSQ for compressing fingerprint images.
Hardware/software co-design for a parallel three-dimensional bresenham’s algo...IJECEIAES
Line plotting is one of the basic operations in scan conversion. Bresenham's line drawing algorithm is an efficient and highly popular algorithm for this purpose. The algorithm proceeds from one end-point of the line to the other, calculating one point at each step, so the total calculation time depends on the length of the line and thereby on the total number of points. In this paper, we developed an approach to speed up the Bresenham algorithm by partitioning each line into a number of segments, finding the points belonging to each segment, and drawing the segments simultaneously to form the main line. As a result, the more segments generated, the faster the points are calculated. By employing 32 cores in a Field Programmable Gate Array, a line of 992 points is formulated in only 0.31 μs. The complete system is implemented on a Zybo board containing the Xilinx Zynq-7000 chip (Z-7010).
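The sketch below illustrates the partitioning idea in software, assuming a line in the first octant (0 <= slope <= 1). The endpoint rounding at segment joins is an approximation of the paper's partitioning, and the 32 parallel FPGA cores are only emulated by computing each segment independently.

```python
# Sketch: Bresenham per segment; joint pixels may repeat across segments.
import numpy as np

def bresenham(x0, y0, x1, y1):
    """Integer Bresenham for one octant (0 <= slope <= 1)."""
    points, dx, dy = [], x1 - x0, y1 - y0
    err, y = 2 * dy - dx, y0
    for x in range(x0, x1 + 1):
        points.append((x, y))
        if err > 0:
            y += 1
            err -= 2 * dx
        err += 2 * dy
    return points

def segmented_line(x0, y0, x1, y1, n_segments=4):
    # Split the line into segments; each call could run on its own core.
    xs = np.linspace(x0, x1, n_segments + 1).round().astype(int)
    ys = np.linspace(y0, y1, n_segments + 1).round().astype(int)
    pts = []
    for i in range(n_segments):
        pts += bresenham(xs[i], ys[i], xs[i + 1], ys[i + 1])
    return pts

pts = segmented_line(0, 0, 992, 400, n_segments=32)
```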
The document discusses modifications to the PC algorithm for constraint-based causal structure learning that remove its order-dependence, which can lead to highly variable results in high-dimensional settings. The modified algorithms are order-independent while maintaining consistency under the same conditions, and simulations and analysis of yeast gene expression data show they improve performance over the original PC algorithm in high-dimensional settings.
Scalable Rough C-Means clustering using Firefly algorithm
Abhilash Namdev and B.K. Tripathy
Significance of Embedded Systems to IoT
P. R. S. M. Lakshmi, P. Lakshmi Narayanamma and K. Santhi Sri
Cognitive Abilities, Information Literacy Knowledge and Retrieval Skills of Undergraduates: A Comparison of Public and Private Universities in Nigeria
Janet O. Adekannbi and Testimony Morenike Oluwayinka
Risk Assessment in Constructing Horseshoe Vault Tunnels using Fuzzy Technique
Erfan Shafaghat and Mostafa Yousefi Rad
Evaluating the Adoption of Deductive Database Technology in Augmenting Criminal Intelligence in Zimbabwe: Case of Zimbabwe Republic Police
Mahlangu Gilbert, Furusa Samuel Simbarashe, Chikonye Musafare and Mugoniwa Beauty
Analysis of Petrol Pumps Reachability in Anand District of Gujarat
Nidhi Arora
This document discusses using clustering algorithms to construct ontologies from text documents. It begins with an introduction to semantic search, ontologies in the semantic web, and clustering. It then describes the ROCK clustering algorithm in detail. The main tasks to perform are preprocessing text documents, normalizing term weights, applying latent semantic indexing via singular value decomposition, and using the ROCK clustering algorithm. The goal is to group similar documents into clusters to help construct an ontology from the unstructured text data.
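A minimal sketch of the preprocessing and LSI steps is shown below using scikit-learn. ROCK itself is not available there, so the sketch stops at the normalized, reduced document vectors that a ROCK implementation would then cluster; the corpus and component count are assumptions.

```python
# Sketch: tf-idf term weighting + LSI via truncated SVD, ahead of clustering.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.preprocessing import Normalizer

docs = ["ontology construction from text", "semantic web search",
        "clustering text documents"]          # hypothetical corpus

tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
lsi = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)
X = Normalizer().fit_transform(lsi)           # normalized LSI vectors for ROCK
```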
This document describes a finite element analysis project involving the development of a finite element code. It summarizes the course content, describes the coding process, presents results from analyzing a plate with a circular hole using different mesh densities, and compares the accuracy of stress predictions across the meshes. Key results include the strategic mesh achieving similar stress prediction accuracy to the densest mesh while using only 1/4 as many elements. The project improved the author's coding and finite element analysis skills.
This document summarizes a research paper that proposes a dynamic approach to improving the k-means clustering algorithm. The proposed approach aims to address two weaknesses of the standard k-means algorithm: its requirement of prior knowledge of the number of clusters k, and its sensitivity to initialization. The approach determines initial cluster centroids by segmenting the data space and selecting high-frequency segments. It then uses the silhouette validity index to dynamically determine the optimal number of clusters k, rather than requiring the user to specify k. The approach is compared to the standard k-means algorithm and other modified approaches, and is shown to improve initial center selection and reduce computation time.
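As a rough illustration of silhouette-driven selection of k, the sketch below scans candidate values and keeps the one with the best silhouette score. It uses standard k-means++ seeding rather than the paper's segment-frequency initialization, and the data and k range are assumptions.

```python
# Sketch: choose k by maximizing the silhouette index.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

X = np.random.rand(200, 2)                    # hypothetical data

def pick_k(X, k_max=10):
    best_k, best_s = 2, -1.0
    for k in range(2, k_max + 1):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        s = silhouette_score(X, labels)
        if s > best_s:
            best_k, best_s = k, s
    return best_k

k = pick_k(X)
```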
Density Based Clustering Approach for Solving the Software Component Restruct...IRJET Journal
This document presents research on using the DBSCAN clustering algorithm to solve the problem of software component restructuring. It begins with an abstract that introduces DBSCAN and describes how it can group related software components. It then provides background on software component clustering and describes DBSCAN in more detail. The methodology section outlines the 4 phases of the proposed approach: data collection and processing, clustering with DBSCAN, visualization and analysis, and final restructuring. Experimental results show that DBSCAN produces more evenly distributed clusters compared to fuzzy clustering. The conclusion is that DBSCAN is a better technique for software restructuring as it can identify clusters of varying shapes and sizes without specifying the number of clusters in advance.
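A minimal DBSCAN sketch is given below; the component feature matrix, eps, and min_samples are assumptions for illustration, not values from the paper.

```python
# Sketch: DBSCAN over a hypothetical component-dependency feature matrix.
import numpy as np
from sklearn.cluster import DBSCAN

# Rows: software components; columns: assumed coupling/dependency features.
features = np.random.rand(50, 4)

labels = DBSCAN(eps=0.3, min_samples=3).fit_predict(features)
# -1 marks noise; other labels are candidate clusters for restructuring,
# discovered without fixing the number of clusters in advance.
```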
International Journal of Engineering Research and DevelopmentIJERD Editor
Electrical, Electronics and Computer Engineering,
Information Engineering and Technology,
Mechanical, Industrial and Manufacturing Engineering,
Automation and Mechatronics Engineering,
Material and Chemical Engineering,
Civil and Architecture Engineering,
Biotechnology and Bio Engineering,
Environmental Engineering,
Petroleum and Mining Engineering,
Marine and Agriculture engineering,
Aerospace Engineering.
MODELING AND ANALYSIS OF SURFACE ROUGHNESS AND WHITE LAYER THICKNESS IN WIRE-...IAEME Publication
White layer thickness (WLT) and surface roughness in wire electric discharge turning (WEDT) of tungsten carbide composite have been modeled through response surface methodology (RSM). A standard Taguchi design of experiments involving five input variables with three levels was employed to establish a mathematical model between input parameters and responses. Percentage of cobalt content, spindle speed, pulse on-time, wire feed, and pulse off-time were varied during the experimental tests based on the Taguchi orthogonal array L27 (3^13). Analysis of variance (ANOVA) revealed that the mathematical models obtained can adequately describe performance within the ranges of the factors considered. There was good agreement between the experimental and predicted values in this study.
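As a generic illustration of the RSM step, the sketch below fits a second-order (quadratic) response-surface model by least squares. The factor matrix and response are placeholders, not the paper's measured WEDT data.

```python
# Sketch: second-order response-surface fit (generic RSM, hypothetical data).
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

X = np.random.rand(27, 5)   # 27 runs (L27 array), 5 coded factors
y = np.random.rand(27)      # measured response, e.g., WLT

quad = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(quad.fit_transform(X), y)
```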
A STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURSIAEME Publication
The study explores the reasons for transgender persons to become entrepreneurs. In this study, transgender entrepreneurship was taken as the independent variable and the reasons for becoming an entrepreneur as the dependent variable. Data were collected through a structured questionnaire containing a five-point Likert scale. The study examined the data of 30 transgender entrepreneurs in the Salem Municipal Corporation of Tamil Nadu State, India. A simple random sampling technique was used. The Garrett ranking technique (percentile position, mean scores) was used to identify the top 13 stimulus factors for establishing a trans-entrepreneurial venture. The economic advancement of a nation depends on the outcome of resolute entrepreneurial activity, and the concept of entrepreneurship has now extended to the socially marginalized sections of the transgender community. Presently, transgender people have broken stereotypes and are making headlines for achievements in various fields of Indian society. The trans community is gradually being observed in a new light and has been striving for growth in entrepreneurship. The findings of the research revealed that optimistic changes are taking place in the societal outlook on transgender entrepreneurial ventures. The study also encourages other transgender persons to transform their traditional ways of living. The paper further highlights that legislators and supervisory bodies should endorse impartial norms and reforms in the Tamil Nadu Transgender Welfare Board Association.
BROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURSIAEME Publication
Gender difference has long been a debatable theme, whether caused by nature, evolution, or environment. The birth of a transgender child is seen as dreadful not only for the child but also for the parents. The pain of living in the wrong physique and being treated as a victimized second-class citizen is outrageous, rooted in vicious, baseless negative scruples. For long, social exclusion has perpetuated inequality and deprivation, with ingrained malign stigma making them besieged victims of crime or violence across their life spans. They are pushed into a murky way of life marked by eternal disgust, denied potency, and perennial fear. Although they are highly visible, very little is known about them. The public needs to comprehend the arrogance inflicted on these souls and help integrate them into the mainstream by offering equal opportunity, treating them with humanity, and respecting their dignity. Entrepreneurship in the current age is endorsing the gender-fairness movement. Unstable careers and economic inadequacy have inclined gender-variant people, namely transgenders, to become entrepreneurs. These budding entrepreneurs have brought economic transition through employment, freedom from stereotyped jobs, a raised standard of living, and a measure of financial empowerment. Despite all these inhibitions, they were able to find a platform for skill-set development that led them into the entrepreneurial domain. This paper epitomizes the skill sets of trans-entrepreneurs in the Thoothukudi Municipal Corporation of Tamil Nadu State and is a groundbreaking effort to explore the various skills involved and their impact on entrepreneurship.
DETERMINANTS AFFECTING THE USER'S INTENTION TO USE MOBILE BANKING APPLICATIONSIAEME Publication
The banking and financial services industries are experiencing increased technology penetration. Among them, the banking industry has made technological advancements to better serve the general populace. The economy focused on transforming the banking sector's system into a cashless, paperless, and faceless one. The researcher wants to evaluate the user's intention for utilising a mobile banking application. The study also examines the variables affecting the user's behaviour intention when selecting specific applications for financial transactions. The researcher employed a well-structured questionnaire and a descriptive study methodology to gather the respondents' primary data utilising the snowball sampling technique. The study includes variables like performance expectations, effort expectations, social impact, enabling circumstances, and perceived risk. Each of the aforementioned variables has a major impact on how users utilise mobile banking applications. The outcome will assist the service provider in comprehending the user's history with mobile banking applications.
ANALYSE THE USER PREDILECTION ON GPAY AND PHONEPE FOR DIGITAL TRANSACTIONSIAEME Publication
Technology upgrades in the banking sector have moved the economy toward online transactions using mobile applications. This system enables convenient connectivity between banks, merchants, and users. There are various applications used for online transactions, such as Google Pay, Paytm, Freecharge, MobiKwik, Oxigen, PhonePe, and so on, including mobile banking applications. The study aimed at evaluating the predilection of users in adopting digital transactions. The study is descriptive in nature, and the researcher used random sampling techniques to collect the data. The findings reveal that the quality of service rendered differs between GPay and PhonePe. The researcher suggests that the PhonePe application should focus on a more user-friendly interface, and GPay on motivating users to appreciate the request-money feature and the modes of payment in the application.
VOICE BASED ATM FOR VISUALLY IMPAIRED USING ARDUINOIAEME Publication
The prototype of a voice-based ATM for the visually impaired using Arduino is designed to help people who are blind. It uses RFID cards, which contain the user's fingerprint encrypted on them, and interacts with users through voice commands. The ATM operates when a sensor detects the presence of one person in the cabin. After scanning the RFID card, it asks the user to select a mode: normal or blind. The user can select the respective mode through voice input; if blind mode is selected, balance checks or cash withdrawal can be done through voice input. The normal-mode procedure is the same as in existing ATMs.
IMPACT OF EMOTIONAL INTELLIGENCE ON HUMAN RESOURCE MANAGEMENT PRACTICES AMONG...IAEME Publication
There is increasing acceptance of emotional intelligence as a major factor in personality assessment and effective human resource management. Emotional intelligence, as the ability to build capacity, empathize, co-operate, motivate, and develop others, cannot be divorced from either effective performance or human resource management systems. The human person is crucial in defining organizational leadership and fortunes in terms of challenges and opportunities and in navigating both multinational and bilateral relationships. The growing complexity of the business world requires a great deal of self-confidence, integrity, communication, and conflict and diversity management to keep the global enterprise on the path of productivity and sustainability. Using an exploratory research design and 255 participants, the results of this study indicate a strong positive correlation between emotional intelligence and effective human resource management. The paper offers suggestions on further studies between emotional intelligence and human capital development and recommends conflict management as an integral part of effective human resource management.
VISUALISING AGING PARENTS & THEIR CLOSE CARERS LIFE JOURNEY IN AGING ECONOMYIAEME Publication
Our life journey, in general, is closely defined by the way we understand the meaning of why we coexist and deal with its challenges. As we develop the "inspiration economy", we could say that nearly all of the challenges we have faced are opportunities that help us to discover the rest of our journey. In this note paper, we explore how being faced with the opportunity of being a close carer for an aging parent with dementia brought intangible discoveries that changed our insight of the meaning of the rest of our life journey.
A STUDY ON THE IMPACT OF ORGANIZATIONAL CULTURE ON THE EFFECTIVENESS OF PERFO...IAEME Publication
The main objective of this study is to analyze the impact of aspects of Organizational Culture on the Effectiveness of the Performance Management System (PMS) in the Health Care Organization at Thanjavur. Organizational Culture and PMS play a crucial role in present-day organizations in achieving their objectives. PMS needs employees’ cooperation to achieve its intended objectives. Employees' cooperation depends upon the organization’s culture. The present study uses exploratory research to examine the relationship between the Organization's culture and the Effectiveness of the Performance Management System. The study uses a Structured Questionnaire to collect the primary data. For this study, Thirty-six non-clinical employees were selected from twelve randomly selected Health Care organizations at Thanjavur. Thirty-two fully completed questionnaires were received.
Living in the 21st century reminds all of us of the necessity of the police and its administration. The more we enter into modern society and culture, the more we require the services of the so-called 'khaki-worthy' men, i.e., the police personnel. Whether we speak of the Indian police or another nation's police, they have the same recognition everywhere. But, as already mentioned, their services and requirements look different after incidents like that of 26th November 2008, when they sacrificed their own lives without any hesitation and without regard for their family members and wards. In other words, they are heroes and mentors who can guide us out of the darkness of fear, militancy, corruption, and the other dark sides of life. Now the question arises: if Gandhi were alive today, what would be his reaction to the police and its functioning? Would he have something different in mind now from what he had before partition, or would he start some Satyagraha aimed at improving the functioning of the police administration? These questions, or rather nightmares, can come to anyone's mind when there is so much confusion prevailing in our minds, so much corruption in society, and when the police's working is also in question because of one case or another throughout India. It is a matter of great concern that we have to think over our administration and our practical approach, because police personnel are also like us; they are part and parcel of our society and one among us, so why are we all pointing at them?
A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED...IAEME Publication
The goal of this study was to see how talent management affected employee retention in selected IT organizations in Chennai. The fundamental issue was the difficulty of attracting, hiring, and retaining talented personnel who perform well, and the gap between supply and demand in acquiring and retaining talent within firms. The study's main goals were to determine the impact of talent management on employee retention in IT companies in Chennai, investigate talent management strategies that IT companies could use to improve talent acquisition, performance management, and career planning, and formulate retention strategies that IT firms could use. The respondents were given a structured close-ended questionnaire with a 5-point Likert scale as part of the study's quantitative research design. The target population consisted of 289 IT professionals. The questionnaires were distributed and collected by the researcher directly. The Statistical Package for the Social Sciences (SPSS) was used to collect and analyse the questionnaire responses. Hypotheses formulated for the various areas of the study were tested using a variety of statistical tests. The key findings suggested that talent management had an impact on employee retention, and that there is a clear link between the implementation of talent management and retention measures. Management should provide enough training and development for employees, clarify job responsibilities, provide adequate remuneration packages, and recognise employees for exceptional performance.
ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE...IAEME Publication
Globally, millions of dollars are spent by organizations on employing skilled Information Technology (IT) professionals. It is costly to replace IT professionals possessing the technical skills and competencies that interconnect business processes. Organizations' employment tactics have been forced to change by globalization and technological innovation, as firms consistently slim down to remain lean, outsource to concentrate on core competencies, and restructure or reallocate personnel to gain efficiency. As other jobs, organizations, or professions become comparatively more attractive in a shifting employment landscape, these changes trigger both involuntary and voluntary turnover. The employee view of jobs has also been affected by the COVID-19 pandemic and the employee-driven labour market, so effective strategies are necessary to tackle the withdrawal rate of employees. By associating Emotional Intelligence (EI) with Talent Management (TM) in the IT industry, this study analyzed the rise in attrition rate. Responses were collected from 303 of the 350 participants to whom questionnaires were distributed. The data were gathered from employees of IT organizations located in Bangalore (India), using a simple random sampling methodology. Hypotheses were generated and tested; the effects of EI and TM, along with a regression analysis between TM and EI, were analyzed. The outcomes indicated that employee and Organizational Performance (OP) were elevated by effective EI and TM.
INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD...IAEME Publication
By implementing a talent management strategy, organizations are able to retain their skilled professionals while also improving their overall performance. Talent management is the process of appropriately utilizing the right individuals, preparing them for future top positions, reviewing and managing their performance, and keeping them from leaving the organization. It is employee performance that determines the success of every organization. A firm quickly obtains an upper hand over its rivals if its employees have particular skills that cannot be duplicated by the competitors. Thus, firms are centred on creating successful talent management practices and processes to deal with unique human resources. Firms also endeavour to keep their top and key staff since, if they leave, the whole store of knowledge leaves the firm's hands. The study's objective was to determine the impact of talent management on organizational performance among selected IT organizations in Chennai. The study finds that talent management has a limited effect on performance; if talent is properly managed and practices are implemented well, organizations can make the most of their retained assets to support development and productivity, both monetarily and non-monetarily.
A STUDY OF VARIOUS TYPES OF LOANS OF SELECTED PUBLIC AND PRIVATE SECTOR BANKS...IAEME Publication
The Banking Regulation Act of India, 1949 defines banking as "acceptance of deposits for the purpose of lending or investment from the public, repayment on demand or otherwise and withdrawable through cheques, drafts order or otherwise". The major participants of the Indian financial system are commercial banks; the financial institutions encompassing term-lending institutions, investment institutions, specialized financial institutions, and state-level development banks; non-banking financial companies (NBFCs); and other market intermediaries such as stock brokers and money lenders, certain variants of NBFCs and money lenders being among the oldest market participants. The asset quality of banks is one of the most important indicators of their financial health. The Indian banking sector has been facing severe problems of increasing Non-Performing Assets (NPAs). NPA growth directly and indirectly affects the quality of assets and the profitability of banks. It also reflects the efficiency of banks' credit risk management and recovery effectiveness. NPAs do not generate any income, while the bank is required to make provisions for such assets, which is why they are a double-edged weapon. This paper examines the quality of bank loans of different types, namely housing, agriculture, and MSME loans, of selected public and private sector banks in the state of Haryana. The study highlights problems associated with the role of commercial banks in financing Small and Medium Scale Enterprises (SMEs). The overall objective of the research was to assess the effect of the financing provisions for setting up and operating MSMEs and to generate recommendations for more robust financing mechanisms for successful operation of MSMEs, in turn understanding the impact of MSME loans on financial institutions due to NPAs. Much research has been conducted on Non-Performing Asset (NPA) management, concerning particular banks, comparative studies of public and private banks, etc. In this paper, the researcher considers aggregate data of selected public and private sector banks and compares the NPAs of housing, agriculture, and MSME loans in Haryana. The tools used in the study are averages, the ANOVA test, and variance. The findings reveal that NPAs are a common problem for both public and private sector banks and are associated with all types of loans, whether housing loans, agriculture loans, or loans to SMEs. NPAs of both public and private sector banks show an increasing trend. In 2010-11, the GNPA of public and private sector banks was at the same level of 2%, but after 2010-11 it increased manyfold, and at present GNPA in some banks is more than 15%. This shows the dark side of the Indian banking sector.
EXPERIMENTAL STUDY OF MECHANICAL AND TRIBOLOGICAL RELATION OF NYLON/BaSO4 POL...IAEME Publication
The experiments conducted in this study found that BaSO4 changed Nylon 6's mechanical properties. Nylon 6 composites were prepared with BaSO4 at varying weight ratios. The researchers investigated the hardness and wear behavior of Nylon-6/BaSO4 composites. Experiments were done based on a Taguchi L9 design. The hardness number of the Nylon-6/BaSO4 composites was measured using a Rockwell hardness testing apparatus. The wear behavior of Nylon/BaSO4 was measured with a pin-on-disc wear monitor by varying reinforcement, sliding speed, and sliding distance, and the microstructure of the crack surfaces was observed by SEM. The study finds that increasing BaSO4 content up to 16% contributes significantly to ultimate strength, and that sliding speed contributes 72.45% to the wear rate.
ROLE OF SOCIAL ENTREPRENEURSHIP IN RURAL DEVELOPMENT OF INDIA - PROBLEMS AND ...IAEME Publication
The majority of the population in India lives in villages, and the village is the backbone of the country. Village or rural industries play an important role in the national economy, particularly in rural development. Developing the rural economy is one of the key indicators of a country's success: whether it be the need to look after the welfare of farmers or to invest in rural infrastructure, governments have to ensure that rural development is not compromised. The economic development of our country largely depends on the progress of rural areas and the standard of living of the rural masses. Rural entrepreneurship is based on stimulating local entrepreneurial talent and the subsequent growth of indigenous enterprises. It recognizes opportunity in rural areas and accelerates a unique blend of resources either inside or outside of agriculture. Rural entrepreneurship brings economic value to the rural sector by creating new methods of production, new markets, and new products, and by generating employment opportunities, thereby ensuring continuous rural development. Social entrepreneurship has the direct and primary objective of serving society along with earning profits. Social entrepreneurship differs from economic entrepreneurship in that its basic objective is not to earn profits but to provide innovative solutions to societal needs that are not addressed by the majority of entrepreneurs, for whom profit-making is the sole objective. Social entrepreneurs therefore have huge growth potential, particularly in developing countries like India, where there are huge societal disparities in financial position. Still, 22 percent of the Indian population is below the poverty line, and there is disparity between rural and urban populations in terms of families living under the BPL: 25.7 percent of the rural population and 13.7 percent of the urban population are under the BPL, which clearly shows the concentration of poor people in rural areas. The need to develop social entrepreneurship in agriculture is dictated by a large number of social problems, including low living standards, unemployment, and social tension; these factors led to the emergence of the practice of social entrepreneurship. The research problem lies in disclosing the importance of the role of social entrepreneurship in the rural development of India. The paper examines the tendencies of social entrepreneurship in India and presents successful examples of such businesses, providing recommendations on how to improve the situation in rural areas in terms of social entrepreneurship development. The Indian government has made some steps towards the development of social enterprises, social entrepreneurship, and social innovation, but a lot remains to be improved.
OPTIMAL RECONFIGURATION OF POWER DISTRIBUTION RADIAL NETWORK USING HYBRID MET...IAEME Publication
The distribution system is a critical link between the electric power distributor and the consumers. The radial distribution network is the type most commonly used by electric utilities; however, this type of network suffers technical issues such as large power losses, which affect the quality of supply. Nowadays, the introduction of Distributed Generation (DG) units helps improve and support the voltage profile of the network as well as the performance of system components through power-loss mitigation. In this study, network reconfiguration was done using a hybrid of two meta-heuristic algorithms, Particle Swarm Optimization and the Gravitational Search Algorithm (PSO-GSA), to enhance power quality and voltage profile when applied simultaneously with DG units. The backward/forward sweep method was used in the load flow analysis and simulated using MATLAB. Five cases were considered in the reconfiguration, based on the contribution of DG units. The proposed method was tested using the IEEE 33-bus system. Based on the results, the voltage profile improved from 0.9038 p.u. to 0.9594 p.u., and the integration of DG in the network reduced power losses from 210.98 kW to 69.3963 kW. Simulated results are drawn to show the performance of each case.
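For illustration, a plain PSO loop on a toy loss is sketched below. The GSA component of the hybrid, the backward/forward sweep load flow, and the IEEE 33-bus model are all omitted, and the coefficients are conventional assumptions.

```python
# Sketch: plain Particle Swarm Optimization on a toy loss function.
import numpy as np

def pso(loss, dim=2, pop=30, iters=100, lo=-1.0, hi=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (pop, dim))
    v = np.zeros_like(x)
    pbest, pcost = x.copy(), np.array([loss(p) for p in x])
    g = pbest[pcost.argmin()].copy()           # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, pop, dim))
        # Inertia + cognitive + social terms (conventional coefficients).
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        cost = np.array([loss(p) for p in x])
        better = cost < pcost
        pbest[better], pcost[better] = x[better], cost[better]
        g = pbest[pcost.argmin()].copy()
    return g, loss(g)

best, val = pso(lambda p: np.sum(p**2))
```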
APPLICATION OF FRUGAL APPROACH FOR PRODUCTIVITY IMPROVEMENT - A CASE STUDY OF...IAEME Publication
Manufacturing industries have witnessed an outburst in productivity, and for productivity improvement they are taking various initiatives using lean tools and techniques. In different manufacturing industries, the frugal approach is applied in product design and services as a tool for improvement: it has helped prove that less is more, and it appears to contribute indirectly to improving productivity. Hence, there is a need to understand the status of frugal-approach application in manufacturing industries. All manufacturing industries are trying hard and putting in continuous effort for competitive existence, coming up with effective and efficient solutions in manufacturing processes and operations. To overcome current challenges, manufacturing industries have started using the frugal approach in product design and services. For this study, the methodology adopted used both primary and secondary sources of data: for the primary source, interview and observation techniques were used, and for the secondary source, a review was done based on available literature on websites, in printed magazines, manuals, etc. An attempt has been made to understand the application of the frugal approach through the study of a manufacturing-industry project; the industry selected is Mahindra and Mahindra Ltd. This paper helps researchers find the connections between the two concepts of productivity improvement and the frugal approach, understand the significance of the frugal approach for productivity improvement, and understand the current scenario of the frugal approach in manufacturing industry. In manufacturing industries, various processes are involved in delivering the final product, and productivity plays a very critical role in converting input into output. Hence this study helps establish the status of the frugal approach in productivity-improvement programmes: the notion of frugality can be viewed as an approach toward productivity improvement in manufacturing industries.
A MULTIPLE – CHANNEL QUEUING MODELS ON FUZZY ENVIRONMENTIAEME Publication
In this paper, we investigate a fuzzy-environment-based multiple-channel queuing model (M/M/C) ( /FCFS) and study its performance under realistic conditions. A nonagonal fuzzy number is applied to analyse the relevant performance measures of the multiple-channel queuing model (M/M/C) ( /FCFS). Based on the sub-interval average ranking method for nonagonal fuzzy numbers, we convert fuzzy numbers to crisp ones. Numerical results reveal the efficiency of this method and show, intuitively, that the fuzzy environment adapts well to multiple-channel queuing models (M/M/C) ( /FCFS).
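For reference, the standard crisp M/M/c performance measures that a fuzzy analysis of this kind typically extends (textbook results, not reproduced from the paper), with arrival rate λ, service rate μ, and c servers:

```latex
\rho = \frac{\lambda}{c\mu}, \qquad
P_0 = \left[\sum_{n=0}^{c-1}\frac{(\lambda/\mu)^n}{n!}
      + \frac{(\lambda/\mu)^c}{c!\,(1-\rho)}\right]^{-1}, \qquad
L_q = \frac{(\lambda/\mu)^c\,\rho}{c!\,(1-\rho)^2}\,P_0, \qquad
W_q = \frac{L_q}{\lambda}
```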