Abstract: The main objective of this dissertation is to optimize the roughness index for a roller burnishing process. The relevant factors have been identified from a literature review. These barriers may be of market, cultural, human resource management, financial, economic, attitudinal, environmental, geographical, or technological type. Technology transfer barriers threaten the movement of physical structures, knowledge, skills, organizational values, and capital from developed to developing countries. A clear understanding of these barriers may help practitioners find ways to deal with them, which may further facilitate successful implementation of technology transfer. Technology transfer may be said to be successful if the transferor (seller) and the transferee (buyer) can effectively utilize the technology for business gain. The transfer involves costs and expenditures that should be agreed upon by the transferee and transferor. The process is affected by various factors that hinder technology transfer; these factors are termed barriers. In the present work, Interpretive Structural Modeling (ISM) is used for the analysis and comparison of the various factors important for technology transfer. The important parameters are identified, and a self-interaction matrix is proposed with the help of ISM, which evaluates the inhibiting power of these parameters. This index can be used to compare the different factors affecting roller burnishing processes.
The document discusses seven new quality tools - affinity diagrams, interrelationship diagrams, tree diagrams, matrix diagrams, matrix data analysis, process decision program charts, and arrow diagrams. It provides descriptions of each tool, including how they are used and basic steps to apply them. Examples are given for affinity diagrams, interrelationship diagrams, tree diagrams, matrix diagrams, and process decision program charts to illustrate how each tool can be implemented.
Interior Dual Optimization Software Engineering with Applications in BCS Elec... (BRNSS Publication Hub)
This document summarizes an article that describes software engineering for interior dual optimization methods and their applications in electronics superconductivity. The key points are:
1) The software implements interior and dual interior optimization algorithms for solving nonlinear systems of equations, with a focus on programming methods for computational 3D visualization.
2) Applications include optimizing the BCS equations for critical-temperature prediction in the type I superconductors platinum and tin, based on atomic mass.
3) Computational results using the software show acceptable errors in optimizing the BCS equations for platinum and tin. Graphical optimizations aid in visualizing optimal parameter values.
Some Reviews on Circularity Evaluation using Non-Linear Optimization Techniques (IRJET Journal)
This document provides a review of various nonlinear optimization techniques that can be used to evaluate circularity based on measured coordinate data. It discusses techniques such as minimum zone circle, least squares circle, maximum inscribed circle, and minimum circumscribed circle for data fitting to evaluate circularity error. It also reviews optimization algorithms like genetic algorithms and particle swarm optimization that can be applied to determine the optimal circle fit to minimize circularity error. The document concludes that genetic algorithms and hybrid optimization techniques that combine multiple algorithms can provide efficient evaluations of circularity compared to other discussed methods.
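As a concrete instance of the least-squares circle approach reviewed above, the sketch below fits a circle to measured coordinate points using the Kåsa linearization. This is a minimal illustration with synthetic data, not code from any of the reviewed papers.

```python
import numpy as np

def fit_circle_least_squares(x, y):
    """Kasa least-squares circle fit: from x^2 + y^2 = 2ax + 2by + c,
    solve a linear system for center (a, b); then r = sqrt(c + a^2 + b^2)."""
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + a**2 + b**2)
    return a, b, r

# Synthetic "measured" points on a circle of radius 5 centered at (1, 2)
theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
x = 1 + 5 * np.cos(theta)
y = 2 + 5 * np.sin(theta)
a, b, r = fit_circle_least_squares(x, y)
print(round(a, 3), round(b, 3), round(r, 3))  # 1.0 2.0 5.0
```

The circularity error would then be the spread of radial distances, max minus min, of the points about the fitted center; minimum-zone and genetic-algorithm methods refine this fit further.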
A short introductory presentation on the basics of Finite Element Analysis. It mainly presents applications of FEA in the real world.
IRJET- An Efficient Reverse Converter for the Three Non-Coprime Moduli Set {4... (IRJET Journal)
This paper proposes a new and efficient reverse converter for converting residue numbers to decimal numbers for the three moduli set {6, 10, 15} which shares the common factor of 5. The proposed converter replaces larger multipliers used in previous converters with smaller multipliers and adders, reducing the hardware requirements. The hardware implementation of the proposed converter is presented and compared to other state-of-the-art converters, showing it performs better with fewer adders and multipliers. The proposed converter efficiently implements reverse conversion for the non-coprime three moduli set while requiring less hardware than previous approaches.
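The reverse conversion the paper implements in hardware can be modeled in software by a brute-force search over the dynamic range, which for a non-coprime set is the lcm of the moduli (here lcm(6, 10, 15) = 30). This is only a reference model for checking results, not the proposed adder/multiplier architecture.

```python
from math import lcm
from functools import reduce

def reverse_convert(residues, moduli):
    """Reference (software) reverse conversion for a possibly
    non-coprime moduli set: search the dynamic range [0, lcm)."""
    M = reduce(lcm, moduli)
    for x in range(M):
        if all(x % m == r for r, m in zip(residues, moduli)):
            return x
    raise ValueError("inconsistent residues")

moduli = (6, 10, 15)                      # dynamic range: lcm = 30
residues = tuple(23 % m for m in moduli)  # (5, 3, 8)
print(reverse_convert(residues, moduli))  # 23
```

Because the moduli share factors, not every residue tuple is consistent; the hardware converter exploits the shared factor of 5 to avoid this search entirely.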
Optimum penetration depth of cantilever sheet pile walls in dry granular soil (Ahmed Ebid)
Cantilevered sheet pile walls are commonly used in shoring systems for deep excavations down to about 5.00 m. The most common design procedure for this type of flexible retaining structure is to determine the required penetration depth for stability and then increase the calculated penetration depth by 20% to 40% to achieve a factor of safety of about 1.5 to 2.0. This procedure has two disadvantages: first, it does not give accurate values for the penetration depth or the corresponding factor of safety; second, it ignores the effect of uncertainty in the geotechnical parameters used. The first aim of this study is to overcome those two disadvantages by introducing an alternative formula to determine the optimum penetration depth of cantilever sheet pile walls in dry granular soil based on the reliability analysis concept, while the second aim is to study the impact of using the optimum depth on the cost of the shoring system. The study results confirm the validity of the provision of increasing the calculated penetration depth by 20% to 40%, introduce a formula to calculate the penetration depth required to achieve a probability of failure of 0.1%, and show that using this optimum depth can reduce the direct cost of the shoring system by 5% to 10%, depending on the internal friction angle of the soil.
This short note describes a relatively simple methodology, procedure or approach to increase the performance of already installed industrial models used for optimization, control, simulation and/or monitoring purposes. The method is called Excess or X-Model Regression (XMR), where the concept of "excess modeling" or an X-model is taken from the field of thermodynamics to describe the departure or residual behaviour of real (non-ideal) gases and liquids from their ideal state (Kyle, 1999; Poling et al., 2001; Smith et al., 2001). It has also been applied to model the non-ideal or nonlinear behaviour of blending motor gasoline octanes, with its synergistic and antagonistic interaction effects (Muller, 1992).
The fundamental idea of XMR is to calibrate, train, fit or estimate, using actual data and multiple linear regression (MLR) or ordinary least squares (OLS), the deviations of the measured responses from the existing model responses. The existing model may be a glass, grey or black-box model (known or unknown, linear or nonlinear, implicit/open or explicit/closed) depending on the use of the model. That is, for optimization and control the model structure and parameters are available, since derivative information is required; for simulation and monitoring, the model may only be observed through its dependent output variables given the necessary independent input variables.
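A minimal sketch of the XMR idea, assuming a hypothetical installed model and synthetic measurements: the excess (measured minus model) is fitted with OLS and added back as a correction.

```python
import numpy as np

# Hypothetical installed model (treated as a black box): it misses a
# linear term and an offset that the true process actually has.
def installed_model(x):
    return 0.5 * x**2

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 100)
y_measured = 0.5 * x**2 + 1.5 * x + 2.0 + rng.normal(0, 0.1, x.size)

# XMR step: regress the excess (measured minus model response) on the
# inputs with ordinary least squares.
excess = y_measured - installed_model(x)
A = np.column_stack([x, np.ones_like(x)])
coef, *_ = np.linalg.lstsq(A, excess, rcond=None)

# Corrected prediction = installed model + fitted excess model
y_corrected = installed_model(x) + A @ coef
print(np.round(coef, 2))  # close to [1.5, 2.0], the missing terms
```

The installed model is never modified; the excess model is a separate, cheaply re-calibrated add-on, which is what makes the approach attractive for models already in service.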
The document provides information about a course on Total Quality Management taught by M. Senthil Kumar. It includes details of topics covered each week such as the seven traditional tools of quality including check sheets, flow charts, histograms etc. It also outlines the new management tools discussed like affinity diagrams, matrix diagrams and prioritization matrices. Methods of applying quality tools in industries like Toyota, Honda and IBM are presented. The document concludes with sections on benchmarking, failure mode and effect analysis.
Imprecise computing is an attractive model for digital processing at nanometric scales, and inexact computing is particularly interesting for computer arithmetic designs. This work deals with the design and analysis of two new inexact 4-2 compressors for use in a multiplier. These designs rely on different features of compression, such that imprecision in computation, measured by the error rate and the so-called normalized error distance, can be traded against circuit-based figures of merit: number of transistors, delay, and power consumption. The proposed approximate compressors are analyzed in a Dadda multiplier. Extensive simulation results are provided, and an application of the approximate multipliers to image processing is presented. The results show that the proposed designs reduce power dissipation, delay, and transistor count.
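The error-rate metric mentioned above can be illustrated by exhaustively simulating an inexact 4-2 compressor over all 16 input patterns. The design below is a generic illustrative one, not either of the two compressors proposed in the work.

```python
from itertools import product

def approx_compressor(x1, x2, x3, x4):
    """An illustrative (hypothetical) inexact 4-2 compressor: XOR/OR
    sum logic, AND/OR carry logic, and the cout output dropped."""
    s = (x1 ^ x2) | (x3 ^ x4)
    carry = (x1 & x2) | (x3 & x4)
    return s + 2 * carry  # value encoded by the outputs

# Error rate: fraction of the 16 input patterns whose encoded value
# differs from the true bit count x1 + x2 + x3 + x4.
cases = list(product((0, 1), repeat=4))
wrong = sum(approx_compressor(*b) != sum(b) for b in cases)
print(wrong, "/", len(cases))  # 5 / 16
```

Normalized error distance would additionally weight how far each wrong output is from the exact value, which matters when the compressor sits in a high-order column of a Dadda tree.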
Finite Element Analysis (FEA) is a numerical method for solving complex engineering problems. The document discusses conducting FEA on a fixed-free cantilever beam to study the effect of mesh density on solution accuracy. Analytical solutions are derived and used to validate FEA results. A beam model is created in ABAQUS with varying element sizes. As element count increases, FEA results converge towards analytical solutions, though with increased computation time. An element count of 4125 provided an optimal balance between accuracy and cost.
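The mesh-refinement study described above can be sketched in a 1D setting with 2-node Euler-Bernoulli beam elements (material and load values below are assumed, not taken from the document). Note that for an end point load these elements reproduce the analytical tip deflection PL³/3EI exactly at the nodes, so this sketch shows the assembly and validation workflow rather than a convergence curve.

```python
import numpy as np

def cantilever_tip_deflection(n_elem, L=1.0, E=210e9, I=1e-6, P=1000.0):
    """Tip deflection of a fixed-free cantilever under an end load P,
    meshed with n_elem 2-node Euler-Bernoulli beam elements."""
    le = L / n_elem
    # Standard beam element stiffness, DOFs (w1, theta1, w2, theta2)
    k = (E * I / le**3) * np.array([
        [ 12,    6*le,   -12,    6*le   ],
        [ 6*le,  4*le**2, -6*le, 2*le**2],
        [-12,   -6*le,    12,   -6*le   ],
        [ 6*le,  2*le**2, -6*le, 4*le**2],
    ])
    ndof = 2 * (n_elem + 1)            # (w, theta) per node
    K = np.zeros((ndof, ndof))
    for e in range(n_elem):            # assemble element matrices
        K[2*e:2*e+4, 2*e:2*e+4] += k
    f = np.zeros(ndof)
    f[-2] = P                          # point load at the free end
    w = np.linalg.solve(K[2:, 2:], f[2:])  # clamp node 0: w = theta = 0
    return w[-2]

exact = 1000.0 / (3 * 210e9 * 1e-6)    # P*L^3 / (3*E*I)
for n in (1, 2, 4, 8):
    print(n, cantilever_tip_deflection(n))
print("analytical:", exact)
```

In 2D/3D continuum models like the ABAQUS beam in the document, the discretization error is nonzero and shrinks with element count, which is what drives the accuracy-versus-cost trade-off reported there.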
Analysis of GF (2m) Multiplication Algorithm: Classic Method v/s Karatsuba-Of... (rahulmonikasharma)
In recent years, finite field multiplication in GF(2m) has been widely used in various applications such as error-correcting codes and cryptography. One of the motivations for a fast and area-efficient hardware solution for implementing binary multiplication in the finite field GF(2m) comes from the fact that it is among the most time-consuming and frequently called operations in cryptography and other applications, so the optimization of its hardware design is critical for the overall performance of a system. Since a finite field multiplier is a crucial unit for the overall performance of cryptographic systems, novel multiplier architectures whose performance can be chosen freely are necessary. In this paper, two Galois field multiplication algorithms (used in cryptography applications) are considered to analyze their performance with respect to area, power, and delay, and the consequent Area×Time (AT) and Power×Delay characteristics. The objective of the analysis is to find the most efficient GF(2m) multiplier algorithm among those considered.
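The two algorithm families compared in the paper can be modeled in software as carry-less (GF(2) polynomial) multiplication; a reduction modulo an irreducible polynomial would follow to map the product into GF(2^m). This is a behavioral sketch, not the hardware architectures under analysis.

```python
def clmul_classic(a, b):
    """Schoolbook carry-less multiply: shift-and-XOR over GF(2)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def clmul_karatsuba(a, b, bits=8):
    """Karatsuba carry-less multiply: three half-width multiplies
    replace four (addition in GF(2) is XOR, so there are no carries)."""
    if bits <= 2:
        return clmul_classic(a, b)
    h = bits // 2
    mask = (1 << h) - 1
    a0, a1 = a & mask, a >> h
    b0, b1 = b & mask, b >> h
    z0 = clmul_karatsuba(a0, b0, h)
    z2 = clmul_karatsuba(a1, b1, h)
    z1 = clmul_karatsuba(a0 ^ a1, b0 ^ b1, h) ^ z0 ^ z2
    return z0 ^ (z1 << h) ^ (z2 << (2 * h))

# (x^3 + x + 1)(x^3 + x^2 + 1) = x^6 + x^5 + x^4 + x^3 + x^2 + x + 1
print(bin(clmul_classic(0b1011, 0b1101)))    # 0b1111111
print(bin(clmul_karatsuba(0b1011, 0b1101)))  # 0b1111111
```

The hardware trade-off the paper measures mirrors this recursion: Karatsuba saves multiplier area at the cost of extra XOR (adder) stages and deeper logic, which is why AT and Power×Delay products are the right comparison metrics.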
This document discusses the assignment problem and provides an overview of the Hungarian algorithm for solving assignment problems. It begins by defining the assignment problem and describing it as a special case of the transportation problem. It then provides details on the Hungarian algorithm, including the key theorems and steps involved. An example problem of assigning salespeople to cities is presented and solved using the Hungarian algorithm to find the optimal assignment with minimum total cost. The document concludes that the Hungarian algorithm provides an efficient solution for minimizing assignment problems.
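A minimal worked instance of the assignment problem, using SciPy's `linear_sum_assignment` (a Hungarian-style solver) on an assumed 4×4 cost matrix; the salespeople/cities framing and the cost values are illustrative, not the document's example.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Illustrative cost matrix: cost[i][j] = cost of assigning
# salesperson i to city j (values assumed for the example).
cost = np.array([
    [9, 2, 7, 8],
    [6, 4, 3, 7],
    [5, 8, 1, 8],
    [7, 6, 9, 4],
])
rows, cols = linear_sum_assignment(cost)     # Hungarian-style solver
print(list(cols), int(cost[rows, cols].sum()))  # [1, 0, 2, 3] total 13
```

Each salesperson gets exactly one city and the total cost 13 is provably minimal for this matrix; the solver runs in polynomial time, unlike brute-force enumeration of all n! assignments.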
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
The document provides an introduction to the finite element method (FEM). It explains that FEM is a numerical method used to approximate solutions to partial differential equations. It works by dividing a complex problem into smaller, simpler parts called finite elements. This allows for the problem to be solved computationally. The document outlines the basic steps of FEM, including preprocessing (modeling, meshing), solving, and postprocessing (analyzing results). It also discusses applications, history, software, element types, meshing, convergence, and compatibility conditions of FEM.
Index numbers are used to measure changes in economic variables over time or space. Common index numbers include the Laspeyres, Paasche, Fisher, and Törnqvist indexes. These indexes can be used to create price indexes that measure inflation, as well as quantity indexes that measure changes in output and inputs. Direct and indirect approaches can be used to calculate quantity indexes. Properties like transitivity, self-duality, and mean value are important in selecting an appropriate index number formula. Chain indexes that compare consecutive periods are commonly used for productivity measurement.
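The three classical price indexes mentioned above can be computed directly from price and quantity vectors; the two-good data below are hypothetical.

```python
import numpy as np

def laspeyres(p0, p1, q0, q1):
    """Laspeyres price index: base-period quantities as weights."""
    return (p1 @ q0) / (p0 @ q0)

def paasche(p0, p1, q0, q1):
    """Paasche price index: current-period quantities as weights."""
    return (p1 @ q1) / (p0 @ q1)

def fisher(p0, p1, q0, q1):
    """Fisher ideal index: geometric mean of Laspeyres and Paasche."""
    return np.sqrt(laspeyres(p0, p1, q0, q1) * paasche(p0, p1, q0, q1))

# Hypothetical prices and quantities for two goods in periods 0 and 1
p0, p1 = np.array([10.0, 4.0]), np.array([11.0, 5.0])
q0, q1 = np.array([100.0, 50.0]), np.array([95.0, 60.0])
print(round(laspeyres(p0, p1, q0, q1), 4))  # 1.125
print(round(paasche(p0, p1, q0, q1), 4))
print(round(fisher(p0, p1, q0, q1), 4))
```

The Fisher index satisfies self-duality (its price and quantity forms multiply to the value ratio), which is one reason it is preferred for the productivity measurement applications the document discusses; chaining applies these formulas to consecutive periods and multiplies the links.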
This document proposes an efficient approach for processing subgraph matching queries with set similarity (SMS2 queries) in large graph databases. The approach uses a "filter-and-refine" framework with offline indexing and online query processing. In the filtering phase, it builds an inverted lattice index of frequent element set patterns and encodes vertices as signatures. It then applies set similarity and structure-based pruning techniques. In the refinement phase, it uses a dominating set-based subgraph matching algorithm to find matching subgraphs guided by a dominating set selection method. Experimental results show the proposed approach outperforms state-of-the-art methods by an order of magnitude.
This document discusses stochastic frontier analysis and techniques for estimating production frontiers and cost frontiers using cross-sectional and panel data. It covers distance functions, cost frontiers, decomposing cost efficiency into technical and allocative components, measuring scale efficiency, and accounting for the production environment in panel data models.
This document provides an overview and outline of topics to be covered in an economics course on production and productivity measurement. It includes informal definitions of key concepts such as production frontier, total factor productivity, technical efficiency and scale economies. It also summarizes the main methods that will be covered, including least squares regression, total factor productivity indices, data envelopment analysis, and stochastic frontiers. Finally, it outlines the chapters that will be included in the course, covering production economics, measurement concepts, index number methods, data and measurement issues, and an introduction to data envelopment analysis.
The document provides an overview of the history and basics of finite element analysis (FEA). It discusses how FEA was first developed in 1943 and expanded in the following decades. The basics section describes common FEA applications, basic steps which include converting differential equations to algebraic equations, element types, boundary conditions including loads and constraints, and pre-processing, solving, and post-processing steps. Key element types are also summarized.
Plant location selection by using MCDM methods (IJERA Editor)
Plant location selection has a critical impact on the performance of manufacturing companies. The cost associated with acquiring the land and constructing the facility makes location selection a long-term investment decision. The preeminent location is that which results in higher economic benefits through increased productivity and a good distribution network. Both potential qualitative and quantitative criteria are to be considered when selecting the proper plant location from a given set of alternatives. From the literature survey, multi-criteria decision-making (MCDM) is found to be an effective approach for solving location selection problems. In the present research, an integrated decision-making methodology is designed which employs two well-known decision-making techniques, namely the Analytical Hierarchy Process (AHP) and the Preference Ranking Organization Method for Enrichment Evaluations (PROMETHEE-II), in order to make the best use of the information available, either implicitly or explicitly. AHP is used to analyze the structure of the plant location problem and to obtain weights for the selected criteria; PROMETHEE-II is employed to solve the decision-making problem with multiple conflicting criteria and alternatives.
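The AHP weighting step can be sketched as a principal-eigenvector computation on a pairwise comparison matrix; the matrix values below are assumed for illustration, not taken from the study.

```python
import numpy as np

# Hypothetical 3-criterion pairwise comparison matrix (Saaty scale):
# criterion 1 vs 2 = 3, criterion 1 vs 3 = 5, criterion 2 vs 3 = 2.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# AHP priority weights: principal eigenvector, normalized to sum to 1
vals, vecs = np.linalg.eig(A)
w = np.real(vecs[:, np.argmax(np.real(vals))])
w = w / w.sum()

# Consistency ratio check (random index RI = 0.58 for n = 3);
# CR below 0.1 means the judgments are acceptably consistent.
lam = np.real(vals).max()
cr = ((lam - 3) / (3 - 1)) / 0.58
print(np.round(w, 3), round(cr, 3))
```

The resulting weights would then feed PROMETHEE-II's preference functions to rank the candidate locations.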
Structural sizing and shape optimisation of a load cell (eSAT Journals)
This document describes a structural optimization of an S-type load cell design using a multi-objective optimization technique that incorporates reliability analysis. A reliability-related multi-factor optimization model is formulated to minimize stress, displacement, mass, and maximize reliability under a single loading case. Finite element analysis is used to model an initial S-type load cell design and evaluate structural responses. Parameter and case performance indices are developed to quantitatively evaluate and compare different load cell designs. The optimization aims to improve the overall performance index by varying design variables like width, height, and thickness to satisfy multiple objectives.
Optimization of Time Restriction in Construction Project Management Using Lin... (IJERA Editor)
This study is an attempt to identify the minimum time of a construction project using the critical path method and a linear programming model. A systematic analysis is attempted by developing a work breakdown structure for the entire project to establish work elements for quantifying various resources against time and cost. A network is established taking into consideration all the predecessor and successor activities. The network is then optimized through crashing of activities so as to obtain the optimal solution, and serves as a base for optimizing total project cost. Finally, a linear programming model is used to formulate the crashing network for minimum time, solved with LINGO and Microsoft Excel. These models take many project considerations into account, thus reducing the duration of the project. Ultimately, the software outputs and the manual calculations are compared and the best verifier is determined.
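The forward-pass part of the critical path method described above can be sketched on a small assumed activity network (durations and precedence relations are hypothetical).

```python
# Minimal CPM forward pass: activity -> (duration, predecessors)
activities = {
    "A": (3, []),
    "B": (4, ["A"]),
    "C": (2, ["A"]),
    "D": (5, ["B", "C"]),
    "E": (1, ["D"]),
}

earliest = {}
for act in activities:  # dict order is already topological here
    dur, preds = activities[act]
    start = max((earliest[p][1] for p in preds), default=0)
    earliest[act] = (start, start + dur)

project_duration = max(finish for _, finish in earliest.values())
print(project_duration)  # 13, via the critical path A-B-D-E
```

A backward pass would then give latest times and slacks (zero slack identifies the critical path), and the crashing LP minimizes cost subject to shortening exactly these critical activities.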
Power system state estimation using teaching learning-based optimization algo... (TELKOMNIKA JOURNAL)
The main goal of this paper is to formulate the power system state estimation (SE) problem as a constrained nonlinear programming problem with various constraints and boundary limits on the state variables. SE forms the heart of the entire real-time control of any power system. In a real-time environment, the state estimator consists of various modules such as observability analysis, network topology processing, SE, and bad data processing. The SE problem formulated in this work is solved using the teaching learning-based optimization (TLBO) technique. The difference between the proposed TLBO and conventional optimization algorithms is that TLBO gives a global optimum solution for the present problem. To show the suitability of TLBO for solving the SE problem, the IEEE 14-bus test system has been selected in this work. The results obtained with TLBO are also compared with the conventional weighted least squares (WLS) technique and the evolutionary particle swarm optimization (PSO) technique.
IRJET- Data Dimension Reduction for Clustering Semantic Documents using S... (IRJET Journal)
This document proposes a method called DDR SVD-FCM for clustering semantic XML documents. It uses singular value decomposition (SVD) for data dimension reduction to reduce computing time and memory requirements when clustering XML documents. After projecting XML documents to a lower dimensional space using DDR, it applies fuzzy c-means clustering (FCM) to group the documents. The method considers both the semantic and structural information of XML documents in the clustering process. Experimental results show the properties of the proposed DDR methods for XML document clustering.
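The SVD-based dimension-reduction step can be sketched with a toy document-term matrix (values assumed); a fuzzy clustering such as FCM would then run on the reduced coordinates instead of the full-width rows.

```python
import numpy as np

# Hypothetical document-term matrix: rows = documents, columns = terms
X = np.array([
    [2.0, 1.0, 0.0, 0.0],
    [3.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 4.0, 2.0],
    [0.0, 1.0, 3.0, 2.0],
])

# Truncated SVD: keep the k largest singular values to project the
# documents into a k-dimensional space before clustering.
k = 2
U, s, Vt = np.linalg.svd(X, full_matrices=False)
X_reduced = U[:, :k] * s[:k]          # document coordinates in k dims

# The rank-k reconstruction error is minimal (Eckart-Young theorem)
X_approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]
err = np.linalg.norm(X - X_approx)
print(X_reduced.shape, round(err, 3))
```

Working in k dimensions instead of the full vocabulary width is what cuts the computing time and memory the method reports; the semantic/structural encoding of the XML documents determines how the matrix X is built in the first place.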
IRJET- Optimization of Fink and Howe Trusses (IRJET Journal)
This document describes research on optimizing the weight of different truss configurations, including double fink, triple fink, modified fink, double Howe, and triple Howe trusses. The optimization problem aims to minimize weight by treating cross-sectional areas as design variables, while satisfying stress, buckling, and deflection constraints. An improved sequential linear programming technique is used to solve the optimization problem. The process involves developing a C program for load calculation, using MATLAB for truss analysis, and applying an optimizer based on improved SLP to determine optimized cross-sectional areas. A parametric study is then carried out by varying span, height, and spacing to identify the most economical truss configuration under the given conditions.
Analysis of Impact of Graph Theory in Computer Application (IRJET Journal)
This document discusses several applications of graph theory in computer science. It summarizes how graph theory is used in map coloring, mobile phone networks, computer network security, modeling ad-hoc networks, fault tolerant computing systems, and clustering web documents. Graph theory provides structural models that can represent problems in these domains and enable new algorithms and solutions. Key applications mentioned include using graph coloring for frequency assignment in mobile networks, modeling network topology for worm propagation analysis, and representing documents and their relationships as graphs for clustering. Overall, the document outlines how graph theoretical concepts and methodologies are widely utilized to solve problems in computer science research areas.
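The frequency-assignment application mentioned above reduces to graph coloring: adjacent cells (vertices) must not share a frequency (color). A greedy coloring sketch on a small hypothetical cell network:

```python
# Adjacency list of a small hypothetical cell network
graph = {
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

# Greedy coloring: give each vertex the smallest color (frequency)
# not already used by an earlier-colored neighbor.
colors = {}
for v in graph:
    used = {colors[n] for n in graph[v] if n in colors}
    colors[v] = next(c for c in range(len(graph)) if c not in used)

print(colors)  # {'A': 0, 'B': 1, 'C': 2, 'D': 0, 'E': 1}
```

Greedy coloring is a heuristic (optimal coloring is NP-hard), but it guarantees a valid assignment and at most max-degree + 1 frequencies, which is often good enough for the network problems the document surveys.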
IRJET- Review on Scheduling using Dependency Structure Matrix (IRJET Journal)
This document provides a review of scheduling methods using Dependency Structure Matrices (DSM). It begins with an introduction to DSMs, including how they are represented as binary or numerical matrices. It then discusses the four main types of DSM models (product, organization, process, and parameter) and focuses on process DSM models. The remainder of the document summarizes several research papers that proposed innovations for project scheduling using process DSM methods, including addressing task dependencies, iterations, and information flows.
The document provides information about a course on Total Quality Management taught by M. Senthil Kumar. It includes details of topics covered each week such as the seven traditional tools of quality including check sheets, flow charts, histograms etc. It also outlines the new management tools discussed like affinity diagrams, matrix diagrams and prioritization matrices. Methods of applying quality tools in industries like Toyota, Honda and IBM are presented. The document concludes with sections on benchmarking, failure mode and effect analysis.
Imprecise computing is an attractive model for digital processing at nano metric scales. Inexact computing is particularly interesting for computer arithmetic designs. This work deals about the design and analysis of two new inaccurate 4-2 compressors for utilization in a multiplier. These designs rely on different features of compression, such that imprecision in computation is measured by the error rate and the so-called normalized error distance can meet with respect to circuit-based figures of merit of a design in terms of number of transistors, delay and power consumption. The proposed approximate compressors are proposed and analyzed in Dadda multiplier. Extensive simulation results are provided and an application of the approximate multipliers to image processing is presented. The results proposed designs shows that reduced power dissipation, delay and transistor count.
Finite Element Analysis (FEA) is a numerical method for solving complex engineering problems. The document discusses conducting FEA on a fixed-free cantilever beam to study the effect of mesh density on solution accuracy. Analytical solutions are derived and used to validate FEA results. A beam model is created in ABAQUS with varying element sizes. As element count increases, FEA results converge towards analytical solutions, though with increased computation time. An element count of 4125 provided an optimal balance between accuracy and cost.
Analysis of GF (2m) Multiplication Algorithm: Classic Method v/s Karatsuba-Of...rahulmonikasharma
In recent years, finite field multiplication in GF(2m) has been widely used in various applications such as error correcting codes and cryptography. One of the motivations for fast and area efficient hardware solution for implementing the arithmetic operation of binary multiplication , in finite field GF (2m), comes from the fact, that they are the most time-consuming and frequently called operations in cryptography and other applications. So, the optimization of their hardware design is critical for overall performance of a system. Since a finite field multiplier is a crucial unit for overall performance of cryptographic systems, novel multiplier architectures, whose performances can be chosen freely, is necessary. In this paper, two Galois field multiplication algorithms (used in cryptography applications) are considered to analyze their performance with respect to parameters viz. area, power, delay, and the consequent Area×Time (AT) and Power×Delay characteristics. The objective of the analysis is to find out the most efficient GF(2m) multiplier algorithm among those considered.
This document discusses the assignment problem and provides an overview of the Hungarian algorithm for solving assignment problems. It begins by defining the assignment problem and describing it as a special case of the transportation problem. It then provides details on the Hungarian algorithm, including the key theorems and steps involved. An example problem of assigning salespeople to cities is presented and solved using the Hungarian algorithm to find the optimal assignment with minimum total cost. The document concludes that the Hungarian algorithm provides an efficient solution for minimizing assignment problems.
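The optimal-assignment idea can be made concrete with a brute-force sketch: for small cost matrices, enumerating all permutations finds the same minimum-cost assignment that the Hungarian algorithm computes in O(n^3). This is purely illustrative, not the document's worked example; the cost matrix below is invented.

```python
from itertools import permutations

def brute_force_assignment(cost):
    """Minimum-cost assignment by exhaustive search (illustration only;
    the Hungarian algorithm solves the same problem in O(n^3))."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):   # perm[i] = city given to salesperson i
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_cost:
            best_cost, best_perm = total, perm
    return best_perm, best_cost

# hypothetical cost matrix: rows = salespeople, columns = cities
cost = [[9, 2, 7],
        [6, 4, 3],
        [5, 8, 1]]
```

Here the optimum assigns salesperson 0 to city 1, 1 to city 0, and 2 to city 2, for a total cost of 9.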
IJRET : International Journal of Research in Engineering and Technology is an international peer-reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
The document provides an introduction to the finite element method (FEM). It explains that FEM is a numerical method used to approximate solutions to partial differential equations. It works by dividing a complex problem into smaller, simpler parts called finite elements. This allows for the problem to be solved computationally. The document outlines the basic steps of FEM, including preprocessing (modeling, meshing), solving, and postprocessing (analyzing results). It also discusses applications, history, software, element types, meshing, convergence, and compatibility conditions of FEM.
Index numbers are used to measure changes in economic variables over time or space. Common index numbers include the Laspeyres, Paasche, Fisher, and Törnqvist indexes. These indexes can be used to create price indexes that measure inflation, as well as quantity indexes that measure changes in output and inputs. Direct and indirect approaches can be used to calculate quantity indexes. Properties like transitivity, self-duality, and mean value are important in selecting an appropriate index number formula. Chain indexes that compare consecutive periods are commonly used for productivity measurement.
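A minimal sketch of the standard price-index formulas mentioned above, assuming parallel lists of base-period and current-period prices and quantities (the numbers in the test data are invented):

```python
def laspeyres(p0, p1, q0):
    """Laspeyres price index: current prices weighted by base-period quantities."""
    return sum(p * q for p, q in zip(p1, q0)) / sum(p * q for p, q in zip(p0, q0))

def paasche(p0, p1, q1):
    """Paasche price index: current prices weighted by current-period quantities."""
    return sum(p * q for p, q in zip(p1, q1)) / sum(p * q for p, q in zip(p0, q1))

def fisher(p0, p1, q0, q1):
    """Fisher ideal index: geometric mean of Laspeyres and Paasche."""
    return (laspeyres(p0, p1, q0) * paasche(p0, p1, q1)) ** 0.5
```

The Fisher index is "self-dual" in the sense alluded to above: the product of the Fisher price and quantity indexes equals the value ratio.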
This document proposes an efficient approach for processing subgraph matching queries with set similarity (SMS2 queries) in large graph databases. The approach uses a "filter-and-refine" framework with offline indexing and online query processing. In the filtering phase, it builds an inverted lattice index of frequent element set patterns and encodes vertices as signatures. It then applies set similarity and structure-based pruning techniques. In the refinement phase, it uses a dominating set-based subgraph matching algorithm to find matching subgraphs guided by a dominating set selection method. Experimental results show the proposed approach outperforms state-of-the-art methods by an order of magnitude.
This document discusses stochastic frontier analysis and techniques for estimating production frontiers and cost frontiers using cross-sectional and panel data. It covers distance functions, cost frontiers, decomposing cost efficiency into technical and allocative components, measuring scale efficiency, and accounting for the production environment in panel data models.
This document provides an overview and outline of topics to be covered in an economics course on production and productivity measurement. It includes informal definitions of key concepts such as production frontier, total factor productivity, technical efficiency and scale economies. It also summarizes the main methods that will be covered, including least squares regression, total factor productivity indices, data envelopment analysis, and stochastic frontiers. Finally, it outlines the chapters that will be included in the course, covering production economics, measurement concepts, index number methods, data and measurement issues, and an introduction to data envelopment analysis.
The document provides an overview of the history and basics of finite element analysis (FEA). It discusses how FEA was first developed in 1943 and expanded in the following decades. The basics section describes common FEA applications, basic steps which include converting differential equations to algebraic equations, element types, boundary conditions including loads and constraints, and pre-processing, solving, and post-processing steps. Key element types are also summarized.
Plant location selection by using MCDM methodsIJERA Editor
Plant location selection has a critical impact on the performance of manufacturing companies. The cost associated with acquiring the land and constructing the facility makes location selection a long-term investment decision. The preeminent location is the one that yields higher economic benefits through increased productivity and a good distribution network. Both potential qualitative and quantitative criteria are to be considered when selecting the proper plant location from a given set of alternatives. From the literature survey, multi-criteria decision-making (MCDM) is found to be an effective approach for solving location selection problems. In the present research, an integrated decision-making methodology is designed which employs two well-known decision-making techniques, namely the Analytical Hierarchy Process (AHP) and the Preference Ranking Organization Method for Enrichment Evaluations (PROMETHEE-II), in order to make the best use of the information available, either implicitly or explicitly. AHP is used to analyze the structure of the plant location problem and to obtain weights of the selected criteria, while PROMETHEE-II is employed to rank the alternatives under multiple conflicting criteria.
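To make the AHP weighting step concrete, here is a minimal sketch that derives criteria weights from a pairwise comparison matrix using the row geometric-mean method, a common approximation to the principal-eigenvector weights; the matrix values are hypothetical.

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP criteria weights from a pairwise comparison matrix
    using the geometric mean of each row (normalized to sum to 1)."""
    n = len(pairwise)
    gmeans = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gmeans)
    return [g / total for g in gmeans]

# hypothetical 2-criteria example: criterion 0 is judged 3x as important
weights = ahp_weights([[1.0, 3.0],
                       [1.0 / 3.0, 1.0]])
```

For this 2x2 matrix the method reproduces the exact eigenvector weights, 0.75 and 0.25; for larger matrices it is an approximation.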
Structural sizing and shape optimisation of a load celleSAT Journals
This document describes a structural optimization of an S-type load cell design using a multi-objective optimization technique that incorporates reliability analysis. A reliability-related multi-factor optimization model is formulated to minimize stress, displacement, mass, and maximize reliability under a single loading case. Finite element analysis is used to model an initial S-type load cell design and evaluate structural responses. Parameter and case performance indices are developed to quantitatively evaluate and compare different load cell designs. The optimization aims to improve the overall performance index by varying design variables like width, height, and thickness to satisfy multiple objectives.
Optimization of Time Restriction in Construction Project Management Using Lin...IJERA Editor
This study is an attempt to identify the minimum time of a construction project using the critical path method and a linear programming model. A systematic analysis is attempted by developing a work breakdown structure for the entire project to establish work elements for quantifying various resources against time and cost. A network is established taking into consideration all the predecessor and successor activities. The network is then optimized through crashing of activities so as to obtain an optimal solution, which serves as a base for optimizing total project cost. Finally, a linear programming model is used to formulate the crashing network for minimum time using LINGO and Microsoft Excel. These models take many project considerations into account, thus reducing the duration of the project. Ultimately, the outputs of both software tools are compared with the manual calculations and the best verifier is determined.
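The forward/backward pass underlying the critical path method can be sketched as follows; this is a plain-Python illustration, not the LINGO or Excel models used in the study, and the activity data are hypothetical. The sketch assumes the activity dictionary is listed in a valid topological order.

```python
def critical_path(activities):
    """CPM forward/backward pass.
    activities: {name: (duration, [predecessors])}, in topological order."""
    # forward pass: earliest start / earliest finish
    es, ef = {}, {}
    for a, (dur, preds) in activities.items():
        es[a] = max((ef[p] for p in preds), default=0)
        ef[a] = es[a] + dur
    project_time = max(ef.values())
    # backward pass: latest start / latest finish
    succs = {a: [] for a in activities}
    for a, (_, preds) in activities.items():
        for p in preds:
            succs[p].append(a)
    ls, lf = {}, {}
    for a in reversed(list(activities)):
        lf[a] = min((ls[s] for s in succs[a]), default=project_time)
        ls[a] = lf[a] - activities[a][0]
    critical = [a for a in activities if es[a] == ls[a]]  # zero total float
    return project_time, critical

# hypothetical network: durations and predecessor lists
acts = {'A': (3, []), 'B': (2, []), 'C': (4, ['A']), 'D': (1, ['B', 'C'])}
```

Crashing then shortens durations on the critical path at minimum extra cost, which is what the linear programming model formalizes.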
Power system state estimation using teaching learning-based optimization algo...TELKOMNIKA JOURNAL
The main goal of this paper is to formulate the power system state estimation (SE) problem as a constrained nonlinear programming problem with various constraints and boundary limits on the state variables. SE forms the heart of the entire real-time control of any power system. In a real-time environment, the state estimator consists of various modules such as observability analysis, network topology processing, SE, and bad data processing. The SE problem formulated in this work is solved using the teaching learning-based optimization (TLBO) technique. The difference between the proposed TLBO and conventional optimization algorithms is that TLBO gives a global optimum solution for the present problem. To show the suitability of TLBO for solving the SE problem, the IEEE 14-bus test system has been selected in this work. The results obtained with TLBO are also compared with the conventional weighted least squares (WLS) technique and the evolutionary particle swarm optimization (PSO) technique.
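A bare-bones sketch of the TLBO metaheuristic itself (teacher phase followed by learner phase), applied here to a generic box-constrained minimization rather than the paper's SE formulation; the population size, iteration count, and test function are arbitrary choices.

```python
import random

def tlbo_minimize(f, bounds, pop_size=20, iters=100, seed=1):
    """Minimal TLBO sketch: teacher phase moves learners toward the best
    solution and away from the population mean; learner phase lets pairs
    of learners pull each other toward the better of the two."""
    random.seed(seed)
    dim = len(bounds)
    clip = lambda x: [max(lo, min(hi, v)) for v, (lo, hi) in zip(x, bounds)]
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(iters):
        scores = [f(x) for x in pop]
        teacher = pop[scores.index(min(scores))]
        mean = [sum(x[j] for x in pop) / pop_size for j in range(dim)]
        for i, x in enumerate(pop):            # teacher phase
            tf = random.choice([1, 2])         # teaching factor
            cand = clip([x[j] + random.random() * (teacher[j] - tf * mean[j])
                         for j in range(dim)])
            if f(cand) < f(x):                 # greedy acceptance
                pop[i] = cand
        for i, x in enumerate(pop):            # learner phase
            k = random.randrange(pop_size)
            if k == i:
                continue
            sign = 1 if f(x) < f(pop[k]) else -1
            cand = clip([x[j] + sign * random.random() * (x[j] - pop[k][j])
                         for j in range(dim)])
            if f(cand) < f(x):
                pop[i] = cand
    best = min(pop, key=f)
    return best, f(best)
```

Note TLBO needs no algorithm-specific tuning parameters beyond population size and iterations, one reason it is attractive next to PSO.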
IRJET- Data Dimension Reduction for Clustering Semantic Documents using S...IRJET Journal
This document proposes a method called DDR SVD-FCM for clustering semantic XML documents. It uses singular value decomposition (SVD) for data dimension reduction to reduce computing time and memory requirements when clustering XML documents. After projecting XML documents to a lower dimensional space using DDR, it applies fuzzy c-means clustering (FCM) to group the documents. The method considers both the semantic and structural information of XML documents in the clustering process. Experimental results show the properties of the proposed DDR methods for XML document clustering.
IRJET- Optimization of Fink and Howe TrussesIRJET Journal
This document describes research on optimizing the weight of different truss configurations, including double fink, triple fink, modified fink, double Howe, and triple Howe trusses. The optimization problem aims to minimize weight by treating cross-sectional areas as design variables, while satisfying stress, buckling, and deflection constraints. An improved sequential linear programming technique is used to solve the optimization problem. The process involves developing a C program for load calculation, using MATLAB for truss analysis, and applying an optimizer based on improved SLP to determine optimized cross-sectional areas. A parametric study is then carried out by varying span, height, and spacing to identify the most economical truss configuration under the given conditions.
Analysis of Impact of Graph Theory in Computer ApplicationIRJET Journal
This document discusses several applications of graph theory in computer science. It summarizes how graph theory is used in map coloring, mobile phone networks, computer network security, modeling ad-hoc networks, fault tolerant computing systems, and clustering web documents. Graph theory provides structural models that can represent problems in these domains and enable new algorithms and solutions. Key applications mentioned include using graph coloring for frequency assignment in mobile networks, modeling network topology for worm propagation analysis, and representing documents and their relationships as graphs for clustering. Overall, the document outlines how graph theoretical concepts and methodologies are widely utilized to solve problems in computer science research areas.
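The frequency-assignment application can be illustrated with a greedy vertex-coloring sketch: adjacent towers (vertices) must receive different frequencies (colors). Greedy coloring is not optimal in general, and the network below is invented purely for illustration.

```python
def greedy_coloring(adjacency):
    """Assign each vertex the smallest color (frequency) not already used
    by a colored neighbor. Illustrative sketch, not a chromatic-number
    solver; adjacency maps vertex -> list of neighbors."""
    colors = {}
    for v in adjacency:
        used = {colors[u] for u in adjacency[v] if u in colors}
        c = 0
        while c in used:
            c += 1
        colors[v] = c
    return colors

# hypothetical tower network: A, B, C mutually interfere; D only with C
net = {'A': ['B', 'C'], 'B': ['A', 'C'], 'C': ['A', 'B'], 'D': ['C']}
```

The triangle A-B-C forces three frequencies, while D can reuse one of them.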
IRJET- Review on Scheduling using Dependency Structure MatrixIRJET Journal
This document provides a review of scheduling methods using Dependency Structure Matrices (DSM). It begins with an introduction to DSMs, including how they are represented as binary or numerical matrices. It then discusses the four main types of DSM models (product, organization, process, and parameter) and focuses on process DSM models. The remainder of the document summarizes several research papers that proposed innovations for project scheduling using process DSM methods, including addressing task dependencies, iterations, and information flows.
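A process DSM's basic sequencing step can be sketched as follows, assuming a binary DSM with a zero diagonal and no coupled (cyclic) blocks; the task names and dependencies are hypothetical.

```python
def sequence_tasks(dsm, names):
    """Re-sequence a binary process DSM so each task follows its inputs.
    dsm[i][j] == 1 means task i needs input from task j (zero diagonal).
    Coupled blocks (cycles) would need tearing; this sketch assumes an
    acyclic DSM and fails with StopIteration otherwise."""
    n = len(names)
    remaining = set(range(n))
    order = []
    while remaining:
        # pick the first task whose every required input is already scheduled
        ready = next(i for i in sorted(remaining)
                     if all(dsm[i][j] == 0 or j not in remaining
                            for j in range(n)))
        order.append(names[ready])
        remaining.remove(ready)
    return order

# hypothetical process DSM: analysis needs design, test needs analysis
dsm = [[0, 0, 0],
       [1, 0, 0],
       [0, 1, 0]]
names = ['design', 'analysis', 'test']
```

Re-sequencing moves marks below the diagonal, so information flows forward; entries that cannot be moved below the diagonal indicate iteration loops.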
IRJET- Optimum Design of Fan, Queen and Pratt TrussesIRJET Journal
The document discusses the optimum design of various truss configurations including fan, queen, and Pratt trusses. The author formulates the design of these trusses as an optimization problem to minimize weight. The problem considers stress, buckling, and deflection constraints with cross-sectional area as the design variable. A sequential linear programming technique is used to solve the optimization problem. Parametric studies are performed to understand the effect of truss geometry including span, purlin spacing, truss spacing, and height on selecting the best design. The results of optimizing double fan, triple fan, modified queen, double Pratt, and triple Pratt trusses are presented and discussed.
Role of Simulation in Deep Drawn Cylindrical PartIJSRD
Simulation is widely used in the forming industry due to its speed and lower cost, and it has proven effective in predicting formability and spring-back behavior. The purpose of finite element simulation in the sheet metal forming process is to minimize the time and cost of the design phase by predicting key outcomes such as the final shape of the part, the possibility of various defects, and the flow of material. Such simulation is most useful and efficient when it is performed in the early stage of design by designers, rather than by analysis specialists after the detailed design is complete. The accuracy of such simulation depends on knowledge of material properties, boundary conditions, and processing parameters. In industry today, numerical sheet metal forming simulation is a very important tool for reducing lead time and improving part quality. In this paper, a finite element model for the deep drawing of cylindrical cups is constructed, simulation results are obtained using different simulation parameters (punch velocity, coefficient of friction, and blank holder force) on the FE mesh elements, and these results are compared with experimental work.
Mathematical Analysis of Half Volume DRA with Performance Evaluation for High...rahulmonikasharma
A novel, efficient design analysis has been proposed for a reduced-size dielectric resonator antenna (DRA). This has been done by using a half-volume resonator with a short-circuited plane of symmetry. The design serves as a new configuration of planar antenna, thus making integration of active devices easier. The antenna shows remarkable performance, having low cross-polarization levels and reasonably good radiation patterns. The antenna design has been simulated using a new analytical framework created in MATLAB, giving accurate measurement details.
This document presents a finite element analysis of a stepped bar subjected to axial loading using MATLAB and ANSYS. It begins with an introduction to finite element analysis and describes how MATLAB and ANSYS can be used to model and analyze engineering problems. The document then outlines the specific procedure used to analyze a stepped bar, including defining the problem, developing the analytical solution, and determining displacements and stresses at nodes. The results obtained from MATLAB and finite element analysis are shown to be similar, while the ANSYS results are also close. The document concludes that these analysis methods allow solving problems efficiently and with less error compared to manual calculations.
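For an axially loaded stepped bar, the direct stiffness method reduces to a small linear system, which is what MATLAB or ANSYS assembles internally. Below is a hand-rolled two-element sketch (consistent units assumed; the numbers in the test are invented, not the document's data).

```python
def stepped_bar_displacements(E, lengths, areas, P):
    """Direct stiffness method for a two-element stepped bar, fixed at
    node 0 with axial load P at the free end (node 2).
    Each element contributes k = E*A/L to the global stiffness matrix:
        [k1+k2  -k2] [u1]   [0]
        [ -k2    k2] [u2] = [P]
    Both elements carry the full force P, so the solution matches the
    analytical PL/AE per segment."""
    k1 = E * areas[0] / lengths[0]
    k2 = E * areas[1] / lengths[1]
    u1 = P / k1           # from row 1 after substituting row 2
    u2 = u1 + P / k2      # from row 2: k2*(u2 - u1) = P
    return u1, u2
```

With E = 200e3, segment lengths 100 and 100, areas 200 and 100, and P = 10e3, the nodal displacements are 0.025 and 0.075, matching the hand calculation P*L/(A*E) per segment.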
IRJET- Effect and Optimization of Laser Beam Machining Parameters using Taguc...IRJET Journal
This document reviews research on optimizing parameters for laser beam machining (LBM) using Taguchi and grey relational analysis methods. It summarizes several past studies that aimed to optimize LBM quality characteristics like kerf width, surface roughness, heat affected zone, etc. by adjusting parameters like laser power, cutting speed, gas pressure, focal position. The document outlines the Taguchi method approach of using orthogonal arrays and signal-to-noise ratios to identify optimal parameter settings. It also describes grey relational analysis and its steps of preprocessing data, calculating grey relational coefficients and grades, and analyzing results to select optimal parameters. Tables show examples of optimized cutting speeds for different materials and thicknesses from previous works.
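Two of the computations the reviewed studies rely on can be sketched directly: the Taguchi smaller-the-better signal-to-noise ratio and grey relational coefficients for normalized responses. These are the standard textbook formulas, not tied to any specific study's data.

```python
import math

def sn_smaller_the_better(values):
    """Taguchi S/N ratio for smaller-the-better responses (e.g. kerf
    width, surface roughness): -10*log10(mean of squared responses)."""
    return -10 * math.log10(sum(v * v for v in values) / len(values))

def grey_relational_coefficients(normalized, zeta=0.5):
    """Grey relational coefficients for responses already normalized to
    [0, 1], where 1 is ideal; zeta is the distinguishing coefficient."""
    deltas = [1.0 - x for x in normalized]          # deviation from ideal
    dmin, dmax = min(deltas), max(deltas)
    return [(dmin + zeta * dmax) / (d + zeta * dmax) for d in deltas]
```

Averaging the grey relational coefficients of all responses per experiment gives the grey relational grade used to rank parameter settings.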
Effect and optimization of machining parameters on cutting force and surface ...eSAT Journals
Abstract Productivity and the quality of machined parts are the main challenges of the metal cutting industry during the turning process. Therefore, cutting parameters must be chosen and optimized in such a way that the required surface quality can be controlled. Hence, statistical design of experiments (DOE) and statistical/mathematical models are used extensively for optimization. The present investigation was carried out to study the effect of cutting parameters (cutting speed, depth of cut, and feed) in the turning of mild steel and aluminum, to achieve better surface finish and to reduce power requirements by reducing the cutting forces involved in machining. The experimental layout was designed based on the 2^k factorial technique, analysis of variance (ANOVA) was performed to identify the effect of cutting parameters on surface finish, and models for the cutting forces were developed using multiple regression analysis. The coefficients were calculated by regression analysis and the model was constructed, then tested for adequacy at the 95% confidence level. Using the mathematical model, the main and interaction effects of the various process parameters on turning were studied. Keywords: Universal Lathe, Surface Finish, Cutting Force, High Speed Steel Tool, Factorial Technique.
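The 2^k factorial analysis mentioned above estimates each factor's main effect as the difference between the average responses at its high and low levels; here is a minimal sketch with coded -1/+1 levels and invented data.

```python
def main_effect(runs, responses, factor):
    """Main effect of one factor in a 2^k factorial design: mean response
    at the high (+1) level minus mean response at the low (-1) level.
    runs is a list of coded level tuples, one per experimental run."""
    hi = [y for run, y in zip(runs, responses) if run[factor] == +1]
    lo = [y for run, y in zip(runs, responses) if run[factor] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

# hypothetical 2^2 design (e.g. speed and feed) with invented responses
runs = [(-1, -1), (+1, -1), (-1, +1), (+1, +1)]
ys = [10.0, 14.0, 12.0, 16.0]
```

Interaction effects are computed the same way using the product of the coded columns as the "factor".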
IRJET- Advanced Control Strategies for Mold Level ProcessIRJET Journal
This document discusses advanced control strategies for mold level process in continuous casting of steel. It proposes using a fuzzy controller to help control mold level during abnormal conditions like nozzle clogging/unclogging, which can be difficult for a proportional-integral-derivative controller to handle alone. The fuzzy controller design is based on operator expertise. Image processing techniques are also used to monitor mold level in real-time, including image segmentation using fuzzy clustering and classification using support vector machines. Simulation and online implementation results indicate the effectiveness of these advanced control methods in maintaining stable mold level during transient disturbances in the continuous casting process.
Final Year Project Report. (Management of Smart Electricity Grids)Jatin Pherwani
This report documents my progress on the final-year Design Project over one half of the semester, covering the design process and research findings along with a few rough concepts.
IJERA (International Journal of Engineering Research and Applications) is an international online, ... peer-reviewed journal. For more details or to submit your article, please visit www.ijera.com
MULTIOBJECTIVE OPTIMIZATION AND QUANTITATIVE TRADE-OFF ANALYSIS IN XEROGRAPHI...Sudhendu Rai
A complex engineering system such as a xerographic marking engine is an aggregate of interacting subsystems that are coupled through a large number of constraints and design variables. The traditional way of designing these systems is to decouple the overall design into smaller subsystems and assign teams to work on these subsystems. This approach is critical to making the project manageable and enabling concurrent development. However, if the goal is to design systems that can deliver best possible performance, i.e. if the performance limits are being pushed to the extreme, characterizing the interactions becomes critical.
Multiobjective optimization is a design methodology that addresses the issue of designing large systems where the goal is to simultaneously optimize a finite number of performance criteria that come from one or more disciplines and are coupled through a set of design variables and constraints. This approach to design makes explicit and quantitative the inherent trade-offs that need to be made in doing coupled system design. It also enables the determination of the attainable limits of performance from a given system.
This paper will discuss the multiobjective optimization methodology and optimal methods of performing quantitative trade-off analysis. These design methods will be applied to problems from the xerographic design domain and results will be presented.
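The quantitative trade-off analysis rests on the notion of a Pareto (non-dominated) front; a minimal sketch for two minimization objectives, with invented points:

```python
def dominates(q, p):
    """q dominates p if q is no worse in every objective and strictly
    better in at least one (minimization)."""
    return (all(qi <= pi for qi, pi in zip(q, p)) and
            any(qi < pi for qi, pi in zip(q, p)))

def pareto_front(points):
    """Non-dominated subset of a list of objective vectors; the front
    makes the trade-off between the objectives explicit."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# hypothetical (objective1, objective2) design evaluations
designs = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
```

Any design off the front, such as (3, 4) here, can be improved in one objective without sacrificing the other, which is exactly what multiobjective optimization makes visible.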
Conceptual Fixture Design Method Based On Petri NetIJRES Journal
Fixture conceptual design is based on an analysis of the required functions and requirements. After establishing a functional model of the fixture structure, the functions are decomposed and allocated to generate the design constraints for the fixture layout and structure; finally, a reasonable fixture function structure diagram is produced as output. Conceptual design is the key stage of the whole fixture design process, largely determining the technical and economic benefits of the final design. This paper studies the Petri net model and simulation method and applies it to fixture conceptual design. Taking the design of a brace fixture as an example, the Petri net simulation software CPN Tools is used to model and simulate the technical process of fixture design. The results demonstrate that Petri nets work well for fixture conceptual design.
This document discusses stiffness analysis of flexible parallel manipulators. It presents the forward kinematics analysis of a 3-RRR planar parallel manipulator using genetic algorithms and neural networks to minimize positional errors. The Jacobian and platform stiffness matrices are evaluated within the defined workspace. A pseudo-rigid body model with frame and plate elements is used to analyze the flexible link configuration and compliant joints. Stiffness indices are obtained and validated using commercial finite element software. The methodology is illustrated for a 3-RRR flexible parallel manipulator to study the effects of link flexibility and joint compliance on stiffness.
This document discusses the digital circuit layout problem and approaches to solving it using graph partitioning techniques. It begins by introducing the digital circuit layout problem and how it has become more complex with increasing circuit sizes. It then discusses how the problem can be decomposed into subproblems using graph partitioning to assign geometric coordinates to circuit components. The document reviews several traditional approaches to solve the problem, such as the Kernighan-Lin algorithm, and discusses their limitations for larger circuit sizes. It also discusses more recent approaches using evolutionary algorithms and concludes by analyzing the contributions of various approaches.
This document summarizes research on using graph partitioning techniques to solve digital circuit layout problems. It discusses how the digital circuit layout problem is a constrained optimization problem that is NP-hard. It then reviews previous work on using techniques like min-cut bipartitioning, multi-way partitioning algorithms, and spectral graph partitioning to solve the problem. The document concludes by analyzing evolutionary approaches that have been used, including genetic algorithms, memetic algorithms, ant colony optimization, and particle swarm intelligence. It finds that these approaches are dependent on representation and initialization but can produce quality solutions for small circuits.
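The swap-based idea behind Kernighan-Lin refinement can be sketched as a greedy pass that repeatedly exchanges the vertex pair reducing the cut the most; real KL uses gain bookkeeping and tentative swap sequences, so this is only an illustration on an invented graph.

```python
def cut_size(edges, part):
    """Number of edges crossing the partition (part maps vertex -> 0/1)."""
    return sum(1 for u, v in edges if part[u] != part[v])

def improve_bisection(edges, part):
    """Greedy KL-style refinement: while some swap of one vertex per side
    strictly reduces the cut, apply the best such swap."""
    part = dict(part)
    improved = True
    while improved:
        improved = False
        left = [v for v in part if part[v] == 0]
        right = [v for v in part if part[v] == 1]
        base = cut_size(edges, part)
        best = None
        for a in left:
            for b in right:
                part[a], part[b] = 1, 0          # tentative swap
                gain = base - cut_size(edges, part)
                if gain > 0 and (best is None or gain > best[0]):
                    best = (gain, a, b)
                part[a], part[b] = 0, 1          # undo
        if best:
            _, a, b = best
            part[a], part[b] = 1, 0              # commit best swap
            improved = True
    return part

# 4-cycle with a deliberately bad initial bisection (cut = 4)
edges = [(1, 2), (2, 3), (3, 4), (4, 1)]
initial = {1: 0, 3: 0, 2: 1, 4: 1}
```

One swap brings the cut from 4 down to the optimum of 2 for this graph; because only strict improvements are accepted, the loop always terminates.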
IRJET- Kinematic Analysis of Planar and Spatial Mechanisms using MatpackIRJET Journal
This document discusses kinematic analysis of planar and spatial mechanisms using computational methods in MATLAB. It develops a MATLAB package called MATPACK for numerical analysis of planar and spatial mechanisms. It uses vector notation to analyze planar mechanisms and Denavit-Hartenberg parameters to analyze spatial mechanisms. Results for velocities and accelerations are obtained from MATPACK and compared to theoretical results. The objective is to introduce existing notation and methods to analyze spatial mechanisms using computational tools like MATPACK, AutoCAD, and CATIA.
An Analysis of Energy Efficient Gaussian Filter ArchitecturesIRJET Journal
This document reviews and compares different energy efficient Gaussian filter architectures. It summarizes:
1) Existing architectures approximate kernel coefficients, change window size, modify architecture, or use approximate arithmetic to reduce computation and improve energy efficiency.
2) Fixed-point and digital approximated kernels reduce implementation complexity by approximating coefficients. Low complexity filters exploit neighboring pixel similarity. Approximate filters use approximate adders.
3) Reconfigurable designs like ES-GSF achieve quality-energy tradeoffs by computing pixel values using only significant kernel boundaries based on quality and energy constraints.
4) Simulations show approximated kernels reduce quality metrics like PSNR compared to accurate filters but provide improved energy efficiency, while reconfigurable designs let the quality-energy trade-off be adjusted to the given constraints.
Similar to Optimization of Surface Roughness Index for a Roller Burnishing Process Using Graph Theory and Matrix Method (20)
Bangladesh Economic Review 2024 [Bangladesh Economic Review 2024 Bangla.pdf] — the complete Bangla e-book or PDF with computer, tablet, and smartphone versions; the table of contents includes a bookmark menu and a hyperlink menu.
A very important book for all of us: it is a key subject for the BCS, bank, and university admission examinations and any other competitive examination, and it also contains the most recent data and statistics on Bangladesh.
So, as a citizen, this is information you need to know.
It is useful for the written examinations of the BCS and banks, and will also be of great help to secondary and higher secondary students.
Reimagining Your Library Space: How to Increase the Vibes in Your Library No ...Diana Rendina
Librarians are leading the way in creating future-ready citizens – now we need to update our spaces to match. In this session, attendees will get inspiration for transforming their library spaces. You’ll learn how to survey students and patrons, create a focus group, and use design thinking to brainstorm ideas for your space. We’ll discuss budget friendly ways to change your space as well as how to find funding. No matter where you’re at, you’ll find ideas for reimagining your space in this session.
A review of the growth of the Israel Genealogy Research Association Database Collection over the last 12 months. Our collection has now passed the 3 million mark and is still growing. See which archives have contributed the most, the different types of records we have, and which years have had records added. You can also see what we have planned for the future.
LAND USE LAND COVER AND NDVI OF MIRZAPUR DISTRICT, UPRAHUL
This dissertation explores the particular circumstances of Mirzapur, a region located in the core of India. Mirzapur, with its varied terrains and abundant biodiversity, offers an optimal environment for investigating changes in vegetation cover dynamics. Our study utilizes advanced technologies such as GIS (Geographic Information Systems) and remote sensing to analyze the transformations that have taken place over the course of a decade.

The complex relationship between human activities and the environment has been the focus of extensive research and concern. As the global community grapples with swift urbanization, population expansion, and economic progress, the effects on natural ecosystems are becoming more evident. A crucial element of this impact is the alteration of vegetation cover, which plays a significant role in maintaining the ecological equilibrium of our planet.

Land serves as the foundation for all human activities and provides the necessary materials for these activities. As the most crucial natural resource, its utilization by humans results in different 'land uses,' which are determined by both human activities and the physical characteristics of the land.

The utilization of land is impacted by human needs and environmental factors. In countries like India, rapid population growth and the emphasis on extensive resource exploitation can lead to significant land degradation, adversely affecting the region's land cover.

Therefore, human intervention has significantly influenced land use patterns over many centuries, evolving their structure over time and space. In the present era, these changes have accelerated due to factors such as agriculture and urbanization. Information regarding land use and cover is essential for various planning and management tasks related to the Earth's surface, providing crucial environmental data for scientific, resource management, and policy purposes, and for diverse human activities.

Accurate understanding of land use and cover is imperative for the development planning of any area. Consequently, a wide range of professionals, including earth system scientists, land and water managers, and urban planners, are interested in obtaining data on land use and cover changes, conversion trends, and other related patterns. The spatial dimensions of land use and cover support policymakers and scientists in making well-informed decisions, as alterations in these patterns indicate shifts in economic and social conditions. Monitoring such changes with the help of advanced technologies like remote sensing and Geographic Information Systems is crucial for coordinated efforts across different administrative levels.

Changes in vegetation cover refer to variations in the distribution, composition, and overall structure of plant communities across different temporal and spatial scales. These changes can occur naturally.
Optimization of Surface Roughness Index for a Roller Burnishing Process Using Graph Theory and Matrix Method
ISSN 2393-8471
International Journal of Recent Research in Civil and Mechanical Engineering (IJRRCME)
Vol. 2, Issue 1, pp: (225-231), Month: April 2015 – September 2015, Available at: www.paperpublications.org
Page | 225
Paper Publications
Optimization of Surface Roughness Index for a
Roller Burnishing Process Using Graph Theory
and Matrix Method
¹Mr. Yudhvir Singh, ²Mr. Neeraj
Abstract: The main objective of this dissertation is to optimize the roughness index for a roller burnishing process. The factors have been identified from a literature review. These barriers may be of market, cultural, human resource management, financial, economic, attitudinal, environmental, geographical and technological type. Technology transfer barriers threaten the movement of physical structures, knowledge, skills, organizational values and capital from developed to developing countries. A clear understanding of these barriers may help practitioners find ways to deal with them, which may further facilitate successful implementation of technology transfer. Technology transfer may be said to be successful if the transferor (seller) and the transferee (buyer) can effectively utilize the technology for business gain. The transfer involves costs and expenditures that should be agreed upon by the transferee and transferor. The process is affected by various factors that hinder technology transfer; these factors are named barriers. In the present work, Interpretive Structural Modeling (ISM) is used for the analysis and comparison of the various factors important for technology transfer. The important parameters are identified and a self-interaction matrix is proposed with the help of ISM, which evaluates the inhibiting power of these parameters. This index can be used in the comparison of the different factors responsible for roller burnishing processes.
Keywords: Roller Burnishing Processes, important parameters, Interpretive Structural Modeling (ISM).
1. INTRODUCTION
Roller burnishing process:
Roller Burnishing (RB) is a chipless finishing process for surface engineering. It is a cold-working surface treatment in which plastic deformation of surface irregularities occurs by exerting pressure through a very hard and smooth roller on a surface, generating a uniform, work-hardened surface. Burnishing is an attractive finishing technique because it can increase the workpiece surface strength while decreasing its surface roughness.
Advantages of the roller burnishing process:
Improved metallurgical properties: a work-hardened surface and increased fatigue strength.
No additional machine investment: the tool can be attached to any standard machine tool already present in the shop.
Long tool life, no operator skill required, low torque and power requirements, and maximum part interchangeability.
Characteristic features of the burnishing process:
External burnishing can employ either a ball or a roller, but an internal surface requires roller burnishing. Roller burnishing tools consist of a series of tapered, highly polished and hardened rolls positioned in slots within a retaining cage. The tool is adjustable within the workpiece to develop a pressure that exceeds the yield point of the work material. Practically any metal with a hardness up to Rc 40 can be easily burnished. Harder materials require still harder roller materials such as carbide [2], ceramic [3] or diamond [4, 5, 6], though ductility and malleability render a material easier to burnish. Brittle materials like cast iron have also been burnished [7] because they can withstand compressive force, though the mechanism may be different for such materials and the effectiveness is lower. The burnishing allowance generally depends on the size and ductility of the work, as does the final result. A low-carbon steel of 40 to 50 mm diameter can be given a burnishing allowance of 0.01 to 0.02 mm and can be expected to produce a finish of 0.1 to 0.2 µm Ra [8, 9].
2. GRAPH THEORETIC APPROACH
The graph theoretic approach (GTA) is a systematic and logical approach that is applied in various disciplines [10, 11, 12]. Conventional representations such as block diagrams, cause-and-effect diagrams and flowcharts do not depict interactions among factors, are not suitable for further analysis, and cannot be processed or expressed in mathematical form. The following features highlight the uniqueness of this approach over similar approaches:
It yields a single numerical index for all the parameters.
It is a systematic methodology for converting qualitative factors to quantitative values, and its mathematical modelling gives the proposed technique an edge over conventional methods.
It permits modelling of the interdependence of the parameters under consideration.
It allows visual analysis and computer processing.
It leads to self-analysis and comparison of different organisations.
Main Elements of GTA:
1. The digraph representation
2. The matrix representation
3. The permanent function representation.
The graph theoretical methodology consists of three steps: digraph representation, matrix representation and permanent function representation. A digraph is used to represent the structure of the system in terms of nodes and edges, wherein the nodes represent the measures of the characteristics and the edges correspond to the dependence among characteristics. The matrix representation is a one-to-one representation of the digraph. The permanent function representation is the mathematical expression of the characteristics and their interdependence. Digraph, matrix and permanent function representations are developed for the quality, cost, reliability and efficiency characteristics. The elements of the diagram are syntax symbols, and the diagram itself is considered to be a sentence which describes the system. A graph is defined by an ordered pair G = (S, c), where S is the vertex set and c the edge set, and every edge is defined by its two end vertices. If each edge in the graph has a direction, the graph is known as a directed graph, or digraph. If the digraph is a network graph, each edge and each vertex have the properties of flow and potential in the system, respectively.
An engineering system is usually represented as a diagram, with nodes, lines, and words or numbers which assign values to some or all of the nodes or lines. For instance, it is common to use a diagram to show a truss, a mechanism, a gear system, an electrical circuit, or a mass-spring-dashpot oscillator. Following mathematical tradition, the work should be done in two parts.
1. Check whether the problem is well defined and hence solvable. In the language of logic, check that the syntax of the engineering system being dealt with, in other words its diagrammatic representation, is a well-formed formula (WFF). This question is usually dealt with in a cursory fashion in engineering work: attention is immediately focused on the numerical mathematical formula to be applied to a problem, with no systematic attention paid to whether the system is correctly defined. For instance, the rule often used to determine whether a truss structure is just-stiff is not a complete solution to that question; however, when the truss is viewed as a graph, this question has a proven algorithmic solution.
2. Before proceeding into the analysis effort, ensure that the solution will require as low a computational effort as possible. This is of less practical importance for small systems, but is very important for large systems with many components. This subject is usually handled by heuristic rules of thumb, known as expert domain knowledge. When the graph theory representation of the engineering system is used, it can often be handled using mathematically proven algorithms of graph theory rather than man-made heuristic rules.
3. MATRIX REPRESENTATION
The digraph representation is very suitable for visual analysis but not for computer processing. Moreover, if the system is large, its corresponding graph is complex, which makes visual understanding difficult. In view of this, it is necessary to develop a representation of the system that can be understood, stored, retrieved and processed by a computer in an efficient manner. Many matrix representations, for example the adjacency and incidence matrices, are available in the literature. The adjacency matrix is a square matrix and is selected for this purpose; its multinomial has been derived based on connectivity only, neglecting the directional properties. The matrix for n = 6 is shown below:
        | B1   0    b13  0    0    0   |
        | b21  B2   b23  b24  b25  b26 |
H =     | 0    0    B3   b34  b35  b36 |
        | 0    0    0    B4   0    b46 |
        | 0    0    0    b54  B5   b56 |
        | 0    0    b63  0    0    B6  |
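The connectivity pattern of H can also be stored and queried programmatically. The sketch below takes only the 0/1 structure from the matrix above (the b_ij values themselves remain symbolic) and computes each factor's digraph out-degree:

```python
# 0/1 connectivity pattern of the matrix H: C[i][j] = 1 means factor i+1
# influences factor j+1 (the diagonal marks the factors B1..B6 themselves).
C = [
    [1, 0, 1, 0, 0, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 0, 1, 1, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 1, 0, 0, 1],
]

# Off-diagonal interactions each factor exerts (out-degree in the digraph).
out_degree = [sum(row) - 1 for row in C]
print(out_degree)  # → [1, 5, 3, 1, 2, 1]
```

Here factor 2 influences all five other factors, which is exactly the kind of structural information a visual digraph conveys but a computer can only extract from the matrix form.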
In an undirected graph, no direction is assigned to the edges, whereas directed graphs (digraphs) have directional edges. An undirected graph has a symmetric adjacency matrix and therefore has real eigenvalues (the multiset of which is called the graph's spectrum) and a complete set of orthonormal eigenvectors. While the adjacency matrix depends on the vertex labeling, its spectrum is a graph invariant. For loops, undirected graphs often use the convention of counting each loop twice in the adjacency matrix, whereas directed graphs typically count each loop once. There exists a unique adjacency matrix for each isomorphism class of graphs (up to permuting rows and columns), and it is not the adjacency matrix of any other isomorphism class. The main diagonal of every adjacency matrix corresponding to a graph without loops has all zero entries. The relationship between a graph and the eigenvalues and eigenvectors of its adjacency matrix is studied in spectral graph theory. If A is the adjacency matrix of a directed or undirected graph S, then the matrix A^n (the matrix product of n copies of A) has an interesting interpretation: the entry in row i and column j gives the number of walks of length n from vertex i to vertex j. This implies, for example, that the number of triangles in an undirected graph S is exactly the trace of A^3 divided by 6.
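The walk-counting property of A^n can be checked on a toy example. The sketch below uses a hypothetical 4-vertex undirected graph (a triangle 0-1-2 plus a pendant vertex 3), not any data from this study:

```python
# Verify that entry (i, j) of A^n counts walks of length n, and that
# trace(A^3) / 6 counts triangles, on a small undirected graph.
def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Adjacency matrix: edges 0-1, 0-2, 1-2 (a triangle) and 2-3 (pendant).
A = [
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
]

A2 = mat_mul(A, A)
A3 = mat_mul(A2, A)

walks_0_to_2 = A2[0][2]  # walks of length 2 from vertex 0 to vertex 2

# Each triangle contributes 6 closed walks of length 3
# (3 starting vertices x 2 directions), hence the division by 6.
triangles = sum(A3[i][i] for i in range(4)) // 6
print(walks_0_to_2, triangles)  # → 1 1
```

The single length-2 walk from 0 to 2 is 0-1-2 (the direct edge 0-2 has length 1 and is not counted), and the one triangle found is 0-1-2, as expected.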
4. OBJECTIVE
The objectives of the present research work are as follows:
Identification of important parameters affecting the surface roughness in a roller burnishing process.
Analysis of the effect of individual parameter on the surface roughness value.
Identification of the most critical parameter affecting the surface roughness value.
Comparison of different roller burnishing processes.
5. IDENTIFICATION OF PERTINENT ATTRIBUTES AND THE FACTORS
(i) Feed direction:
The feed direction is the direction in which the tool approaches the workpiece. There are two types: (a) along and (b) counter.
a) Along: the tool moves in the same direction as the workpiece.
b) Counter: the tool and workpiece move in opposite directions.
From the literature, it was found that the surface roughness improves more with the counter feed direction.
(ii) Workpiece material:
From the literature, the roller burnishing process has been applied to different materials. RB has a positive effect on the surface roughness of O1 alloy steel; the improvement in surface quality was 12.5% [4].
(iii) Spindle speed:
The spindle speed is the rotational frequency of the spindle of the machine, measured in revolutions per minute (RPM). The preferred speed is determined by working backward from the desired surface speed (m/min) and incorporating the diameter of the workpiece or tool. Excessive spindle speed causes premature tool wear, breakages, and tool chatter, all of which can lead to potentially dangerous conditions. Using the correct spindle speed for the material and tools greatly enhances tool life and the quality of the surface finish. In general, an increase in burnishing speed up to about 100 m/min leads to a decrease in mean roughness; with a further increase in burnishing speed, mean roughness gradually increases. The deterioration of the surface roughness at high burnishing speeds is believed to be caused by chatter, which results in instability of the burnishing tool across the workpiece surface. In addition, the poorer finish at high speeds can be attributed to the reduced deforming action of the rollers and to the lubricant losing its effect, since there is insufficient time for it to penetrate between the roller and the workpiece surface. It is therefore better to select low speeds, at which the deforming action of the burnishing tool is greater and metal flow is regular.
(iv) Feed rate:
The feed rate is the velocity at which the tool is fed, that is, advanced against the workpiece; for the burnishing process it is expressed in units of distance per revolution. The feed rate depends on:
the surface finish desired,
the power available at the spindle,
the rigidity of the machine and tooling setup, and
the strength of the workpiece.
The surface roughness increases with increasing feed rate up to a certain point (0.06 mm/rev) and then begins to decrease; it is also observed that the greatest reduction in surface roughness occurs at very low burnishing feed rates [5].
(v) Number of passes:
The number of burnishing tool passes is an important parameter for the surface roughness and hardness of burnished components. The surface finish can be improved by increasing the burnishing force or the number of passes up to a certain limit; beyond that limit the surface starts to deteriorate, as the metal is over-work-hardened by the plastic deformation caused by the excessive burnishing force or number of tool passes.
(vi) Coolant:
A coolant is a fluid which flows through a device to prevent overheating, transferring the heat produced by the device to other devices that use or dissipate it. An ideal coolant has high thermal capacity and low viscosity, is low-cost, non-toxic, and chemically inert, and neither causes nor promotes corrosion of the cooling system.
6. METHODOLOGY
The graph theoretic approach evaluates the permanent qualitative index of roller burnishing as a single numerical index, which takes into account the individual effect of the various surface roughness parameters and their interdependencies while analysing and evaluating the roller burnishing surface roughness (RBSR). The steps of the proposed approach are listed below:
1. Identify the various parameters of RB affecting the surface roughness of the workpiece.
2. Develop a digraph among the RBSR parameters based on the interactions among them.
3. Develop a performance parameter matrix on the basis of the digraph developed in step 2. This will be of size M × M, with diagonal elements representing surface roughness parameters and off-diagonal elements representing interactions among them.
4. Develop a variable permanent matrix (VPM-RB).
5. Find the value of the permanent function for the surface roughness parameters of RB.
6. Compare different roller burnishing processes in terms of the surface roughness permanent function obtained in step 5.
7. Document the results for future analysis and reference.
Based on the methodology discussed above, an organization can evaluate the extent of the parameters present in roller burnishing.
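Step 5 above amounts to computing a matrix permanent. A minimal sketch, using the definition directly (a sum over all permutations of products of entries, i.e. a determinant with every term taken positively) and a purely hypothetical 6 × 6 variable permanent matrix whose diagonal scores and interaction values are illustrative, not the actual data of this study:

```python
from itertools import permutations

def permanent(M):
    """Permanent of a square matrix: sum over all permutations p of
    the products M[0][p0] * M[1][p1] * ... (O(n * n!) by brute force)."""
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= M[i][p[i]]
        total += prod
    return total

# Hypothetical VPM: diagonal entries are parameter scores, off-diagonal
# entries are interaction strengths (0 = no direct interaction).
V = [
    [5, 0, 2, 0, 0, 0],
    [3, 6, 2, 3, 2, 1],
    [0, 0, 4, 2, 3, 2],
    [0, 0, 0, 7, 0, 1],
    [0, 0, 0, 2, 3, 1],
    [0, 0, 1, 0, 0, 5],
]

print(permanent(V))  # single numerical index for this parameter set
```

For larger matrices, Ryser's formula reduces the cost to O(2^n * n), but for the 6 × 6 matrices used in this method the brute-force expansion is adequate.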
7. RESULTS AND DISCUSSIONS
The graph theory and matrix method is a multi-attribute decision making (MADM) method which has been successfully applied to the roller burnishing process.
By applying the graph theory and matrix method, we have identified the critical parameter affecting the surface roughness of the roller burnishing process. In the present work, the number of passes is the most critical parameter, with a maximum variable permanent function value of 85725.
The graph theory and matrix method can be used to select an improved process from among various roller burnishing processes by comparing the variable permanent functions of the different processes.
The method also arranges the parameters according to their relative importance in the process. The sequence of parameters in our work is: number of passes, feed direction, workpiece material, feed rate, coolant, burnishing speed.
If some improvement is made in the parameters affecting the surface roughness of the process, the improved process can be compared with the previous one.
8. FUTURE SCOPE
Mathematical modeling may be done to find further inter-relations, which would lead to more accuracy in the calculation of the permanent function value.
The roller burnishing process can be divided into different subgroups, such as surface roughness, hardness and stress value, and different permanent function values can be evaluated for each.
The graph theoretic approach can be applied to various machining processes.
Complex manufacturing processes can be compared by applying the graph theoretic approach and comparing their permanent function values.
REFERENCES
[1] N.S.M. El-Tayeb, K.O. Low, P.V. Brevern, “Influence of roller burnishing contact width and burnishing orientation on surface quality and tribological behaviour of Aluminium 6061”, Journal of Materials Processing Technology (2007).
[2] Lee S.S.G., Loh N.H. “Computer-integrated ball burnishing of a plastic-injection-mould cavity insert”, Journal
Mater Process Technol 1996; 57: 189-194.
[3] Yu, Xingbo; Wang, Lijiang. “Effect of various parameters on the surface roughness of an aluminum alloy burnished
with a spherical surfaced polycrystalline diamond tool”, International Journal Mach Tools Manuf 1999; 39: 459-469.
[4] Neema, M.L., Pandey P.C. “Effect of ball burnishing and controlled peening on properties of turned surfaces” 1979;
321-334.
[5] Lye S.W., Chin T.C., Loh N.H., Tam S.C. “Intelligent feedback control in ball burnishing operations”, International
Journal Adv. Manuf. Technol. (UK) 1992; 7: 119-25.
[6] Morimoto, Tokio. “Burnishing process using lathe effect of tool material on burnishing process”, Mem Fac Eng.
Osaka City Univ. 1988; 29: 7-15.
[7] Morimoto T., Tamamura K. “Burnishing process using a rotating ball-tool. Effect of tool material on the burnishing
process”, 1991; 147: 185-193.
[8] Loh N.H., Tam S.C., Miyazawa S. “Optimization of the surface finish produced by ball burnishing”, Journal Mech
Work Technol 1989; 19: 101-107.
[9] Lee S.S.G, Tam S.C., Loh N.H. “Ball burnishing of 316L stainless steel”, Journal Mater Process Technol 1993; 37:
241-251.
[10] Gandhi O.P., Agarwal V.P., “FMEA: a digraph and matrix approach”, Reliability Engineering and System Safety (1992), vol. 35, pp. 147-158.
[11] Gandhi O.P., Agarwal V.P., “Failure cause analysis: a structural approach”, Transactions of ASME (1996), pp. 434-439.
[12] Rao R.V., Padmanabhan K.K., “Selection, identification and comparison of industrial robots using digraph and matrix methods”, Robotics and Computer Integrated Manufacturing (2006), vol. 22, no. 4, pp. 373-383.
[13] Pascale Balland, Laurent Tabourot, Fabien Degre, Vincent Moreau, “Mechanics of the burnishing process”, Precision Engineering (2012).
[14] Lei Li, Wei Xia, Zhaoyao Zhou, Jing Zhao, Zhengqiang Tang, “Analytical prediction and experimental verification of surface roughness during the burnishing process”, International Journal of Machine Tools & Manufacture 62 (2012) 67-75.
[15] Mayank K Patel, Manish D Patel, “Optimization of input process parameters for vibratory ball burnishing process
using RSM”, IJREAS Volume 2, Issue 2 (February 2012) ISSN: 2249-3905.
[16] Zhengqiang Tang, Wei Xia, Fenglei Li, Zhaoyao Zhou, Jing Zhao, “Application of response surface methodology in
the optimization of burnishing parameters for surface integrity”, International Journal of Machine Tools &
Manufacture 62 (2012) 67–75.
[17] S.M.A. Al-Qawabah, “Investigation on the Effect of Roller Burnishing Process on the Surface Quality and Microhardness of Cu-Zn-Al SMA Alloys”, Research Journal of Applied Sciences, Engineering and Technology 4(16): 2682-2694, 2012.
[18] S. Yang, D.A. Puleo, O.W. Dillon, Jr., I.S. Jawahir, “Surface Layer Modifications in Co-Cr-Mo Biomedical Alloy from Cryogenic Burnishing”, Procedia Engineering 19 (2011) 383-388.
[19] Aysun Sagbas, “Analysis and optimization of surface roughness in the ball burnishing process using response surface methodology and desirability function”, Advances in Engineering Software 42 (2011) 992-998.
[20] S. Swirad, “The surface texture analysis after sliding burnishing with cylindrical elements”, Wear 271 (2011) 576-581.
[21] P. Zhang, J. Lindemann, W.J. Ding, C. Leyens, “Effect of roller burnishing on fatigue properties of the hot-rolled Mg-12Gd-3Y magnesium alloy”, Materials Chemistry and Physics 124 (2010) 835-840.
[22] Mieczyslaw Korzynski, Andrzej Pacana, “Centreless burnishing and influence of its parameters on machining effects”, Journal of Materials Processing Technology 210 (2010) 1217-1223.
[23] Ugur Esme, “Use of grey-based Taguchi method in ball burnishing process for the optimization of surface roughness of a 7075 aluminium alloy”, Materials and Technology 44 (2010) 3, 129-135.
[24] C. S. Jawalkar, R. S. Walia, “Study of roller burnishing process on En-8 specimens using design of experiments”,
Journal of Mechanical Engineering Research Vol. 1(1) pp. 038-045, November, 2009.
[25] Dr. Safwan M.A. Al-Qawabah, “Investigation of Roller Burnishing on Zamac5 Alloyed by Copper”, Journal of Applied Sciences Research, 5(10): 1796-1801, 2009.
[26] F. Klocke, V. Backer, H. Wegner, A. Timmer, “Innovative FE-analysis of the roller burnishing process for different geometries”, X International Conference on Computational Plasticity COMPLAS X, E. Oñate and D.R.J. Owen (Eds), CIMNE, Barcelona, 2009.
[27] T. A. El-Taweel, M. H. El-Axir, “Analysis and optimization of the ball burnishing process through the Taguchi
technique”, International Journal Adv Manuf Technol (2009) 41:301–310.
[28] Jianying Liu, Zhaojun Yang, “Effect of Elastic Controlling Parameters on the PCD Burnishing System”, Proceedings of the 2009 IEEE International Conference on Mechatronics and Automation.
[29] S. Thamizhmanii, B. Bin Omar, S. Saparudin, S. Hasan, “Surface roughness investigation and hardness by burnishing on titanium alloy”, Journal of Achievements in Materials and Manufacturing Engineering, Volume 28, Issue 2, June 2008.
[30] Yinggang Tian, Yung C. Shin, “Laser-assisted burnishing of metals”, International Journal of Machine Tools & Manufacture 47 (2007) 14-22.
[31] N.S.M. El-Tayeb, K.O. Low, P.V. Brevern, “Influence of roller burnishing contact width and burnishing orientation on surface quality and tribological behaviour of Aluminium 6061”, Journal of Materials Processing Technology (2007).