Evolutionary Algorithmical Approach for VLSI Physical Design - Placement Problem (IDES Editor)
Physical layout automation is very important in the field of
VLSI. With the advancement of semiconductor technology, VLSI
has entered the VDSM (Very Deep Sub-Micrometer) era, and the
scale of random-logic IC circuits is approaching millions of
gates. Physical design is the process of determining the
physical locations of active devices and interconnecting them
inside the boundary of the VLSI chip. The earliest and most
critical stage in VLSI layout design is placement. Its
background is the rectangle packing problem: given a set of
rectangular modules of arbitrary sizes, place them without
overlap on a plane within a rectangle of minimum area [1], [5].
The VLSI placement problem is to place the objects in the fixed
area of the die without overlap, subject to cost constraints
such as the wire length and the area of the die. Wire length
and area optimization is the major task in physical design. We
first introduce the major techniques involved in the algorithm.
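A common stand-in for the wire-length cost that placement minimizes is the half-perimeter wire length (HPWL) of each net's bounding box. A minimal sketch (the nets and pin coordinates below are hypothetical):

```python
def hpwl(pins):
    """Half-perimeter wire length of one net: bounding-box width + height."""
    xs = [x for x, _ in pins]
    ys = [y for _, y in pins]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def total_wirelength(nets):
    """Placement cost: sum of HPWL over all nets."""
    return sum(hpwl(pins) for pins in nets)

# Hypothetical placement: two nets connecting module pins at (x, y).
nets = [[(0, 0), (3, 4)], [(1, 1), (2, 5), (4, 1)]]
print(total_wirelength(nets))  # 7 + 7 = 14
```

An evolutionary placer would evaluate this cost for each candidate placement in its population and select for low values.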
TEST-COST-SENSITIVE CONVOLUTIONAL NEURAL NETWORKS WITH EXPERT BRANCHES (sipij)
It has been proven that deeper convolutional neural networks (CNNs) can achieve
better accuracy on many problems, but this accuracy comes at a high
computational cost. Moreover, input instances do not all have the same
difficulty. As a solution to the accuracy vs. computational cost dilemma, we
introduce a new test-cost-sensitive method for convolutional neural networks.
This method trains a CNN with a set of auxiliary outputs and expert branches at
some middle layers of the network. Based on the difficulty of the input
instance, the expert branches decide whether to use a shallower part of the
network or to go deeper to the end. The expert branches learn to determine
whether the current network prediction is wrong and whether passing the given
instance to deeper layers of the network would yield the right output; if not,
the expert branches stop the computation process. Experimental results on the
standard CIFAR-10 dataset show that the proposed method can train models with
lower test cost and competitive accuracy compared with the baseline models.
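The early-exit idea can be illustrated with a fixed confidence threshold on the auxiliary outputs; note this is a simplified stand-in, since the paper's expert branches learn the exit decision rather than thresholding softmax confidence:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def early_exit(outputs, threshold=0.9):
    """Return (exit_index, prediction) from the first output whose softmax
    confidence exceeds the threshold; fall back to the final (deepest) output.
    (The paper trains expert branches to make this decision; a fixed
    confidence threshold is a simplified stand-in.)"""
    for i, logits in enumerate(outputs):
        p = softmax(np.asarray(logits, dtype=float))
        if p.max() >= threshold:
            return i, int(p.argmax())
    p = softmax(np.asarray(outputs[-1], dtype=float))
    return len(outputs) - 1, int(p.argmax())

# Hypothetical logits from two middle branches and the final head.
branches = [[0.2, 0.1, 0.0], [5.0, 0.1, 0.0], [6.0, 0.2, 0.1]]
print(early_exit(branches))  # exits early at branch 1: (1, 0)
```

An easy instance exits at a shallow branch and skips the remaining layers, which is exactly where the test-cost saving comes from.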
Architecture neural network deep optimizing based on self organizing feature ... (journalBEEI)
The performance of a feed-forward neural network (FNN) depends on its training algorithm and on architecture selection. Several parameters define the architecture of an FNN, such as the number of connections between layers, the number of hidden neurons in each hidden layer, and the number of hidden layers. The number of possible architectural combinations grows exponentially and cannot be explored manually, so a suitable architecture should be designed automatically by an algorithm that builds a system with better generalization ability. The architecture of an FNN can be determined using one of numerous optimization algorithms. This paper proposes a new methodology for estimating the appropriate numbers of hidden neurons and hidden layers of an FNN. It combines this estimation with the self-organizing feature map (SOFM) training algorithm, whose advantage is that the best architecture is selected automatically by the SOFM, based on a test-error criterion, from a population of candidate architectures. The proposed approach is tested on four classification benchmark datasets of different sizes.
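The SOFM update at the heart of such an approach can be sketched as follows; the map size, learning rate, and neighborhood radius here are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def som_step(weights, x, lr=0.5, radius=1.0):
    """One self-organizing feature map update: find the best-matching unit
    (BMU), then pull it and its map neighbors toward the input x."""
    bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
    for i in range(len(weights)):
        # Gaussian neighborhood over the 1-D map distance to the BMU.
        h = np.exp(-((i - bmu) ** 2) / (2 * radius ** 2))
        weights[i] += lr * h * (x - weights[i])
    return bmu

# Hypothetical 3-unit map over 2-D inputs.
w = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
bmu = som_step(w, np.array([2.0, 2.0]))
print(bmu)  # unit 2 matches best
```

Repeating this step over many inputs organizes the map so that neighboring units respond to similar inputs, which is what lets the SOFM cluster candidate architectures by their error behavior.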
RunPool: A Dynamic Pooling Layer for Convolution Neural Network (Putra Wanda)
Deep learning (DL) has achieved significant performance in computer vision problems, mainly in automatic feature extraction and representation. However, it is not easy to determine the best pooling method for a given case study. For instance, experts can implement the best type of pooling for image-processing cases, but it might not be optimal for other tasks. Thus, it is
required to keep in line with the philosophy of DL. In a dynamic neural network architecture, it is not practically possible to find
a single proper pooling technique for all layers. This is the primary reason why a fixed pooling method cannot be applied to dynamic and multidimensional datasets. To deal with these limitations, an optimal pooling method is needed as a better option than max pooling and average pooling. Therefore, we introduce a dynamic pooling layer called RunPool to train the convolutional
neural network (CNN) architecture. RunPool pooling is proposed to regularize the neural network by replacing the deterministic
pooling functions. In the final section, we test the proposed pooling layer on classification problems with an online social network (OSN) dataset.
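For reference, the two deterministic functions that RunPool is proposed to replace can be sketched as follows (RunPool's own formulation is in the paper; this shows only the max/average baselines):

```python
import numpy as np

def pool2d(x, size=2, mode="max"):
    """Non-overlapping 2-D pooling over an (H, W) feature map."""
    h, w = x.shape[0] // size, x.shape[1] // size
    tiles = x[:h * size, :w * size].reshape(h, size, w, size)
    return tiles.max(axis=(1, 3)) if mode == "max" else tiles.mean(axis=(1, 3))

fmap = np.array([[1, 2, 5, 6],
                 [3, 4, 7, 8],
                 [0, 0, 1, 1],
                 [0, 4, 1, 1]], dtype=float)
print(pool2d(fmap, mode="max"))   # [[4. 8.] [4. 1.]]
print(pool2d(fmap, mode="mean"))  # [[2.5 6.5] [1.  1. ]]
```

A dynamic layer such as RunPool replaces this fixed choice with one adapted during training.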
Short Term Load Forecasting Using Bootstrap Aggregating Based Ensemble Artifi... (Kashif Mehmood)
Short Term Load Forecasting (STLF), which predicts load from several minutes
to a week ahead, plays a vital role in addressing challenges such as optimal
generation, economic scheduling, dispatching, and contingency analysis. This
paper uses the Multi-Layer Perceptron (MLP) Artificial Neural Network (ANN)
technique to perform STLF, but long training times and convergence issues
caused by bias, variance, and limited generalization ability prevent this
algorithm from accurately predicting future loads. This issue can be resolved
by various methods of Bootstrap Aggregating (Bagging), such as disjoint
partitions, small bags, replicated small bags, and disjoint bags, which help
reduce variance and increase the generalization ability of the ANN. Moreover,
Bagging reduces error in the ANN's learning process. Disjoint partitioning
proves to be the most accurate Bagging method, and combining the outputs of
this method by taking their mean improves overall performance. This method of
combining several predictors, known as an Ensemble Artificial Neural Network
(EANN), outperforms the plain ANN and Bagging methods by further increasing
the generalization ability and STLF accuracy.
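The disjoint-partition variant of Bagging can be sketched as follows; a least-squares fit stands in for the MLP member models, and the data are hypothetical:

```python
import numpy as np

def disjoint_partition_bagging(X, y, n_models, fit, predict, X_test):
    """Disjoint-partition Bagging: train one model per disjoint data slice,
    then average the member predictions (the EANN combination step)."""
    parts = np.array_split(np.arange(len(X)), n_models)
    models = [fit(X[idx], y[idx]) for idx in parts]
    preds = np.stack([predict(m, X_test) for m in models])
    return preds.mean(axis=0)

# Stand-in for the MLP: ordinary least squares on a hypothetical load series.
fit = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]
predict = lambda w, X: X @ w

X = np.c_[np.ones(8), np.arange(8.0)]   # bias + hour index
y = 2.0 + 3.0 * np.arange(8.0)          # noiseless "load"
X_test = np.c_[np.ones(2), [8.0, 9.0]]
print(disjoint_partition_bagging(X, y, 2, fit, predict, X_test))  # [26. 29.]
```

With noisy data the averaging step is what reduces the variance of the ensemble forecast relative to any single member.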
Enhancing energy efficient dynamic load balanced clustering protocol using Dy... (IJTET Journal)
A Mobile Ad hoc Network (MANET) is a kind of self-configuring and self-describing wireless ad hoc network. MANETs have dynamic topologies due to factors such as energy conservation and node movement, which leads to the dynamic load balanced clustering problem (DLBCP). An effective clustering algorithm is necessary to adapt to topology changes. Generally, clustering is mainly used to reduce the topology size. In this work, we use load balance and energy metrics in a genetic algorithm (GA) to solve the DLBCP. It is important to select an energy-efficient cluster head in order to maintain the cluster structure and balance the load effectively. The Elitism-based Immigrants Genetic Algorithm (EIGA) and the Memory Enhanced Genetic Algorithm (MEGA) are used to solve the DLBCP. These schemes select the optimal cluster head by considering parameters including distance and energy. We use EIGA to maintain the diversity level of the population and the memory scheme (MEGA) to store old environments in memory. This ensures energy efficiency for the entire cluster structure and increases the lifetime of the network. The experimental results show that the proposed schemes increase the network lifetime and reduce energy consumption.
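A cluster-head fitness of the kind such a GA could evaluate might combine residual energy and distance as below; the weights and node fields are hypothetical, not the paper's exact function:

```python
def cluster_head_fitness(node, members, w_energy=0.7, w_dist=0.3):
    """Hypothetical DLBCP-style fitness: favor high residual energy and low
    total distance to cluster members (the GA maximizes this)."""
    total_dist = sum(((node["x"] - m["x"]) ** 2 + (node["y"] - m["y"]) ** 2) ** 0.5
                     for m in members)
    return w_energy * node["energy"] - w_dist * total_dist

nodes = [{"x": 0, "y": 0, "energy": 0.9},
         {"x": 1, "y": 1, "energy": 0.4}]
members = [{"x": 0, "y": 1}, {"x": 1, "y": 0}]
best = max(nodes, key=lambda n: cluster_head_fitness(n, members))
print(best["energy"])  # 0.9: the high-energy node wins as cluster head
```

EIGA and MEGA differ in how the population evolves (elitism-based immigrants vs. a memory of past environments), but both rank candidate heads with a fitness of roughly this shape.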
An experimental evaluation of similarity-based and embedding-based link predi... (IJDKP)
The task of inferring missing links or predicting future ones in a graph based
on its current structure is referred to as link prediction. Link prediction
methods based on pairwise node similarity are well-established approaches in
the literature and, although heuristic, show good prediction performance in
many real-world graphs. On the other hand, graph embedding approaches learn
low-dimensional representations of the nodes in a graph and are capable of
capturing inherent graph features, and thus support the subsequent link
prediction task. This paper studies a selection of methods from both
categories on several benchmark (homogeneous) graphs with different properties
from various domains. Beyond the intra- and inter-category comparison of the
methods' performance, our aim is also to uncover interesting connections
between Graph Neural Network (GNN)-based methods and heuristic ones as a means
to alleviate the well-known black-box limitation.
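Two classic similarity heuristics of the kind studied here can be sketched on a toy graph:

```python
def common_neighbors(adj, u, v):
    """Common-neighbors score: size of the shared neighborhood."""
    return len(adj[u] & adj[v])

def jaccard(adj, u, v):
    """Jaccard coefficient: shared neighbors normalized by the union."""
    union = adj[u] | adj[v]
    return len(adj[u] & adj[v]) / len(union) if union else 0.0

# Hypothetical toy graph as adjacency sets.
adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2}}
print(common_neighbors(adj, 0, 3))   # 2 (nodes 1 and 2)
print(round(jaccard(adj, 0, 3), 2))  # 1.0: identical neighborhoods
```

Candidate node pairs are ranked by such scores, and the top-ranked non-edges are predicted as links; embedding methods instead rank pairs by similarity of learned node vectors.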
RSDC (Reliable Scheduling Distributed in Cloud Computing) (IJCSEA Journal)
In this paper we present a reliable scheduling algorithm for the cloud computing environment. We create a new algorithm by means of a new technique, using classification and considering the request and acknowledge times of jobs in a qualification function. By evaluating previous algorithms, we observe that job scheduling has been performed using parameters that are associated with a failure rate. Therefore, in the proposed algorithm, some other important parameters are used in addition to the previous ones, so jobs can be scheduled differently based on these parameters. The work is associated with the following mechanism: the major job is divided into sub-jobs. To balance the jobs, the request and acknowledge times are calculated separately. The schedule of each job is then created by calculating the request and acknowledge times in the form of a shared job. As a result, the efficiency of the system is increased, and the real time of this algorithm is improved in comparison with other algorithms. Finally, with the presented mechanism, the total processing time in cloud computing is improved in comparison with other algorithms.
Construction Management (CM) has to deal with a variety of uncertainties related to time, cost, quality, and safety, to name a few. Such uncertainties make the entire construction process highly unpredictable. It therefore falls under the purview of artificial neural networks (ANNs), in which the given hazy information can be effectively interpreted in order to arrive at meaningful conclusions. This paper reviews the application of ANNs in construction activities related to the prediction of costs, risk, safety, and tender bids, as well as labor and equipment productivity. The review suggests that ANNs have been highly beneficial in correctly interpreting inadequate input information. Most of the investigators used the feed-forward back-propagation type of network; however, when a single ANN architecture was found to be insufficient, hybrid modeling in association with other machine learning tools, such as genetic programming and support vector machines, was more useful. It is clear, however, that the authenticity of the data and the experience of the modeler are important in obtaining good results.
Review and comparison of tasks scheduling in cloud computing (ijfcstjournal)
Recently, there has been a dramatic increase in the popularity of cloud
computing systems that rent computing resources on demand, bill on a
pay-as-you-go basis, and multiplex many users on the same physical
infrastructure. The cloud is a virtual pool of resources provided to users via
the Internet. It gives users virtually unlimited pay-per-use computing
resources without the burden of managing the underlying infrastructure. One of
the goals is to use the resources efficiently and gain maximum profit.
Scheduling is a critical problem in cloud computing, because a cloud provider
has to serve many users in the cloud computing system; scheduling is therefore
a major issue in establishing cloud computing systems. Scheduling algorithms
should order the jobs in a way that balances improving performance and quality
of service while maintaining efficiency and fairness among the jobs. This
paper introduces and explores some of the scheduling methods proposed for
cloud computing. Finally, the waiting time and execution time of some of the
proposed algorithms are evaluated.
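The effect of job ordering on waiting time, one of the metrics evaluated here, can be illustrated by comparing first-come-first-served with shortest-job-first on hypothetical burst times:

```python
def avg_waiting_time(burst_times, order):
    """Average waiting time of jobs served one after another in this order."""
    wait, elapsed = 0, 0
    for j in order:
        wait += elapsed          # job j waits for everything served before it
        elapsed += burst_times[j]
    return wait / len(order)

bursts = [6, 8, 7, 3]  # hypothetical job lengths
fcfs = [0, 1, 2, 3]
sjf = sorted(range(len(bursts)), key=bursts.__getitem__)
print(avg_waiting_time(bursts, fcfs))  # 10.25
print(avg_waiting_time(bursts, sjf))   # 7.0
```

Shortest-job-first minimizes mean waiting time but can starve long jobs, which is the efficiency-vs-fairness tension the abstract describes.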
DCT AND DFT BASED BIOMETRIC RECOGNITION AND MULTIMODAL BIOMETRIC SECURITY (IAEME Publication)
This research paper discusses the study and analysis conducted during this research on various techniques in the biometric domain. A close look at biometric enhancement techniques and their limitations is presented. This process enables researchers to understand the research contributions in the area of DCT- and DFT-based recognition and security, and to locate some crucial limitations of these notable works. This paper summarizes the different research papers applicable to the topic of research mentioned above. Biometric recognition and security is one of the most important subjects of research in the area of image processing.
Energy efficiency is one of the most critical issues in the design of a System
on Chip. In a Network on Chip (NoC) based system, energy consumption is
influenced dramatically by the mapping of Intellectual Property (IP) cores,
which affects the performance of the system. In this paper we test the
previously proposed algorithms and introduce a new energy-efficient mapping
algorithm for a 3D NoC architecture. In addition, a hybrid method has also
been implemented using a bio-inspired optimization (particle swarm
optimization) technique. The proposed algorithm has been implemented and
evaluated on randomly generated benchmarks and real-life applications such as
MMS, Telecom, and VOPD. The algorithm has also been tested with the E3S
benchmark and compared with the existing algorithms (spiral and crinkle); it
shows better reduction in communication energy consumption and improvement in
the performance of the system. Comparing our work with spiral and crinkle,
experimental results show that the average reduction in communication energy
consumption is 19% with respect to spiral and 17% with respect to crinkle,
while the reduction in communication cost is 24% and 21%, and the reduction in
latency is 24% and 22% with respect to spiral and crinkle, respectively. After
optimizing our work and the existing methods with the bio-inspired technique
and comparing them, the average energy reduction is found to be 18% and 24%,
respectively.
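The communication-energy objective that such mapping algorithms minimize is typically each flow's traffic volume weighted by its hop count on the mesh. A sketch with hypothetical traffic and candidate mappings:

```python
def manhattan(p, q):
    """Hop count between two tiles of a 2-D mesh (XY routing)."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def comm_energy(mapping, traffic, e_bit_per_hop=1.0):
    """Communication energy of an IP-to-tile mapping: sum over flows of
    (volume x hop count x per-hop energy), the quantity mapping
    algorithms such as spiral and crinkle try to minimize."""
    return sum(vol * manhattan(mapping[s], mapping[d]) * e_bit_per_hop
               for (s, d), vol in traffic.items())

# Hypothetical 3-core traffic (bits) and two candidate mappings to mesh tiles.
traffic = {("a", "b"): 100, ("b", "c"): 50}
near = {"a": (0, 0), "b": (0, 1), "c": (1, 1)}
far = {"a": (0, 0), "b": (2, 2), "c": (0, 1)}
print(comm_energy(near, traffic))  # 150.0
print(comm_energy(far, traffic))   # 550.0
```

A particle-swarm optimizer searches the space of such mappings, scoring each candidate with exactly this kind of cost function.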
Channel encoding system for transmitting image over wireless network (IJECEIAES)
Various encoding schemes have been introduced to date focusing on effective image transmission in the presence of error-prone artifacts in the wireless communication channel. A review of existing channel encoding schemes shows that they are mostly inclined toward compression and pay less attention to the problem of superior signal retention, as they lack an essential consideration of network states. Therefore, this manuscript introduces a cost-effective lossless encoding scheme that ensures resilient transmission of different forms of images. Adopting an analytical research methodology, the modeling carries out a novel series of encoding operations over an image, followed by an effective indexing mechanism. The study outcome confirms that the proposed system outperforms existing encoding schemes in every respect.
The ROI of Social Media is a presentation given by Dorian Benkoil of TeemingMedia.com at Social Media Weekend at Columbia University's graduate school of journalism.
It delves into how to figure out whether what you're spending, the effort you're putting into social media, is justified, and how to make it work better to achieve your goals -- all of which is something Teeming Media and Dorian do regularly for publishers.
Fault-Tolerance Aware Multi Objective Scheduling Algorithm for Task Schedulin... (csandit)
A Computational Grid (CG) creates a large heterogeneous and distributed paradigm to manage and execute applications that are computationally intensive. In grid scheduling, tasks are assigned to the proper processors in the grid system for execution, taking into account the execution policy and the optimization objectives. In this paper, the makespan and the fault tolerance of the computational nodes of the grid, two important parameters for task execution, are considered and optimized. As grid scheduling is considered NP-hard, meta-heuristic evolutionary techniques are often used to find a solution. We have proposed an NSGA-II for this purpose. The performance estimation of the proposed Fault-tolerance Aware NSGA-II (FTNSGA-II) has been done by writing a program in Matlab. The simulation results evaluate the performance of the proposed algorithm, and the results of the proposed model are compared with the existing Min-Min and Max-Min algorithms, which proves the effectiveness of the model.
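The selection step at the core of NSGA-II extracts the non-dominated (Pareto) front of the population; a minimal sketch with hypothetical (makespan, failure-probability) objective pairs, both minimized:

```python
def dominates(a, b):
    """a dominates b when a is no worse in every objective and strictly
    better in at least one (all objectives minimized here)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """First non-dominated front, the core ranking step of NSGA-II."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical schedules as (makespan, failure-probability) pairs.
schedules = [(10, 0.30), (12, 0.10), (11, 0.40), (15, 0.05)]
print(pareto_front(schedules))  # [(10, 0.3), (12, 0.1), (15, 0.05)]
```

The full algorithm repeats this ranking over successive fronts and adds crowding-distance selection, but the dominance test above is the piece that trades makespan against fault tolerance.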
Review paper on segmentation methods for multiobject feature extraction (eSAT Journals)
Abstract: Feature extraction and representation plays a vital role in multimedia processing. It is still a challenge for computer vision systems to extract ideal features that represent the intrinsic characteristics of an image. A multiobject feature extraction system is a system that can extract the features and locations of multiple objects in an image. In this paper we discuss various methods to extract the locations and features of multiple objects, and describe a system that does so by implementing an algorithm as hardware logic on a field-programmable gate array based platform. There are many multiobject extraction methods that can be used for image segmentation based on motion, color intensity, and texture. By calculating the zeroth- and first-order moments of objects, it is possible to obtain the locations and sizes of multiple objects in an image. Keywords: multiobject extraction, image segmentation
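The moment computation mentioned above is compact: the zeroth-order moment of a binary mask gives an object's size, and the normalized first-order moments give its centroid:

```python
import numpy as np

def object_moments(mask):
    """Zeroth-order moment m00 gives object size (area in pixels);
    first-order moments normalized by m00 give its centroid."""
    ys, xs = np.nonzero(mask)
    m00 = len(xs)                    # area
    cx, cy = xs.mean(), ys.mean()    # m10/m00, m01/m00
    return m00, (cx, cy)

# Hypothetical binary segmentation mask containing one 2x2 object.
mask = np.zeros((5, 5), dtype=int)
mask[1:3, 2:4] = 1
area, centroid = object_moments(mask)
print(area, centroid)  # 4 (2.5, 1.5)
```

For multiple objects, the same computation is run per connected component (or, in hardware, accumulated per object label as pixels stream in).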
Data detection method for uplink massive MIMO systems based on the long recu... (IJECEIAES)
Although the minimum mean square error (MMSE) approach is recognized to be near optimal for uplink large-scale multiple-input multiple-output (MIMO) systems, it involves a costly matrix inversion. The long recurrence enlarged conjugate gradient (LRE-CG) approach is proposed in this study as a way to realize the MMSE algorithm iteratively while avoiding the complications of matrix inversion. In addition, a diagonal-approximate starting solution for the LRE-CG approach is used to speed up the convergence rate and further reduce complexity. The LRE-CG-based approach is shown to significantly reduce computational complexity. Simulation results show that this new methodology surpasses well-established methods such as the Neumann series approximation-based method and the Gauss-Seidel iterative method. With a small number of iterations, the suggested approach achieves near-optimal performance relative to a standard MMSE algorithm.
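As a sketch of the underlying idea, solving the MMSE system iteratively instead of inverting the matrix, a plain conjugate-gradient solver (not the LRE-CG variant itself, and without the diagonal-approximate start) might look like this:

```python
import numpy as np

def mmse_detect_cg(H, y, noise_var, iters=20):
    """Solve the MMSE detection system A x = b with plain conjugate
    gradient instead of inverting A, where
        A = H^H H + noise_var * I,   b = H^H y.
    Illustrative only; the paper's LRE-CG variant differs in detail.
    """
    A = H.conj().T @ H + noise_var * np.eye(H.shape[1])
    b = H.conj().T @ y
    x = np.zeros(H.shape[1], dtype=complex)
    r = b - A @ x                    # residual
    p = r.copy()                     # search direction
    rs = np.vdot(r, r).real
    for _ in range(iters):
        Ap = A @ p
        alpha = rs / np.vdot(p, Ap).real
        x += alpha * p
        r -= alpha * Ap
        rs_new = np.vdot(r, r).real
        if rs_new < 1e-12:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy 4x2 system: the CG solution matches the direct MMSE inverse
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 2)) + 1j * rng.normal(size=(4, 2))
x_true = np.array([1 + 1j, -1 - 1j])
y = H @ x_true
x_cg = mmse_detect_cg(H, y, noise_var=0.01)
x_direct = np.linalg.solve(H.conj().T @ H + 0.01 * np.eye(2), H.conj().T @ y)
print(np.allclose(x_cg, x_direct))  # True
```

Since A is Hermitian positive definite, CG converges in at most as many iterations as the number of transmit antennas, which is why a few iterations suffice in practice.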
Estimation of Optimized Energy and Latency Constraint for Task Allocation in ... (ijcsit)
In Network on Chip (NoC) based systems, energy consumption is affected by task scheduling and allocation schemes, which in turn affect the performance of the system. In this paper we test pre-existing algorithms and introduce a new energy-efficient algorithm for 3D NoC architectures. Efficient dynamic and cluster-based approaches are proposed, along with optimization using a bio-inspired algorithm. The proposed algorithm has been implemented and evaluated on randomly generated benchmarks and real-life applications such as MMS, Telecom and VOPD. The algorithm has also been tested with the E3S benchmark and compared with the existing spiral and crinkle mapping algorithms, showing better reduction in communication energy consumption and improved system performance. Experimental analysis of the proposed algorithm shows that the average reduction in energy consumption is 49%, the reduction in communication cost is 48% and the average latency reduction is 34%. The cluster-based approach is mapped onto the NoC using the Dynamic Diagonal Mapping (DDMap), Crinkle and Spiral algorithms, and DDMap is found to provide improved results. On analysis and comparison of cluster mapping using the DDMap approach, the average energy reduction is 14% and 9% with respect to crinkle and spiral, respectively.
A Hybrid Differential Evolution Method for the Design of IIR Digital Filter (IDES Editor)
This paper establishes a methodology for the robust and stable design of infinite impulse response (IIR) digital filters using a hybrid differential evolution method. Differential Evolution (DE) is used as a global search technique and exploratory search is exploited as a local search technique. DE is a population-based stochastic real-parameter optimization technique belonging to evolutionary computation, whose simple yet powerful and straightforward features make it very attractive for numerical optimization. Exploratory search aims to fine-tune the solution locally in a promising search area. The proposed DE method augments the capability to explore and exploit the search space both locally and globally to achieve the optimal filter design parameters by applying an opposition-based learning strategy and random migration. A multivariable optimization is employed as the design criterion to obtain the optimal stable IIR filter that minimizes the magnitude approximation error and ripple magnitude. The DE method is implemented to design low-pass, high-pass, band-pass, and band-stop digital IIR filters. The achieved IIR digital filter designs confirm that the results are comparable to other algorithms and that the method can be effectively applied to higher-order filter design.
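For readers unfamiliar with DE, a minimal DE/rand/1/bin minimizer can be sketched as below. This is only the plain DE core; the paper's hybrid adds opposition-based learning, random migration and exploratory local search, none of which are shown here, and the toy objective stands in for a filter-error cost function.

```python
import numpy as np

def differential_evolution(cost, bounds, pop_size=20, F=0.7, CR=0.9,
                           generations=100, seed=0):
    """Minimal DE/rand/1/bin minimizer (a sketch, not the paper's
    hybrid DE with exploratory local search)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = lo.size
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    costs = np.array([cost(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # pick three distinct individuals other than i
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                     size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)   # mutation
            cross = rng.random(dim) < CR                # binomial crossover
            cross[rng.integers(dim)] = True             # force one gene
            trial = np.where(cross, mutant, pop[i])
            tc = cost(trial)
            if tc <= costs[i]:                          # greedy selection
                pop[i], costs[i] = trial, tc
    best = np.argmin(costs)
    return pop[best], costs[best]

# Toy use: recover three coefficients (a stand-in for a filter
# magnitude-error objective)
target = np.array([0.5, -1.0, 2.0])
cost = lambda x: np.sum((x - target) ** 2)
x_best, c_best = differential_evolution(cost, bounds=[(-3, 3)] * 3)
print("best cost:", c_best)
```

In a real IIR design, `cost` would evaluate the filter's magnitude response error and ripple, with a stability penalty on pole locations.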
Deep Convolutional Neural Networks (CNNs) have achieved impressive performance in
edge detection tasks, but their large number of parameters often leads to high memory and energy
costs for implementation on lightweight devices. In this paper, we propose a new architecture, called
Efficient Deep-learning Gradients Extraction Network (EDGE-Net), that integrates the advantages of Depthwise Separable Convolutions and deformable convolutional networks (DeformableConvNet) to address these inefficiencies. By carefully selecting proper components and utilizing
network pruning techniques, our proposed EDGE-Net achieves state-of-the-art accuracy in edge
detection while significantly reducing complexity. Experimental results on BSDS500 and NYUDv2
datasets demonstrate that EDGE-Net outperforms current lightweight edge detectors with only
500k parameters, without relying on pre-trained weights.
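The parameter savings from depthwise separable convolutions are easy to verify with a little arithmetic; the layer sizes below are hypothetical, not EDGE-Net's actual configuration:

```python
def conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (no bias)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k conv (one filter per channel) followed by a
    1 x 1 pointwise conv that mixes channels."""
    return c_in * k * k + c_in * c_out

# Hypothetical layer: 64 -> 128 channels, 3x3 kernel
std = conv_params(64, 128, 3)                 # 73728
sep = depthwise_separable_params(64, 128, 3)  # 8768
print(std, sep, round(std / sep, 1))          # 73728 8768 8.4
```

The roughly 8x reduction per layer is what makes a sub-500k-parameter edge detector plausible.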
APPROXIMATE ARITHMETIC CIRCUIT DESIGN FOR ERROR RESILIENT APPLICATIONS (VLSICS Design)
When the application context can accept different levels of exactness in solutions, supported by human perception quality, the term 'Approximate Computing', coined about a decade ago, becomes the first priority. Even though computer hardware and software are designed to generate exact results, approximate results are preferred whenever the error is within a predefined, adaptive bound. This reduces power demand and critical path delay and improves other circuit metrics. Traditional arithmetic circuits, which generate correct results at a cost in performance, are rapidly being replaced by approximate arithmetic circuits, which are the need of the hour; this work addresses their design.
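One classic approximate arithmetic circuit, shown here only as an illustration of bounded-error design (the paper itself may cover different circuits), is the lower-part-OR adder (LOA), modelled here in software:

```python
def loa_add(a, b, width=8, approx_bits=3):
    """Lower-part-OR adder (LOA): a classic approximate adder.
    The low `approx_bits` bits are computed with bitwise OR (no carry
    chain), the remaining high bits with an exact adder; the carry out
    of the approximate part is dropped, so the error is bounded by the
    value range of the low part.
    """
    mask = (1 << approx_bits) - 1
    low = (a | b) & mask                          # approximate lower part
    high = ((a >> approx_bits) + (b >> approx_bits)) << approx_bits
    return (low + high) & ((1 << width) - 1)

# 13 + 11 = 24 exactly; the LOA loses the carry from the low 3 bits
print(13 + 11, loa_add(13, 11))  # 24 23
```

The OR replaces the carry chain in the low bits, shortening the critical path; the error (here 1) never exceeds what the low bits can represent.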
The International Journal of Engineering and Science (The IJES)
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
Given that brain tumors and gliomas are notorious forms of cancer, the medical field has developed several methods to diagnose these diseases, with many algorithms that can segment out the cancer cells in magnetic resonance imaging (MRI) scans of the brain. This paper proposes a segmentation algorithm built around a custom administering attention module. The solution uses a custom U-Net model together with a custom administering attention module that applies an attention mechanism to classify and segment glioma cells using the long-range dependency of the feature maps. The customizations lead to a reduction in code complexity and memory cost. The final model has been tested on the BraTS 2019 dataset and compared with other state-of-the-art methods, showing how much better the proposed model performs in the categories of enhancing, non-enhancing and peritumoral gliomas.
COMPARATIVE PERFORMANCE ANALYSIS OF RNSC AND MCL ALGORITHMS ON POWER-LAW DIST... (acijjournal)
Cluster analysis of graph-related problems is an important issue nowadays. Different types of graph clustering techniques have appeared in the field, but most of them are vulnerable in terms of effectiveness and fragmentation of output in real-world applications across diverse systems. In this paper, we provide a comparative behavioural analysis of the RNSC (Restricted Neighbourhood Search Clustering) and MCL (Markov Clustering) algorithms on power-law distribution graphs. RNSC is a graph clustering technique using stochastic local search. The RNSC algorithm tries to achieve optimal-cost clustering by assigning cost functions to the set of clusterings of a graph. This algorithm was implemented by A. D. King only for undirected and unweighted random graphs. Another popular graph clustering algorithm, MCL, is based on a stochastic flow simulation model for weighted graphs. There are plentiful applications of power-law or scale-free graphs in nature and society. Scale-free topology is stochastic, i.e. nodes are connected in a random manner. Complex network topologies like the World Wide Web, the web of human sexual contacts, or the chemical network of a cell basically follow a power-law distribution to represent different real-life systems. This paper uses real large-scale power-law distribution graphs to conduct a performance analysis of RNSC behaviour compared with the Markov clustering (MCL) algorithm. Extensive experimental results on several synthetic and real power-law distribution datasets reveal the effectiveness of our approach to comparative performance measurement of these algorithms on the basis of cost of clustering, cluster size, modularity index of clustering results and normalized mutual information (NMI).
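For reference, the modularity index used as one of the comparison criteria can be computed from an edge list and a candidate clustering; this is a generic sketch of Newman modularity, not the paper's evaluation code:

```python
def modularity(edges, communities):
    """Newman modularity Q of a clustering of an undirected graph:
    Q = sum over communities of (e_c / m - (d_c / 2m)^2), where e_c is
    the number of intra-community edges, d_c the total degree of the
    community, and m the total number of edges.
    """
    m = len(edges)
    comm_of = {v: c for c, nodes in enumerate(communities) for v in nodes}
    intra = [0] * len(communities)
    degree = [0] * len(communities)
    for u, v in edges:
        degree[comm_of[u]] += 1
        degree[comm_of[v]] += 1
        if comm_of[u] == comm_of[v]:
            intra[comm_of[u]] += 1
    return sum(e / m - (d / (2 * m)) ** 2
               for e, d in zip(intra, degree))

# Two triangles joined by one bridge edge: a clearly modular graph
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(round(modularity(edges, [{0, 1, 2}, {3, 4, 5}]), 3))  # 0.357
```

Higher Q means more intra-cluster edges than a random graph with the same degrees would have.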
Electrically small antennas: The art of miniaturization (Editor IJARCET)
We are living in the technological era, where we prefer portable devices rather than immovable ones. We are isolating ourselves from wires and becoming habituated to the wireless world. What makes a device portable? The physical (mechanical) dimensions of that particular device, but along with this the electrical dimension of the device is also of great importance. Reducing the physical dimension of an antenna results in a small antenna but not an electrically small antenna. There are different definitions of the electrically small antenna, but the most appropriate is ka ≤ 0.5, where k is the wave number, equal to 2π/λ, and a is the radius of the imaginary sphere circumscribing the maximum dimension of the antenna. As present-day electronic devices continue to diminish in size, technocrats have become increasingly focused on electrically small antenna (ESA) designs to reduce the size of the antenna in the overall electronic system. Researchers in many fields, including RF and microwave, biomedical technology and national intelligence, can benefit from electrically small antennas as long as the performance of the designed ESA meets the system requirements.
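A quick sanity check of the electrical-size criterion (assuming the common ka ≤ 0.5 threshold, with k = 2π/λ) can be scripted as follows; the frequency and radius are made-up example values:

```python
import math

def ka(freq_hz, radius_m):
    """Electrical size ka of an antenna: k = 2*pi/lambda is the wave
    number and a the radius of the smallest sphere enclosing the
    antenna. ka <= 0.5 is a common threshold for 'electrically small'.
    """
    c = 299_792_458.0                 # speed of light, m/s
    wavelength = c / freq_hz
    return 2 * math.pi / wavelength * radius_m

# A 1 cm antenna at 900 MHz
print(round(ka(900e6, 0.01), 3))  # 0.189 -> electrically small
```

The same antenna at a high enough frequency would exceed the threshold, since ka grows linearly with frequency.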
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
UiPath Test Automation using UiPath Test Suite series, part 3 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Transcript: Selling digital books in 2024: Insights from industry leaders - T... (BookNet Canada)
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview (Prayukth K V)
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Essentials of Automations: Optimizing FME Workflows with Parameters (Safe Software)
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Neuro-symbolic is not enough, we need neuro-*semantic* (Frank van Harmelen)
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... (Ramesh Iyer)
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. However, fostering a culture of innovation takes much work: it takes vision, leadership and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024 (Tobias Schneck)
As AI technology pushes into IT, I wondered, as an "infrastructure container Kubernetes guy", how this fancy AI technology gets managed from an infrastructure operations view. Is it possible to apply our lovely cloud native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and provide you a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply it to our own infrastructure and get it to work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and what could be beneficial for or limiting your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already gotten working for real.
Accelerate your Kubernetes clusters with Varnish Caching (Thijs Feryn)
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
JMeter webinar - integration with InfluxDB and Grafana (RTTS)
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality (Inflectra)
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Search and Society: Reimagining Information Access for Radical Futures (Bhaskar Mitra)
The field of Information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build inspired by diverse explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies needs to be explicitly articulated, and we need to develop theories of change in context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
ISSN: 2278 – 1323
International Journal of Advanced Research in Computer Engineering & Technology (IJARCET)
Volume 2, Issue 6, June 2013
www.ijarcet.org

Solving Digital Circuit Layout Problem based on Graph Partitioning Technique: A Glance
Maninder Kaur, Kawaljeet Singh
Abstract—Digital circuit layout is a combinatorial optimization problem. Due to the complexity of integrated circuits, the first step in physical design is usually to divide a design into subdesigns. This work presents a brief survey of major contributions to solving the digital circuit layout problem using the graph partitioning technique, dividing the study into three parts: basic inspiration, graph-partitioning-related preliminary work, and the role of evolutionary approaches in solving the digital circuit layout problem. The study analyses the work of major contributors and concludes with its findings.
Index Terms—Partitioning, Min-cut, NP-hard, Evolutionary approach
I. INTRODUCTION
The exponential increase in the size of digital circuits, the reduction in chip size and the heterogeneity of circuit elements used in modern chips lead to an increase in the complexity of modern digital circuit layout and of the algorithms that design it [1].
Due to the complexity of integrated circuits, the first step in
physical design is usually to divide a design into subdesigns.
Considerations include area, logic functionality, and
interconnections between subdesigns. The complexity of the
digital electronic circuit is due to the number of gates used per
system as well as the interconnection of the gates. Reduction
of the total number of gates used and interconnection in the
system would reduce the cost of the design, as well as increase
the efficiency of the overall system. The future growth of
digital circuits depends critically on the research and
development of Circuit Layout automation tools [2].
II. THE PROBLEM AT HAND
The digital circuit layout problem is a constrained optimization problem in the combinatorial sense. Given a circuit represented by a netlist [3], a set of modules and their dimensions, and a set of pins, the layout problem seeks an assignment of geometric coordinates to the circuit components that satisfies the requirements of the fabrication technology (sufficient wire spacing, restricted number of wiring layers, etc.) and that minimizes certain cost criteria. Practically, all aspects of the layout problem as a whole are intractable; that is, they are NP-hard. Consequently, the alternative is to exploit heuristic methods to solve very large problems. One of these methods is to break the problem up into subproblems, which are then solved one after the other.
Manuscript received June 15, 2013.
Maninder Kaur, SMCA, Thapar University, Patiala-147001.
Kawaljeet Singh, Director, University Computer Centre, Punjabi University, Patiala-147001.
Almost always, these subproblems are NP-hard too, but they are more amenable to heuristic solutions than the entire layout problem itself [4]. Each of the layout subproblems is decomposed in an analogous fashion. In this way, the procedure repeatedly breaks up the optimization problems until reaching primitive subproblems. These subproblems are not decomposed further, but rather solved directly: either optimally, if an efficient polynomial-time optimization algorithm exists, or approximately, if the subproblem is itself NP-hard or intractable.
Circuit layout is an important part of the digital circuit design process. The input to the circuit layout design cycle is a circuit diagram and the output is the layout of the circuit [5]. This is accomplished in several stages such as partitioning, floorplanning, placement and routing.
III. DESIGN COMPLEXITY IN DIGITAL CIRCUIT LAYOUT
Optimal graph bipartitioning with the edge-cut objective is an NP-complete problem. Similarly, optimal hypergraph bipartitioning and multi-way partitioning are NP-complete problems, which suggests that polynomial-time algorithms for these are also unlikely to exist. Some practical consequences of this fact are that the number of problem solutions grows exponentially with the problem size and that there may be multiple optimal solutions. The reality that problem sizes are prohibitively large imposes fairly strict criteria on possible approximation algorithms: the run-time complexity should be linear or log-linear in the size of the problem. At any level of partitioning, the input to the partitioning algorithm is a set of components and a netlist. The output is a set of subcircuits which, when connected, function as the original circuit, together with the terminals required for each subcircuit to connect it to the other subcircuits. Other than maintaining the original functionality, the partitioning process optimizes certain parameters subject to certain constraints. The objective functions for a partitioning problem include minimization of the number of nets that cross the partition boundaries, and minimization of the maximum number of times a path crosses the partition boundaries [6]. The constraints for the partitioning problem include area constraints and terminal constraints. The constraints and objective functions used in the partitioning problem vary depending upon the design style and the partitioning level used. The actual objective function and constraints chosen may also depend on the specific problem.
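The primary objective named above, minimizing the number of nets that cross the partition boundaries, can be sketched in a few lines; the netlist and partition below are hypothetical:

```python
def cut_size(nets, partition):
    """Number of nets crossing the partition boundary: the usual
    min-cut objective in circuit partitioning.

    nets: list of nets, each a list of cell names.
    partition: dict mapping each cell to its block id.
    A net is cut when its cells span more than one block.
    """
    return sum(len({partition[c] for c in net}) > 1 for net in nets)

# Hypothetical 4-cell circuit with 3 nets
nets = [["a", "b"], ["b", "c", "d"], ["c", "d"]]
part = {"a": 0, "b": 0, "c": 1, "d": 1}
print(cut_size(nets, part))  # 1
```

A real partitioner would minimize this count subject to the area and terminal constraints discussed above.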
IV. LITERATURE SURVEY
Since the general graph partitioning problem is NP-complete, approximate methods constitute a natural and useful approach to it. In the past several decades, this problem has inspired a great number of methods and heuristics, such as greedy algorithms, spectral methods and multilevel approaches, as well as algorithms based on well-known metaheuristics like tabu search, ant colony optimization, simulated annealing, and genetic and memetic algorithms. Iterative improvement methods produce high-quality partitions, but
excessive computation time is required to do so. On the other hand, constructive methods yield partitions that are not as high in quality as those of iterative improvement methods, yet still good, in a much shorter time. Ideally, both quality and computational efficiency of the solution are crucial for a practical partitioning method. Quality of solution is important for the performance of the circuit, and computational efficiency is essential for shortening the design procedure, especially for large circuits where weeks, months or even years may be required to realize these circuits. The fact that future partitioning and routing tasks will be much more complicated, due to the increasing size of circuits and the growing number of design objectives, implies that faster partitioning and routing tools should be developed to handle such immense complexity.
Some metaheuristics have already been used to partition
graphs, such as genetic algorithms [49, 50] and ant-like
agents [7]. In all of these tools, the goal is a near-optimal
partition in reasonable time. Because minimum-cut algorithms
are well studied [8, 9, 52], most partitioning methods use
recursive bisection, but these methods often yield a partition
that is far from optimal [53] with respect to minimizing the
total weight of cut edges. Conversely, spectral graph
partitioning methods [54, 55] and multilevel partitioning
algorithms [56] produce good partitions. The most widely used
heuristics, the Kernighan-Lin algorithm [8] and the
Fiduccia-Mattheyses algorithm [9], are not well suited to real
VLSI partitioning problems: their performance depends strongly
on the starting point, and they cannot handle additional
constraints efficiently. In such cases, evolutionary
algorithms (EAs) are a promising alternative, as they have
already proved to be powerful tools for hard combinatorial
problems (e.g., [7]). A few EA-based approaches to VLSI
circuit partitioning have been described in the literature
[10, 54].
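The objective shared by all of these methods, minimizing the total weight of edges crossing the cut, can be stated in a few lines. The following is a minimal illustrative sketch (the graph, vertex names and weights are hypothetical, not taken from any surveyed work):

```python
# The weighted cut-size objective minimized by the partitioning
# heuristics surveyed here (KL, FM, GA, ACO, ...). A netlist is
# modeled as a weighted graph; a bipartition assigns each vertex
# to block 0 or block 1, and the cost is the total weight of
# edges whose endpoints lie in different blocks.

def cut_size(edges, part):
    """edges: list of (u, v, weight); part: dict vertex -> 0 or 1."""
    return sum(w for u, v, w in edges if part[u] != part[v])

# Toy example: a 4-vertex graph split into {a, b} and {c, d}.
edges = [("a", "b", 2), ("b", "c", 1), ("c", "d", 3), ("a", "d", 1)]
part = {"a": 0, "b": 0, "c": 1, "d": 1}
print(cut_size(edges, part))  # edges b-c and a-d cross the cut -> 2
```

Balance constraints on the block sizes come on top of this objective; they are what make the problem NP-hard rather than a simple min-cut.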
A. Analysis of various approaches
The literature review has been classified into three major
categories:
1) the basic inspiration,
2) graph-partitioning-related preliminary work, and
3) evolutionary approaches for digital circuit layout.
The work began with the basic inspiration of B. W. Kernighan
and S. Lin [8] in the early 1970s, who proposed the KL
heuristic for graph bipartitioning. D. G. Schweikert and B. W.
Kernighan [11] extended the Kernighan-Lin partitioning
heuristic to the hypergraph model in 1972. For both approaches
the complexity of the algorithm was too high even for
moderate-size problems; the performance of the Kernighan-Lin
algorithm depends largely on the quality of the bisection it
starts with, and the algorithms produce poor partitions for
larger hypergraphs. L. Hagen et al. [12] studied the FM
algorithm, in which vertices are removed from and inserted
into the bucket list using a last-in-first-out (LIFO) scheme.
B. Krishnamurthy [13] extended the FM algorithm with a
look-ahead scheme for circuit bipartitioning. The FM algorithm
provided satisfactory solutions only for small to medium-size
problems; it produced poor partitions for larger hypergraphs,
and its results depend on the quality of the initial partition
it starts with.
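The move-gain idea underlying the KL and FM heuristics can be sketched as follows. This is a simplified illustration, not the published algorithms: real FM maintains bucket lists for constant-time selection of the best gain and handles hypergraphs and vertex areas, whereas this sketch scans all vertices and uses a simple balance tolerance.

```python
def gain(v, adj, part):
    """Cut reduction obtained by moving v to the other block:
    +w for each edge to the other block, -w for each edge kept internal."""
    return sum(w if part[u] != part[v] else -w for u, w in adj[v])

def fm_pass(adj, part, tol=2):
    """One FM-style pass. adj: vertex -> [(neighbor, weight)];
    part: vertex -> 0/1. Each vertex is moved at most once and then
    locked; the best intermediate partition seen is returned, which
    lets the pass climb out of shallow local optima."""
    part, locked = dict(part), set()
    sizes = [sum(1 for v in part if part[v] == b) for b in (0, 1)]
    best, best_delta, delta = dict(part), 0, 0
    while len(locked) < len(adj):
        # legal moves keep the block-size difference within the tolerance
        cand = [v for v in adj if v not in locked
                and abs((sizes[0] - sizes[1]) + (2 if part[v] else -2)) <= tol]
        if not cand:
            break
        v = max(cand, key=lambda u: gain(u, adj, part))
        delta -= gain(v, adj, part)          # the cut shrinks by the gain
        sizes[part[v]] -= 1
        part[v] = 1 - part[v]
        sizes[part[v]] += 1
        locked.add(v)
        if delta < best_delta:
            best, best_delta = dict(part), delta
    return best
```

On a small weighted graph one pass typically recovers the low-cut balanced bisection even from a poor starting partition; real implementations run passes repeatedly until no improvement is found.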
In graph-related preliminary work, Sanchis [14] extended the
FM concept in 1989 to multiway partitioning, producing better
quality than KL but at the expense of increased runtime.
Johnson et al. [15] used simulated annealing for graph
partitioning, producing smaller netcuts than iterative
methods, albeit with much greater runtimes. S. W. Hadley and
B. L. Mark [16] generated initial partitions based on
eigenvector decomposition; the approach required transforming
every multi-terminal net into two-terminal nets, which can
lose information needed for performance-based partitioning.
Bultan and Aykanat [17] used mean-field annealing for multiway
partitioning, again at the cost of greater runtimes. Cong,
Labio and Shivakumar (1994) proposed a net-based k-way
partitioning algorithm that produced better-quality solutions
than the FM algorithm, but only for smaller problems [18].
Yang and Wong [19] formulated partitioning as a maximum-flow
problem, which places no constraint on the sizes of the
resulting subsets. Vipin Kumar et al. (1999) used a multilevel
clustering approach (hMetis) [63]; the approach provided poor
flexibility, objective functions for clustering were difficult
to formulate, and it was less efficient for larger integrated
circuits. The work of Jong-Sheng Cherng and Sao-Jie Chen [20]
was based on multilevel flat partitioning. In 2002, Drechsler
[21] used recursive partitioning, where increasing recursion
depth leads to more runtime. Mardhana [22] used a neural
network to solve the partitioning problem; the results depend
on the moves generated by the network.
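A toy sketch of simulated-annealing bisection in the spirit of the approach of Johnson et al. [15] follows. The pair-swap move, the temperature schedule and all parameters are illustrative assumptions; recomputing the cut from scratch is O(|E|) per move, which is fine for a sketch but far too slow for real netlists, where incremental gain updates are used.

```python
import math
import random

def sa_bisect(edges, vertices, steps=10000, t0=3.0, alpha=0.999):
    """Simulated-annealing bisection: propose swapping a random vertex
    pair across the cut (which preserves balance), accept uphill moves
    with probability exp(-delta/T), and keep the best partition seen."""
    random.seed(0)                                   # reproducible demo
    part = {v: i % 2 for i, v in enumerate(vertices)}  # balanced start

    def cut():
        return sum(w for u, v, w in edges if part[u] != part[v])

    cost, t = cut(), t0
    best, best_cost = dict(part), cost
    for _ in range(steps):
        u = random.choice([x for x in vertices if part[x] == 0])
        v = random.choice([x for x in vertices if part[x] == 1])
        part[u], part[v] = 1, 0                      # swap across the cut
        new = cut()
        if new <= cost or random.random() < math.exp((cost - new) / t):
            cost = new                               # accept the move
            if cost < best_cost:
                best, best_cost = dict(part), cost
        else:
            part[u], part[v] = 0, 1                  # reject: undo the swap
        t *= alpha                                   # geometric cooling
    return best, best_cost
```

The high-temperature phase lets the search escape local optima (the source of the smaller netcuts reported for SA), while the long cooling schedule is exactly where its much greater runtime comes from.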
In 1987, Ackley initiated the evolutionary approach to the
digital circuit layout problem based on graph partitioning
[51], using a GA for the min-cut bisection problem. Saab and
Rao [23] proposed a simulated evolution bisection heuristic.
Chatterjee and Hartley [24] proposed a simulated-annealing-based
heuristic that performs partitioning and placement
simultaneously; the heuristic has no crossover operator.
Chandrasekharam et al. [25] proposed a stochastic search by a
genetic algorithm (GA). Areibi and Vannelli [26] combined tabu
search and a genetic algorithm for the hypergraph partitioning
problem. Alpert et al. [27] integrated Metis into a genetic
algorithm for graph partitioning. Langham and Grant used an
Ant Foraging Strategy (AFS) for graph partitioning [58]. Merz
and Freisleben [28] used a memetic algorithm for the graph
bipartitioning problem. Kim and Moon [29] proposed a hybrid
genetic algorithm for multiway graph partitioning. Cincotti et
al. proposed an order-based encoding for evaluating a
partitioning of vertices, at the cost of a long decoding
process [59]. Muhlenbein and Mahnig (2002) presented a theory
of population-based optimization methods using approximations
of search
distributions. Kohmoto et al. (2003) incorporated a simple
local search algorithm into the GA [64]. Sait et al. proposed
a memetic algorithm based on a genetic algorithm (GA) and tabu
search [60]. Kim et al. (2004) proposed a combination of a
genetic algorithm with an FM-based heuristic for hypergraph
bipartitioning. Kucukpetek et al. [30] presented a genetic
algorithm for the coarsening phase of a multilevel graph
partitioning scheme. Ganesh et al. [31] presented a
swarm-intelligence-based approach to the circuit partitioning
problem. Martin [32] proposed a GA technique based on the
singular value decomposition (SVD); this spectral technique
has a high running time and suits cases where the fitness
function is expensive to compute. Sun and Leng [33] presented
an effective multilevel algorithm based on simulated annealing
for bisecting graphs. Moraglio et al. [34] provided a new
geometric crossover for graph partitioning based on a
labelling-independent distance that filters out the redundancy
of the encoding. Coe et al. [35] investigated the
implementation of a memetic algorithm for VLSI circuit
partitioning by exploiting parallelism and pipelining. Datta
et al. [36] proposed a multi-objective evolutionary algorithm
(MOEA) for solving the graph partitioning problem. Leng et al.
[37] proposed an effective multilevel algorithm for bisecting
graphs based on ant colony optimization (ACO). Farshbaf and
Derakhshi (2009) proposed a multi-objective GA method to
optimize graph partitioning [61]. Armstrong et al. [38]
presented six different parallel memetic algorithms for the
circuit partitioning problem. Subbaraj et al. [39] presented
an efficient hybrid genetic algorithm incorporating the
Taguchi method as a local search mechanism to solve both
bipartitioning and recursive partitioning. Peng et al. [40]
proposed a multi-objective discrete PSO (DPSO) algorithm for
VLSI partitioning. Soliman et al. [41] gave an ant model
(SCAM) to solve the graph partitioning problem. Chen and Wang
(2011) presented an efficient genetic algorithm for the m-way
graph partitioning problem [42]. Galinier et al. [43] proposed
a memetic algorithm that uses both a tabu operator and a
specialized crossover operator. Kim et al. [44] discussed a
number of problem-specific issues in applying genetic
algorithms to the graph partitioning problem. Kurejchik and
Kazharov [45] described the block diagram of swarm
intelligence as a graph or hypergraph. Lee et al. [46] offered
a novel Memetic Quantum-Inspired Evolutionary Algorithm
(MQEA) framework. Shanavas and Gnanamurthy [47] presented a
memetic algorithm that hybridizes a genetic algorithm with
simulated annealing to solve graph partitioning.
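A minimal sketch of a GA for min-cut bisection, the basic scheme the literature above traces back to Ackley [51], is given below. The operators (truncation selection, one-point crossover, bit-flip mutation), the imbalance penalty weight and all parameters are illustrative assumptions, not any surveyed author's exact method.

```python
import random

def ga_bisect(edges, n, pop_size=30, gens=60, pmut=0.05):
    """GA for min-cut bisection of vertices 0..n-1. Chromosomes are
    0/1 lists, one gene per vertex; fitness (minimized) penalizes
    both the weighted cut and the block-size imbalance."""
    random.seed(1)                                   # reproducible demo

    def fitness(ch):
        cut = sum(w for u, v, w in edges if ch[u] != ch[v])
        imbalance = abs(2 * sum(ch) - n)
        return cut + 10 * imbalance                  # assumed penalty weight

    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]                 # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = random.sample(elite, 2)
            point = random.randrange(1, n)           # one-point crossover
            child = p1[:point] + p2[point:]
            child = [g ^ (random.random() < pmut) for g in child]  # mutation
            children.append(child)
        pop = elite + children                       # elitist replacement
    return min(pop, key=fitness)
```

The hybrid and memetic variants surveyed above differ mainly in what replaces the plain mutation step: a KL/FM pass, tabu search, or annealing applied to each offspring as local search.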
B. Findings for evolutionary approaches to the digital circuit
layout problem
Non-hybrid approaches:
- are ill-equipped to search a prescribed region of the
solution space for local optima, and
- may take fairly long to find a good solution.
Both non-hybrid and hybrid approaches: their effectiveness
depends greatly on
- the representation of the solution space,
- the initial population chosen,
- the crossover method used, and
- the population size used.
Hybrid variants:
- produced quality solutions only for small circuits.
A survey of the various evolutionary approaches (genetic
algorithms, memetic algorithms, ant-based optimization,
particle swarm intelligence) reveals the following gaps.
Many of the surveyed evolutionary algorithms are competitive
with respect to solution quality only. To be of interest in
the VLSI design area, however, a measure of running time must
also be included. Whenever possible, a presented approach
should be compared with state-of-the-art algorithms regarding
both solution quality and runtime.
The constraints of other algorithms must be taken into account
when comparing with their results. For example, it is not a
fair comparison when the routing quality of an evolutionary
algorithm is expressed only in terms of the number of vias
[62] and then compared with the results of approaches that
concurrently minimize net length.
Due to the large number of CAD tools in VLSI design, benchmark
data are available for all major design steps, e.g., [EDA
Benchmarks (1997)]. An evolutionary algorithm developed for
VLSI design will not create interest within the VLSI community
unless its performance is tested on the appropriate
benchmarks; only by examining these benchmark results can a
particular evolutionary algorithm be compared with any other
approach. The test examples should be large, reflecting
real-world VLSI design problems.
Most work in the literature has focused on genetic algorithms
for circuit partitioning; the remaining evolutionary
approaches have received less attention.
V. CONCLUSION
Digital circuit layout is an NP-hard problem. The alternative
is to exploit heuristic methods to solve very large problems.
One such method is to break the problem into subproblems,
which are then solved one after another. Almost always these
subproblems are NP-hard too, but they are more amenable to
heuristic solution than the entire layout problem itself. This
paper presents a brief study of graph partitioning approaches
in the context of digital circuit layout, surveys the use of
evolutionary approaches, and identifies their research gaps in
solving digital circuit layout based on graph partitioning.
REFERENCES
[1] Areibi, S. & Vannelli, A. (1994) , Advanced search techniques for
circuit partitioning, In the DIMACS Series in Discrete Mathematics
and Theoretical Computer Science , pp. 77- 98.
[2] Safro, I., Sanders, P. and Schulz, C. (2012),Advanced Coarsening
Schemes for Graph Partitioning, Lecture Notes in Computer Science,
Experimental Algorithms, Volume 7276, pp 369-380.
[3] Sait S. M. , Maleh A. H., Raslan H. A. (2006), Evolutionary
algorithms for VLSI multi-objective netlist partitioning. Engineering
Applications of Artificial Intelligence, Volume 19, pp. 257-268.
[4] Caldwell,A. E., Kahng, A. B. and Markov, I. L.a (2000), Improved
Algorithms for Hypergraph Bipartitioning. In Proceedings of the
ASP-DAC 2000, Asia and South Pacific Design Automation
Conference, Yokohama, Japan ,pp. 661-666.
[5] Deepak Batra & Dhruv Malik (2012), Partitioning Algorithms in VLSI
Physical Designs. A Review. International Journal of Advanced
Technology & Engineering Research (IJATER), Volume 2, pp. 43-47
[6] Shin, H. and Kim, C. (1993), A Simple Yet effective Technique for
Partitioning. IEEE Transactions on VLSI Systems. Volume 1, pp.
380-386.
[7] Langham, A. E. and Grant, P. W. (1999), Using competing ant
colonies to solve k-way partitioning problems with foraging and
raiding strategies, Advances in Artificial Life, Lecture Notes in
Computer Science, Springer, pp. 621–625.
[8] Kernighan, B. W. and Lin, S. (1970), An efficient heuristic procedure
for partitioning graphs. Bell Systems Technical Journal. Volume 49,
pp. 291-307.
[9] Fiduccia, C.M. and Mattheyses, R.M. (1982), A Linear-Time Heuristic
for Improving Network Partitions. In Proceedings of the 19th Design
Automation Conference, DAC '82, ACM/IEEE, Las Vegas, Nevada.
pp. 175-181.
[10] Majhi, A.K. and Patnaik, L.M .and Ramanc, S. (1995), A Genetic
Algorithm-Based Circuit Partitioner For Mcms. Microprocessing And
Microprogramming. Volume 41, pp. 83-96.
[11] Schweikert, D.G. and Kernighan, B.W. (1972), A proper model for the
partitioning of electrical circuits. In Proceedings of the ACM/IEEE
Design Automation Conference, New York, NY, USA, pp. 57-62.
[12] Hagen, L.W., Huang, J. H. and Kahng, A.B. (1997), On
implementation choices for iterative improvement partitioning
algorithms. IEEE Transactions on Computer-Aided Design of
Integrated Circuits and Systems.Volume 16, pp. 1199–1205
[13] Krishnamurthy,B.(1984), An Improved Min-Cut Algorithm for
Partitioning VLSI Networks. IEEE Transactions on Computers.
Volume 33, pp. 438-446.
[14] Sanchis, L. A. (1989), Multiple-Way Network Partitioning. IEEE
Transactions on Computers. Volume 38, pp. 62-81.
[15] Johnson, D.S., Aragon, C.R., McGeoch, L.A. and Schevon, C. (1989),
Optimization by Simulated Annealing: An Experimental Evaluation,
Part I, Graph Partitioning .Operation Research. Volume 37, pp.
865-892.
[16] Hadley, S. W. (1995), Approximation techniques for hypergraph
partitioning problems. Discrete Applied Mathematics. Volume 59, pp.
115- 127.
[17] Bultan,T., Aykanat,C.(1995), Circuit partitioning using mean field
annealing. Neurocomputing. Volume 8, pp. 171-194.
[18] Cong, J., Labio, W. and Shivakumar, N. (1994), Multi-way VLSI
circuit partitioning based on dual net representation. In Proceedings
of the IEEE International Conference on Computer-Aided Design, pp.
56-62. Also available as UCLA Computer Science Department Tech. Report
CSD-940029.
[19] Yang, H.H and Wong, D.F. (1996), Efficient network flow based
min-cut balanced partitioning. IEEE Transactions on Computer-Aided
Design of Integrated Circuits and Systems. Volume 15, pp.
1533-1539.
[20] Cherng, J., Chen, S. (2003), An efficient multi-level partitioning
algorithm for VLSI circuits. In Proceedings of the 16th International
Conference on VLSI Design (VLSI'03) New Delhi, India, pp.70-75.
[21] Drechsler, R. and Gunther, W. and Eschbach, T. and Linhard, L. and
Angst, G. (2003), Recursive bi-partitioning of netlists for large number
of partitions. Journal of Systems Architecture .Volume 49, pp.
521–528.
[22] Mardhana, E. and Ikeguchi, T. (2003), Neurosearch: a program library
for neural network driven search meta-heuristics [VLSI netlist
partitioning example] In Proceedings of the International Symposium
on Circuits and Systems, Bangkok, Thailand, pp. V-697- V-700.
[23] Saab, Y.G. and Rao, V.B. (1990), Fast effective heuristics for the graph
bisectioning problem, IEEE Transactions on Computer-Aided Design
of Integrated Circuits and Systems. Volume 9, pp. 91-98.
[24] Chatterjee, A.C. and Hartley, R. (1990), A new simultaneous circuit
partitioning and chip placement approach based on simulated
annealing. In Proceedings of the 27th ACM/IEEE Design Automation
Conference, Orlando, Florida, USA, pp. 36-39.
[25] Chandrasekharam, R., Subhramanian, S. and Chaudhury, S.(1993),
Genetic algorithm for node partitioning problem and applications in
VLSI design, In Proceedings of the IEE E -Computers and Digital
Techniques, Volume 140,pp. 255-260.
[26] Areibi, S. & Vannelli, A. (1993), Circuit partitioning using a Tabu
search approach. In Proceedings of the IEEE International Symposium
on Circuits and Systems, Chicago, Illinois, USA, Volume 3, pp. 1643
-1646.
[27] Alpert C. J., Hagen L. W. and Kahng A. B. (1996), A Hybrid
Multilevel/Genetic Approach for Circuit Partitioning, In Proceedings
of the IEEE Asia Pacific Conference on Circuits and Systems, Las
Vegas, NV, pp. 298-301.
[28] Merz, P. and Freisleben, B. (2000), Fitness landscapes, memetic
algorithms, and greedy operators for graph bipartitioning.
Evolutionary Computation. Volume 8, pp. 61-91.
[29] Kim, J.P., Kim, Y. H. and Moon, B. R. (2004), A hybrid genetic
approach for circuit bipartitioning. In Proceedings of the Genetic and
Evolutionary Computation Conference (GECCO), Lecture Notes in
Computer Science Volume 3103 , Seattle, WA, USA, pp. 1054–1064.
[30] Kucukpetek, S., Ploat, F. and Oguztuzun, O. (2005), Multilevel graph
partitioning: an evolutionary approach. Journal of the Operational
Research Society. Volume 56, pp. 549–562.
[31] Venayagamoorthy, G. K. et al. (2006), Particle swarm-based
optimal partitioning algorithm for combinational CMOS circuits.
Engineering Applications of Artificial Intelligence, Volume 20, pp.
177-184.
[32] Martin, J. G. (2006), Spectral techniques for graph bisection in genetic
algorithms. In Proceedings of the 8th Annual Conference on Genetic
and Evolutionary Computation . Seattle, Washington, USA, pp.
1249-1256
[33] Sun, L., Leng, M.(2007),An Effective Multi-level Algorithm Based on
Simulated Annealing for Bisecting Graph , Lecture Notes in Computer
Science ,Energy Minimization Methods in Computer Vision and
Pattern Recognition, Volume 4679, pp. 1-12.
[34] Moraglio, A., Kim, Y. H., Yoon, Y. and Moon. B.-R. (2007),
Geometric crossovers for multiway graph partitioning. Evolutionary
Computation. Volume 15, pp. 445–474.
[35] Coe S., Areibi, S. and Moussa,M. (2007), A hardware Memetic
accelerator for VLSI circuit partitioning. Computers & Electrical
Engineering. Volume 33, pp. 233-248.
[36] Datta, D., Figueira, J. R., Fonseca C. M., Tavares-Pereira F. (2008),
Graph partitioning through a multi-objective evolutionary algorithm: a
preliminary study. GECCO 2008: Atlanta, Georgia, USA 625-632.
[37] Leng, M., Yu, S., Ding, W., Guo, Q. (2008), An effective multi-level
algorithm based on ant colony optimization for graph bipartition.
Journal of Shanghai University, Volume 12, pp 426-432.
[38] Armstrong E., Grewal W. G, Areibi ,S., Darlington, G.(2010), An
investigation of parallel memetic algorithms for VLSI circuit
partitioning on multi-core computers. In Proceedings of the 23rd
Canadian Conference on Electrical and Computer Engineering
(CCECE), Calgary, Alberta, Canada, pp. 1-6.
[39] Subbaraj,P., Saravanasankar,S. and Anand, S. (2010),Combinatorial
Optimization in VLSI Hyper-graph Partitioning using Taguchi
Methods, International Journal of Mathematical
Combinotrics,Volume 3,pp. 69-84.
[40] Peng, S., Chen G., Guo,W. (2010) ,A Multi-objective Algorithm
Based on Discrete PSO for VLSI Partitioning Problem Quantitative
Logic and Soft Computing 2010 Advances in Intelligent and Soft
Computing Volume 82, , pp 651-660.
[41] Soliman,M. S., Tan G.(2010), Graph Partitioning Using Improved Ant
Clustering, Advances in Swarm Intelligence, Lecture Notes in
Computer Science ,Volume 6145, pp. 231-240.
[42] Chen. Z. Q., Wang, R.L. (2011), Solving the m-way graph partitioning
problem using a genetic algorithm. IEEE Transactions on Electrical
and Electronic Engineering. Volume 6, pp. 483–489.
[43] Galinier,P.,Boujbel, Z., Fernandes, M. C.(2011), An efficient memetic
algorithm for the graph partitioning problem. Annals of Operations
Research, Volume 191, pp.1-22.
[44] Kim, J., Hwang, I., Kim, Y. H., and Moon, B. R.(2011) Genetic
approaches for graph partitioning: a survey. In Proceedings of the
Annual Conference on Genetic and Evolutionary Computation,
Dublin, Ireland, pp. 473–480.
[45] Kurejchik V.M., Kazharov A.A.(2012) Algorithms of evolutionary
swarm intelligence for solving graph partition problem In Proceedings
of the Problems of Perspective Micro- and Nanoelectronic Systems
Development – 2012, Moscow, IPPM RAS. P. 237-242.
[46] Lee, D., Ahn, J., and Choi, K. (2012), A Memetic Quantum-Inspired
Evolutionary Algorithm for Circuit Bipartitioning Problem. In
Proceedings of the International System On Chip Design Conference
(ISOCC), Jeju, Korea, pp. 159 – 162.
[47] Shanavas, H. and Gnanamurthy, R.K. (2012), Physical Design
Optimization Using Evolutionary Algorithms. International Journal of
Computer and Electrical Engineering. Volume 4, pp. 373-379.
[48] Ackley, D.H. (1987), A Connectionist Machine for Genetic Hill
climbing (Kluwer, Dordrecht).
[49] Talbi, E. G. and Bessiere, P. (1991), A parallel genetic algorithm
for the graph partitioning problem. In Proceedings of the ACM
International Conference on Supercomputing, ACM, Cologne.
[50] Greene, W. A. (2001), Genetic algorithms for partitioning sets.
International Journal on Artificial Intelligence Tools, Volume 10, pp.
225-241.
[51] Ackley, D.H. (1987), A Connectionist Machine for Genetic Hill
climbing (Kluwer, Dordrecht).
[52] Ercal, F., Ramanujam, J. and Sadayappan, P. (1990), Task allocation
onto a hypercube by recursive mincut bipartitioning. Journal of
Parallel and Distributed Computing. Volume 10, pp. 35–44.
[53] Simon, H. D. and Teng, S.H. (1997), How good is recursive bisection
.SIAM Journal on Scientific Computing, Volume 18, pp. 1436–1445.
[54] Pothen, A., Simon, H. D. and Liou, K.P.(1990), Partitioning sparse
matrices with eigenvectors of graphs. SIAM J. Matrix Anal. Appl.,
Volume 11, pp. 430–452.
[55] Hagen, L.W., Kahng, A. B. (1992), New spectral methods for ratio cut
partitioning and clustering. IEEE Transactions on CAD of Integrated
Circuits and Systems. Volume 11, pp. 1074-1085.
[56] Hendrickson, B. and Leland,R.(1995), A multi-level algorithm for
partitioning graphs. In Supercomputing, In the Proceedings of the
IEEE/ACM SC95 Conference
[57] Vemuri, R., Kumar, N. and Vemuri, R. (1994), Two Randomized
Algorithms for Multichip Partitioning Under Multiple Constraints.
Tech. Report TM-ECE-DDE-94-36, Univ. of Cincinnati.
[58] Langham, A. E. and Grant, P. W. (1999), Using competing ant
colonies to solve k-way partitioning problems with foraging and
raiding strategies, Advances in Artificial Life, Lecture Notes in
Computer Science, Springer, pp. 621–625.
[59] Cincotti, A., Cutello, V. and Pavone, M. (2002), Graph partitioning
using genetic algorithms with ODPX. In IEEE Congress on
Evolutionary Computation, Honolulu, Hawaii USA, pp. 402-406.
[60] Sait S. M. , Maleh A. H., Raslan H. A. (2006), Evolutionary
algorithms for VLSI multi-objective netlist partitioning. Engineering
Applications of Artificial Intelligence, Volume 19, pp. 257-268.
[61] Farshbaf M. and Derakhshi M. R. M.(2009), Multi-objective
optimization of graph partitioning using genetic algorithms. In
Proceedings of the Advanced Engineering Computing and
Applications in Sciences, 2009. ADVCOMP '09. Sliema, Malta, pp.
1-6.
[62] Geraci, M., Orlando, P., Sorbello, F. and Vasallo, G. (1991), A
Genetic Algorithm for the Routing of VLSI Circuits, Euro ... pp.
218-223.
[63] Karypis, G., Han, E.-H. and Kumar, V. (1999), CHAMELEON: A
Hierarchical Clustering Algorithm Using Dynamic Modeling. IEEE
Computer, Volume 32, pp. 68-75.
[64] Kohmoto, K., Katayama, K. and Narihisa, H. (2003), Performance of
a genetic algorithm for the graph partitioning problem. Mathematical &
Computer Modelling, Volume 38, pp. 1325-1332.