Defects and faults arise from physical imperfections and the noise susceptibility of the analog circuit components used to build digital circuits, resulting in computational errors. A probabilistic computational model is therefore needed to quantify and analyze the effect of noisy signals on computational accuracy: it computes the reliability of a digital circuit, meaning that the inputs, the outputs, and the implemented logic function must all be treated probabilistically. This paper presents a new architecture for designing noise-tolerant digital circuits. The proposed approach uses a class of single-input, single-output circuits called a Reliability Enhancement Network Chain (RENC): a concatenation of n simple logic circuits called Reliability Enhancement Networks (RENs), each of which raises the reliability of a digital circuit to a higher level. With a RENC composed of a sufficient number of RENs, circuit reliability can approach any desired level. Moreover, the proposed approach applies to the design of any logic circuit implemented in any logic technology.
Network Coding for Distributed Storage Systems (Group Meeting Talk) - Jayant Apte, PhD
Reviews work of Koetter et al. and Dimakis et al.
The former provides an algebraic framework for linear network coding. The latter reduces the so-called repair problem to a single-source multicast network-coding problem and shows that there is a tradeoff between the amount of data stored in a distributed storage system and the amount of data transfer required to repair the system if a node (hard drive) fails.
Hardware Implementations of RS Decoding Algorithm for Multi-Gb/s Communicatio... - RSIS International
In this paper, we design VLSI hardware for a novel RS decoding algorithm suitable for multi-Gb/s communication systems. We show that the performance benefit of the algorithm is fully realized when it is implemented in hardware, avoiding the fetch-decode-execute overhead of traditional microprocessor-based computing systems. The algorithm's reduced time complexity, combined with its application-specific hardware implementation, makes it suitable for high-speed real-time systems with hard timing constraints. The design is implemented as digital hardware in VHDL.
International Journal of Computational Engineering Research (IJCER) is an international online journal published monthly in English. The journal publishes original research work that contributes significantly to scientific knowledge in engineering and technology.
A Novel Algorithm on Wavelet Based Robust Invisible Digital Image Watermarkin... - IJERA Editor
This paper presents a new algorithm for wavelet-based robust and invisible digital image watermarking for multimedia security. The proposed algorithm has been designed, implemented, and verified in MATLAB R2014a for both embedding and extraction of the watermark, and the results show significant improvement in performance metrics such as PSNR, SSIM, mean correlation, and MSE over other existing algorithms in the current literature. The cover image in our algorithm is of size 256x256 and the binary watermark image is of size 16x16.
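As a toy illustration of the embedding side of such schemes, the sketch below hides a ±1 watermark in the HH sub-band of a one-level Haar DWT. This is a generic additive wavelet-domain scheme, not the paper's algorithm; the choice of sub-band and the strength `alpha` are assumptions.

```python
import numpy as np

def haar2d(img):
    # One-level 2-D Haar DWT: average/difference over rows, then columns.
    a = (img[0::2, :] + img[1::2, :]) / 2
    d = (img[0::2, :] - img[1::2, :]) / 2
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    lh = (a[:, 0::2] - a[:, 1::2]) / 2
    hl = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    # Exact inverse of haar2d.
    h, w = ll.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    img = np.empty((2 * h, 2 * w))
    img[0::2, :], img[1::2, :] = a + d, a - d
    return img

def embed(cover, wm_bits, alpha=2.0):
    # Additive embedding of a +/-1 watermark into the HH (detail) band.
    ll, lh, hl, hh = haar2d(cover)
    return ihaar2d(ll, lh, hl, hh + alpha * wm_bits)

def extract(marked, cover):
    # Non-blind extraction: compare HH bands of marked and original image.
    return np.sign(haar2d(marked)[3] - haar2d(cover)[3])
```

For a 256x256 cover and a 16x16 watermark as in the paper, the watermark would first be tiled or spread to the sub-band's shape; the sketch assumes it already matches.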
HardNet: Convolutional Network for Local Image Description - Dmytro Mishkin
We introduce a novel loss for learning local feature descriptors, inspired by Lowe's matching criterion for SIFT. We show that the proposed loss, which maximizes the distance between the closest positive and closest negative patch in the batch, outperforms complex regularization methods and works well for both shallow and deep convolutional network architectures. Applying the novel loss to the L2Net CNN architecture yields a compact descriptor with the same dimensionality as SIFT (128) that shows state-of-the-art performance in wide-baseline stereo, patch verification, and instance retrieval benchmarks. It is fast: computing a descriptor takes about 1 millisecond on a low-end GPU.
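The hardest-in-batch triplet-margin idea can be sketched in a few lines of NumPy; the margin value and the small epsilon below are illustrative, not HardNet's exact hyperparameters:

```python
import numpy as np

def hardnet_loss(anchors, positives, margin=1.0):
    """Hardest-in-batch triplet margin loss (sketch of the HardNet idea).

    anchors, positives: (n, d) arrays of L2-normalised descriptors,
    where anchors[i] and positives[i] come from matching patches.
    """
    # Pairwise Euclidean distances between all anchors and all positives.
    d = np.sqrt(np.sum((anchors[:, None, :] - positives[None, :, :]) ** 2,
                       axis=2) + 1e-8)
    pos = np.diag(d)                       # distance to the matching patch
    # Mask out the matching pairs, then take the closest non-matching
    # patch in either direction (row-wise and column-wise minima).
    masked = d + np.eye(len(d)) * 1e6
    neg = np.minimum(masked.min(axis=1), masked.min(axis=0))
    return np.mean(np.maximum(0.0, margin + pos - neg))
```

When every positive sits closer to its anchor than the hardest negative by more than the margin, the loss is exactly zero, which is the training target.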
Novel Tree Structure Based Conservative Reversible Binary Coded Decimal Adder... - VIT-AP University
Reversible logic has been recognized as one of the most promising techniques for the realization of quantum circuits. In this paper, a cost-effective conservative, reversible binary coded decimal (BCD) adder is proposed for the quantum logic circuit. Toward the realization of the BCD adder, a few novel gates based on parity-preserving logic are synthesized: Half-Adder/Subtraction (HAS-PP), Full-Adder/Subtraction (FAS-PP), and Overflow-Detection (OD-PP), which incur quantum costs of 7, 10, and 13 respectively. Coupling these gates, a novel tree-based methodology is proposed to implement the required BCD adder, and the design is optimized to achieve the optimum quantum cost. In addition, the proposed BCD circuit is extended to an n-bit adder using replica-based techniques. Experimental results establish the novelty of the proposed logic, which outperforms conventional circuits in terms of logic synthesis and testability. The limitation in detecting a missing gate or missing control point in the overflow-detection quantum circuit is finally tackled in this work by the proposed OD-PP with the application of a minimum test vector. In addition, a testable master-slave D flip-flop based on reversible circuits with control inputs is presented. The notable contribution of the testable sequential circuit presented here is a design that uses minimum test vectors and can find diverse applications in the testing paradigm.
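The decimal-correction rule that the overflow-detection stage implements in hardware can be checked with a quick behavioural model (illustrative Python, not the reversible gate netlist): add two BCD digits in binary, and add 6 whenever the sum leaves the 0-9 range.

```python
def bcd_digit_add(a, b, carry_in=0):
    """Add two BCD digits (0-9). When the binary sum exceeds 9, add 6
    to skip the six invalid codes; detecting this case is exactly the
    overflow-detection step the OD-PP gate performs."""
    s = a + b + carry_in
    if s > 9:                      # result is not a valid BCD digit
        return (s + 6) & 0xF, 1    # corrected digit, carry out
    return s, 0

def bcd_add(x, y):
    """Multi-digit BCD addition; digits given least significant first."""
    out, carry = [], 0
    for a, b in zip(x, y):
        d, carry = bcd_digit_add(a, b, carry)
        out.append(d)
    if carry:
        out.append(carry)
    return out
```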
Elliptic Curve Cryptography (ECC) provides a secure means of key exchange between communicating nodes using the Diffie-Hellman (DH) key exchange algorithm. This work presents an ECC encryption implementation using the DH key exchange algorithm; both encryption and decryption of text messages with this algorithm have been attempted. In ECC, encoding is carried out by mapping a message character to an affine point on an elliptic curve. A comparison with Koblitz's encoding method shows that the proposed algorithm is as secure as Koblitz's method while having lower computational complexity, since the encoding phase is eliminated altogether. Hence, the energy efficiency of the cryptosystem is improved, and it can be used in resource-constrained applications such as wireless sensor networks (WSNs). A brute-force attack is almost infeasible, and the security strength of the algorithm grows with the key length. However, any increase in key length results in more communication overhead due to encryption.
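The DH exchange over an elliptic curve can be demonstrated end to end on a toy curve. This is a textbook sketch over y^2 = x^3 + 2x + 2 (mod 17), not the paper's implementation; real systems use standardized curves with far larger parameters.

```python
# Toy ECDH over the curve y^2 = x^3 + 2x + 2 (mod 17), a common
# textbook example; requires Python 3.8+ for pow(x, -1, m).
P_MOD, A = 17, 2

def ec_add(P, Q):
    """Group law on the curve; None represents the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None
    if P == Q:   # tangent slope for doubling
        m = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:        # chord slope for addition
        m = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (m * m - x1 - x2) % P_MOD
    return x3, (m * (x1 - x3) - y1) % P_MOD

def ec_mul(k, P):
    """Double-and-add scalar multiplication k*P."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

G = (5, 1)                      # generator on this curve
alice_priv, bob_priv = 3, 7     # toy private keys
alice_pub = ec_mul(alice_priv, G)
bob_pub = ec_mul(bob_priv, G)
# Each side combines its private key with the other's public key
# and arrives at the same shared point.
shared = ec_mul(alice_priv, bob_pub)
```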
Belief Propagation Decoder for LDPC Codes Based on VLSI Implementation - inventionjournals
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of engineering, science, and technology, including new teaching methods, assessment, validation, and the impact of new technologies, and continues to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in the journal can be accessed online.
STUDY OF TASK SCHEDULING STRATEGY BASED ON TRUSTWORTHINESS ijdpsjournal
MapReduce is a distributed computing model for processing massive data in cloud computing; it simplifies the writing of distributed parallel programs. Under the fault-tolerance mechanism of the MapReduce programming model, tasks may be allocated to nodes with low reliability, causing tasks to be re-executed and wasting time and resources. This paper proposes a reliability-aware task scheduling strategy with a failure recovery mechanism, evaluates the trustworthiness of resource nodes in the cloud environment, and builds a trustworthiness model. Using the CloudSim simulation platform, the stability of the task scheduling algorithm and the scheduling model is verified.
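One simple way to realize such a trustworthiness score is an exponential moving average over past task outcomes, with each task dispatched to the highest-scoring node. The learning rate and the 0.5 prior below are illustrative assumptions, not the paper's model:

```python
def update_trust(trust, node, success, lr=0.2):
    """Exponential moving average of task outcomes per node.
    New nodes start from a neutral prior of 0.5."""
    outcome = 1.0 if success else 0.0
    trust[node] = (1 - lr) * trust.get(node, 0.5) + lr * outcome

def pick_node(trust, nodes):
    """Allocate the next task to the most trustworthy candidate,
    avoiding low-reliability nodes that would force re-execution."""
    return max(nodes, key=lambda n: trust.get(n, 0.5))
```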
International Refereed Journal of Engineering and Science (IRJES) - irjes
International Refereed Journal of Engineering and Science (IRJES) is a leading international journal for the publication of new ideas, state-of-the-art research results, and fundamental advances in all aspects of engineering and science. IRJES is an open-access, peer-reviewed international journal whose primary objective is to provide the academic community and industry with a venue for the submission of original research and applications.
Performance Analysis of Encryption Algorithm for Network Security on Parallel... - ijsrd.com
Nowadays, typical desktop processors have four or more independent CPU cores (multi-core processors) to execute instructions, so parallel programming comes into play to execute instructions concurrently on multi-core architectures using OpenMP. Users rely on cryptographic algorithms to encrypt and decrypt data in order to send it securely over an unsafe environment such as the Internet. This paper describes the implementation, test results, and performance analysis of the Caesar cipher and RSA cryptographic algorithms parallelized using the OpenMP API 3.1 standard. According to our test results, the parallel design of each security algorithm exhibits improved performance over the sequential approach in terms of execution time.
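The data-parallel decomposition described above is easy to sketch: split the message into chunks and encrypt them concurrently, mirroring what an OpenMP `parallel for` does over the character loop. This is a Python sketch of the idea, not the paper's C/OpenMP code:

```python
from concurrent.futures import ThreadPoolExecutor

def caesar(text, shift):
    """Caesar cipher: shift alphabetic characters, pass others through."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)

def caesar_parallel(text, shift, workers=4):
    """Encrypt chunks concurrently; each character is independent, so
    the message can be partitioned arbitrarily and results rejoined
    in order (the same decomposition an OpenMP parallel loop uses)."""
    n = max(1, -(-len(text) // workers))              # ceil division
    chunks = [text[i:i + n] for i in range(0, len(text), n)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return ''.join(ex.map(lambda c: caesar(c, shift), chunks))
```

Decryption is encryption with the negated shift, so a round trip restores the plaintext.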
Hamming net based Low Complexity Successive Cancellation Polar Decoder - RSIS International
This paper aims to implement a hybrid polar encoder using knowledge of mutual information and channel capacity. Further, a Hamming-weight successive cancellation decoder is simulated with QPSK modulation in the presence of additive white Gaussian noise. Experiments on the effect of channel polarization show that for a 256-bit data stream, 30% of the channels have zero capacity and 49% of the channels have one-bit capacity. The decoding complexity is reduced to almost half that of the conventional successive cancellation decoding algorithm, and the required SNR of 7 dB is achieved at the targeted BER of 10^-4. The penalty paid is the training time required at the decoding end.
In recent years, cooperative communication has been a hot research topic; it is a powerful physical-layer technique to combat fading in wireless relaying scenarios. Concerning physical-layer issues, this paper focuses on providing a better space-time block coding (STBC) scheme and incorporating it into the cooperative relaying nodes to improve system performance. Recently, golden codes have been proven to exhibit superior performance in a wireless MIMO (Multiple Input Multiple Output) scenario compared to any other code. However, a serious limitation is their increased decoding complexity. This paper attempts to resolve this challenge through a suitable modification of the golden code such that a less complex sphere decoder can be used without much compromise in error rates. The decoder complexity is analyzed through simulation and proves to be lower than that of the conventional maximum likelihood (ML) decoder. A single-relay cooperative STBC system consisting of a source, a relay, and a destination is considered, with the decode-and-forward (DF) protocol as the cooperative strategy at the relay node. The proposed modified golden code with the less complex sphere decoder is implemented in the nodes of the cooperative relaying system, and the simulation results validate the effectiveness of the proposed scheme, offering better BER performance, lower outage probability, and increased spectral efficiency compared to non-cooperative transmission.
FPGA based efficient multiplier for image processing applications using recur... - VLSICS Design
Digital image processing applications such as medical imaging, satellite imaging, and biometric trait images rely on multipliers to improve image quality. However, existing multiplication techniques introduce errors in the output and consume more time; hence, error-free high-speed multipliers have to be designed. In this paper we propose an FPGA-based Recursive Error-Free Mitchell Log Multiplier (REFMLM) for image filters. A 2x2 error-free Mitchell log multiplier is designed with zero error by introducing an error-correction term, and it is used in higher-order Karatsuba-Ofman Multiplier (KOM) architectures. The higher-order KOM is decomposed radix-2 into lower-order multipliers down to the basic 2x2 block, which is implemented with the error-free Mitchell log multiplier. The 8x8 REFMLM is tested in a Gaussian filter for removing noise from fingerprint images. The multiplier is synthesized for the Spartan-3 FPGA family device XC3S1500-5fg320. It is observed that performance parameters such as area utilization, speed, error, and PSNR are better for the proposed architecture than for existing architectures.
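The underlying Mitchell approximation that the REFMLM corrects works by adding approximate base-2 logarithms. A quick behavioural sketch for positive integers (the paper's error-correction term is not reproduced here):

```python
def mitchell_mul(a, b):
    """Mitchell's logarithmic multiplication: approximate log2 of each
    operand as k + m, where a = 2**k * (1 + m) and k is the position
    of the leading one, add the logs, then take the antilog."""
    def log2_approx(x):
        k = x.bit_length() - 1            # index of the leading one bit
        m = (x - (1 << k)) / (1 << k)     # fractional mantissa in [0, 1)
        return k + m

    s = log2_approx(a) + log2_approx(b)
    k = int(s)
    return (1 << k) * (1 + (s - k))       # antilog: 2**k * (1 + frac)
```

Because log2(1 + m) is approximated by m, Mitchell's method always underestimates the true product (by up to about 11%), which is why an error-correction term is needed for exact results.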
User Category Based Estimation of Location Popularity using the Road GPS Traj... - Waqas Tariq
Mining user GPS trajectories to identify interesting places based on visit frequency has been well studied. However, most trajectory mining methods give every user the same importance; in reality, the popularity of a place also depends on the category of the visitor (e.g., international vs. local visitors). We propose user-category-based location popularity estimation using trajectory databases. The method involves three main steps. First, pre-processing: error correction and establishment of graph connectivity in the road network so that graph-based computations can be carried out. Second, finding the stay regions where travelers spent time off the road; visitors can then be categorized for each POI based on the travel distance from their home location. Finally, normalization and popularity estimation: the frequency and stay time of visitors of each category at the places in question are measured, a weighted sum of frequency and stay time is computed per visitor category, and the final popularity of each place is mapped into a pre-configured range. We implemented and evaluated the proposed method on a large real road GPS trajectory dataset of 182 users, collected over a period of more than three years by the Microsoft Research Asia group.
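The final step (per-category weighted sums of frequency and stay time, normalized into a fixed range) might look like the sketch below; the category weights and the 0.6/0.4 frequency/stay split are illustrative assumptions, not the paper's values:

```python
def min_max(values):
    # Map raw scores into the pre-configured [0, 1] range.
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def popularity(locations, cat_weight, w_freq=0.6, w_stay=0.4):
    """locations: {place: {category: (visit_freq, stay_time)}}.
    cat_weight: importance of each visitor category, e.g. weighting
    international visitors above locals."""
    raw = {
        place: sum(cat_weight[c] * (w_freq * f + w_stay * s)
                   for c, (f, s) in per_cat.items())
        for place, per_cat in locations.items()
    }
    return dict(zip(raw, min_max(list(raw.values()))))
```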
Predicting e-Customer behavior in B2C Relationships for CLV model - Waqas Tariq
E-commerce sales have demonstrated amazing growth in the last few years, and it is thus clear that the web is becoming an increasingly important channel; companies should strive for a successful web site. In this competition, knowing the e-customer and predicting his behavior is very important. In this paper we describe e-customer behavior in B2C relationships and, based on this behavior, describe a new model for evaluating e-customers in B2C e-commerce relationships. The key feature of our e-CLV (Electronic Customer Lifetime Value) model is that it considers the market risks affecting future customer cash flow. Many CLV models are based on simple NPV (net present value). Although simple NPV can produce a reasonable value for CLV, it ignores two important aspects of the B2C e-relationship: market risks and the large amount of customer data in the e-commerce context. Therefore, simple NPV is not enough for assessing e-CLV in high-risk B2C markets. Instead of NPV, real option analysis can lead to a better estimate of future customer cash flow: we predict all future states together with the probability of each, and then calculate a more accurate estimate of future customer cash flow. After a brief history of CLV, we explain customer behavior in B2C markets, especially for e-retailers, and then introduce our CLV model using real option analysis. Two extended examples illustrate the model and the steps in finding the CLV of a customer in a B2C relationship.
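The contrast between simple NPV and a state-weighted estimate is easy to make concrete; the discount rate and scenarios below are made up for illustration and are not from the paper:

```python
def npv(cash_flows, rate):
    # Simple net present value; cash_flows[0] is the year-0 flow.
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def expected_clv(scenarios, rate):
    """Probability-weighted value over future states, the step where
    the real-options view departs from a single deterministic NPV.
    scenarios: list of (probability, cash_flows); weights sum to 1."""
    return sum(p * npv(cfs, rate) for p, cfs in scenarios)

# A loyal-customer scenario vs. a churn scenario at a 10% discount rate:
clv = expected_clv([(0.7, [100, 100, 100]),   # stays for three years
                    (0.3, [100, 0, 0])],      # churns after year 0
                   rate=0.10)
```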
Evaluation Performance of 2nd Order Nonlinear System: Baseline Control Tunabl... - Waqas Tariq
Designing a nonlinear controller for second-order nonlinear uncertain dynamical systems (e.g., the internal combustion engine) is one of the most important and challenging tasks. This paper presents a comparative study between two important nonlinear controllers, the computed torque controller (CTC) and the sliding mode controller (SMC), applied to an internal combustion (IC) engine in the presence of uncertainties. To provide a high-performance nonlinear methodology, the sliding mode controller and the computed torque controller are selected. Pure SMC and CTC can control the partly known nonlinear dynamic parameters of the IC engine, but both have difficulty handling unstructured model uncertainties. To solve this problem, a linear error-based tuning method is applied to the SMC and CTC to adjust the sliding surface gain (λ) and the linear inner-loop gain (K): the new λ and K are obtained from the previous λ and K multiplied by a gain-updating factor (α). The results demonstrate that the error-based linear SMC and CTC are model-based controllers that work well in both certain and uncertain systems, with acceptable performance in the presence of uncertainty.
Particle Swarm Optimization in the fine-tuning of Fuzzy Software Cost Estimat...Waqas Tariq
Software cost estimation deals with the financial and strategic planning of software projects. Controlling the expensive investment of software development effectively is of paramount importance. The limitation of algorithmic effort prediction models is their inability to cope with the uncertainties and imprecision surrounding software projects at the early development stage. More recently, attention has turned to a variety of machine learning methods, and soft computing in particular, to predict software development effort. Fuzzy logic is one such technique which can cope with uncertainties. In the present paper, a Particle Swarm Optimization Algorithm (PSOA) is presented to fine-tune the fuzzy estimates for the development of software projects. The efficacy of the developed models is tested on 10 NASA software projects, 18 NASA projects, and the COCOMO 81 database on the basis of various criteria for the assessment of software cost estimation models. A comparison of all the models shows that the developed models provide better estimation.
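The fine-tuning step can be sketched roughly as follows. The single scaling factor being tuned, the PSO coefficients, and the three-project dataset are all illustrative assumptions, not the paper's actual formulation or the NASA/COCOMO data.

```python
import random

# Illustrative sketch only: PSO tuning one scaling parameter applied to
# base (fuzzy) effort estimates, minimising mean relative error (MRE).
# Dataset and PSO coefficients are invented for illustration.

def mre(scale, projects):
    # mean magnitude of relative error for estimates scale * base
    return sum(abs(actual - scale * base) / actual
               for base, actual in projects) / len(projects)

def pso(projects, n_particles=10, iters=50):
    random.seed(1)
    pos = [random.uniform(0.5, 2.0) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]
    gbest = min(pos, key=lambda p: mre(p, projects))
    for _ in range(iters):
        for i in range(n_particles):
            # inertia + cognitive + social velocity terms
            vel[i] = (0.7 * vel[i]
                      + 1.5 * random.random() * (pbest[i] - pos[i])
                      + 1.5 * random.random() * (gbest - pos[i]))
            pos[i] += vel[i]
            if mre(pos[i], projects) < mre(pbest[i], projects):
                pbest[i] = pos[i]
        gbest = min(pbest + [gbest], key=lambda p: mre(p, projects))
    return gbest

projects = [(10, 12.0), (20, 23.5), (5, 6.1)]  # (fuzzy estimate, actual effort)
scale = pso(projects)
```

In the paper's setting the tuned quantities would be fuzzy membership parameters rather than a single multiplier, but the optimisation loop has the same shape.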
Faster Case Retrieval Using Hash Indexing TechniqueWaqas Tariq
The main objective of case retrieval is to scan the case base and map the most similar old cases to a new problem. Besides accuracy, the time taken to retrieve a case is also important. With the increasing number of cases in the case base, the retrieval task is becoming more challenging, where faster retrieval time and good accuracy are the main aims. Traditionally, a sequential indexing method has been applied to search for candidate cases in the case base. This technique works fast when the number of cases is small but requires more time as the amount of data in the case base grows. As an alternative, this paper presents the integration of a hash indexing technique in case retrieval to mine large case bases and speed up retrieval time. Hash indexing locates a record by computing its index from the entry's search key alone, without traversing all records. To test the proposed method, real data, namely the Timah Tasoh Dam operational dataset (temporal data representing the historical hydrological record of daily Timah Tasoh dam operation in Perlis, Malaysia, from 1997 to 2005), was chosen for the experiment. The performance of hash indexing is then compared with the sequential method in terms of retrieval time and accuracy. The findings indicate that hash indexing is more accurate and faster than the sequential approach in retrieving cases. Moreover, a combination of hash search keys produces better results than a single search key.
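The core contrast between the two retrieval strategies can be sketched in a few lines. The record fields below are invented placeholders, not the actual Timah Tasoh dataset.

```python
# Sketch: index cases by a search key so retrieval is an average-O(1)
# hash lookup instead of an O(n) sequential scan.
# The dam-operation records below are invented placeholders.

def build_hash_index(cases, key):
    index = {}
    for case in cases:
        index.setdefault(case[key], []).append(case)
    return index

def retrieve_hashed(index, key_value):
    return index.get(key_value, [])                      # O(1) average

def retrieve_sequential(cases, key, key_value):
    return [c for c in cases if c[key] == key_value]     # O(n) scan

cases = [
    {"date": "1997-01-03", "water_level": 29.1, "gates_open": 1},
    {"date": "1997-01-04", "water_level": 29.6, "gates_open": 2},
    {"date": "1997-01-05", "water_level": 29.6, "gates_open": 2},
]
index = build_hash_index(cases, "water_level")
assert retrieve_hashed(index, 29.6) == retrieve_sequential(cases, "water_level", 29.6)
```

Both functions return the same cases; the difference is that the hashed version does not touch records whose key does not match, which is where the speedup over sequential scanning comes from as the case base grows.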
USEFul: A Framework to Mainstream Web Site Usability through Automated Evalua...Waqas Tariq
A paradox has been observed whereby web site usability is proven to be an essential element of a web site, yet at the same time there exists an abundance of web pages with poor usability. This discrepancy is the result of limitations that currently prevent web developers in the commercial sector from producing usable web sites. In this paper we propose a framework whose objective is to alleviate this problem by automating certain aspects of the usability evaluation process. Mainstreaming comes as a result of automation, enabling a non-expert in the field of usability to conduct the evaluation and thereby reducing the costs associated with such evaluation. Additionally, the framework allows the flexibility of adding, modifying, or deleting guidelines without altering the code that references them, since the guidelines and the code are two separate components. A comparison of evaluation results obtained with the framework against published evaluations of web sites carried out by web site usability professionals reveals that the framework is able to automatically identify the majority of usability violations. Due to the consistency with which it evaluates, it identified additional guideline-related violations that were not identified by the human evaluators.
Next-Generation Search Engines for Information RetrievalWaqas Tariq
In recent years, there have been significant advancements in the areas of scientific data management and retrieval techniques, particularly in terms of standards and protocols for archiving data and metadata. Scientific data is generally rich, not easy to understand, and spread across different places. In order to integrate these pieces together, a data archive and associated metadata should be generated. This data should be stored in a format that is locatable, retrievable, and understandable; more importantly, it should be in a form that will continue to be accessible as technology changes, such as XML. New search technologies are being implemented around these protocols, which makes searching easy, fast, and robust. One such system is Mercury, a metadata harvesting, data discovery, and access system built for researchers to search, share, and obtain spatiotemporal data used across a range of climate and ecological sciences.
Reusability Metrics for Object-Oriented System: An Alternative ApproachWaqas Tariq
Object-oriented metrics play an important role in ensuring the desired quality and have been widely applied to practical software projects. The benefits of object-oriented software development are increasingly leading to the development of new measurement techniques, and assessing reusability is more and more of a necessity. Reusability is the key element in reducing the cost and improving the quality of software. Generic programming helps us achieve reusability through C++ templates, which support the development of reusable software modules and help identify the effectiveness of this reuse strategy. The advantage of defining metrics for templates is the possibility of measuring the reusability of a software component and identifying the most effective reuse strategy. The need for such metrics is particularly acute when an organization is adopting a new technology for which established practices have yet to be developed. Many researchers have worked on reusability metrics [2, 9, 3, 4]. In this paper we propose four new independent metrics, Number of Template Children (NTC), Depth of Template Tree (DTT), Method Template Inheritance Factor (MTIF), and Attribute Template Inheritance Factor (ATIF), to measure reusability for object-oriented systems.
An overview on Advanced Research Works on Brain-Computer InterfaceWaqas Tariq
A brain–computer interface (BCI) is a proficient result in the research field of human–computer synergy, where direct communication between the brain and an external device occurs, resulting in augmenting, assisting, and repairing human cognition. Advanced work, such as generating brain–computer interface switch technologies for intermittent (or asynchronous) control in natural environments, or developing brain–computer interfaces with fuzzy logic systems or wavelet theory to improve their efficacy, is still ongoing, and some useful results have already been found. The requirements driving this brain–machine interface research are also growing day by day, e.g., neuropsychological rehabilitation, emotion control, etc. An overview of the control theory and some advanced work in the field of brain–machine interfaces is presented in this paper.
Design Error-based Linear Model-free Evaluation Performance Computed Torque C...Waqas Tariq
Designing a nonlinear controller for second-order nonlinear uncertain dynamical systems is one of the most challenging tasks. This research focuses on the design, implementation, and analysis of a model-free, linear, error-based tuning computed torque controller for a highly nonlinear second-order dynamic system in the presence of uncertainties. In order to provide a high-performance nonlinear methodology, the computed torque controller is selected. The pure computed torque controller can control the partly known nonlinear dynamic parameters of nonlinear systems. Although the pure computed torque controller is used in many applications, it has an important drawback, namely its nonlinear equivalent dynamic formulation under uncertain dynamic parameters, and it has difficulty handling unstructured model uncertainties. In order to handle the uncertain nonlinear dynamic parameters, ease implementation, and avoid a mathematical model-based controller, a model-free, performance/error-based linear methodology with three inputs and one output is applied to the pure computed torque controller, adjusting the linear inner loop gain (K). Since K is adjusted by the linear error-based tuning method, it is linear and continuous: the new K is obtained by multiplying the previous K by the gain updating factor (α), a coefficient that varies between one half and two. The results demonstrate that the error-based linear tuning computed torque controller is a model-based controller which works well in both certain and uncertain systems.
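The gain-update rule described above reduces to a one-line computation. The particular α values and the direction of adjustment in this sketch are illustrative assumptions; the paper derives α from its error-based tuning law.

```python
# Sketch of the gain-update rule described above: the new inner-loop
# gain is the previous gain times an updating factor alpha in [0.5, 2.0].
# The alpha values chosen below are illustrative, not from the paper.

def update_gain(k_prev, alpha):
    assert 0.5 <= alpha <= 2.0, "updating factor varies between half and two"
    return k_prev * alpha

k = 10.0
k = update_gain(k, 1.2)   # e.g. tracking error grew: raise the gain
k = update_gain(k, 0.8)   # e.g. tracking error shrank: lower the gain
print(k)
```

Because each step multiplies rather than replaces the gain, K evolves continuously from its previous value, which is the continuity property the abstract highlights.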
A multi-scale Urban Analysis Using Remote Sensing and GISWaqas Tariq
Urban planning was once very much a design and engineering exercise with the state as the single stakeholder. Mega cities with millions of inhabitants have undergone a series of physical as well as socio-economic changes over the last 60 years. In India, Hyderabad has experienced a high rate of urbanization and faces structural, environmental, social, and economic problems. To provide a holistic perspective on urban characteristics, an interdisciplinary research approach is used. GIS (Geographic Information System) and remote sensing provide advanced techniques and methods for studying urban land development and assisting urban planning.
Colorization of Greyscale Images Using Kekre’s Biorthogonal Color Spaces and ...Waqas Tariq
There is no exact solution for the colorization of greyscale images. The main focus of the techniques in [24] is to minimise the human effort needed to manually color greyscale images. Human interaction is needed only to find a reference color image; the job of transferring color traits from the reference color image to the greyscale image is then done by the techniques discussed in [1]. Here, colors from a source color image are picked up and injected into the greyscale image being colored. The color palette used in the colorization technique discussed here is generated using a modified VQ codebook obtained by applying Kekre's Fast Codebook Generation algorithm. The technique is tested using various VQ codebook sizes, such as 64, 128, and 256. In this paper, the techniques of color-trait transfer to greyscale images are revisited with various color spaces, including the newly introduced Kekre's Biorthogonal color spaces and the RGB color space. The pixel window used is of size 2x2. The color-trait transfer algorithms are tested over five different images to decide which color space gives the best quality of coloring. The experimental results show that the Kekre's Biorthogonal Green color space gives better coloring.
Neural Network Based Classification and Diagnosis of Brain HemorrhagesWaqas Tariq
The classification and diagnosis of brain hemorrhages has grown into a field of great importance, since early detection of hemorrhages reduces death rates. The purpose of this research was to detect brain hemorrhages, classify them, and provide the patient with a correct diagnosis. A possible solution to this problem is to utilize predictive techniques such as sparse component analysis and artificial neural networks to develop a method for detection and classification. In this study we considered a perceptron-based feed-forward neural network for early detection of hemorrhages. This paper also considers and discusses Computer Aided Diagnosis (CAD), which is chiefly needed for clinical diagnosis without human intervention. The paper introduces a Region Severance Algorithm (RSA) for the detection and location of hemorrhages and an algorithm for finding the threshold band. Different data sets (CT images) were taken from various machines, and the results obtained by applying our algorithm were compared with those of a domain expert. Further research is proposed to develop different models for the study of hemorrhages caused by hypertension or by an existing tumor in the brain.
This paper develops a sliding mode fuzzy controller whose sliding surface gain is tuned on-line by a minimum fuzzy inference algorithm. The main goal is to guarantee acceptable trajectory tracking between the actual and desired trajectories of a second-order nonlinear system (robot manipulator). The fuzzy controller in the proposed sliding mode fuzzy controller is based on Mamdani's fuzzy inference system (FIS) and has one input and one output. The input represents the function of the sliding function, the error, and the rate of error; the output represents torque. The fuzzy inference methodology tunes the sliding surface gain on-line based on an error-based fuzzy tuning methodology. The pure sliding mode fuzzy controller has difficulty handling unstructured model uncertainties. To solve this problem, a fuzzy-based tuning method is applied to the sliding mode fuzzy controller to adjust the sliding surface gain (λ). Since λ is adjusted by the fuzzy-based tuning method, it is nonlinear and continuous. The fuzzy-based tuning sliding mode fuzzy controller is a stable, model-free controller which eliminates the chattering phenomenon without using the boundary layer saturation function. Lyapunov stability is proved for the fuzzy-based tuning sliding mode fuzzy controller based on the switching (sign) function. This controller has acceptable performance in the presence of uncertainty (e.g., overshoot = 0%, rise time = 0.8 second, steady state error = 1e-9, and RMS error = 1.8e-12).
The Adoption Of Tailor-made IT-based Accounting Systems Within Indonesian SME...Waqas Tariq
This paper examines the adoption process of computer-based accounting information systems within Indonesian SMEs. The analysis was conducted using the interactive-process theoretical framework proposed by Slappendel [1]. The interactive process view argues that the adoption of innovation (IT in particular) should be seen as an interactive process between the individual member, the organization, and its environment. One of the emerging theories within the interactive process view is Actor Network Theory (ANT), proposed by Latour, Callon, and Law [2-7]. ANT describes the adoption process as a four-stage process, namely problematisation, interessement, enrolment, and mobilisation. We used a qualitative approach to gather data, analyze it, and draw conclusions. As a result, we identify the factors influencing the adoption process along with the failure and success of such endeavours. We argue that ANT is better at explaining the adoption of IT than other models, due to its ability to identify not only the factors but also the process of adoption, and to explain the success or failure of that process.
Different Software Testing Levels for Detecting ErrorsWaqas Tariq
Software testing is the process of uncovering requirement, design, and coding errors in a program. But software testing is not a "miracle" that can guarantee the production of a high-quality software system. To enhance the quality of software and to carry out testing in a more unified way, the testing process can be abstracted into different levels, each of which aims to test different aspects of the system. In this paper, I describe the different levels of testing and how each level attempts to detect different types of defects. The goal is to test the system against its requirements, and to test the requirements themselves.
Artificial Control of PUMA Robot Manipulator: A-Review of Fuzzy Inference Eng...Waqas Tariq
One of the most important challenges in the field of robotics is controlling robot manipulators with acceptable performance, because these systems are multi-input multi-output (MIMO), nonlinear, and uncertain. Presently, robot manipulators are used in various (unknown and/or unstructured) situations, which results in complicated systems; as a consequence, strong mathematical theory is used in new control methodologies to design nonlinear robust controllers with acceptable performance (e.g., minimum error, good trajectory tracking, disturbance rejection). Classical and non-classical methods are the two main categories of robot manipulator control: conventional (classical) control theory uses classical methods, while non-classical control theory (e.g., fuzzy logic, neural networks, and neuro-fuzzy systems) uses artificial intelligence methods. Although both conventional and artificial intelligence theories have been applied effectively in many areas, these methods also have limitations. This paper focuses on a review of the fuzzy logic controller applied to the PUMA robot manipulator.
Reactive Navigation of Autonomous Mobile Robot Using Neuro-Fuzzy SystemWaqas Tariq
Neuro-fuzzy systems have been used for robot navigation applications because of their ability to exert human-like expertise and to utilize acquired knowledge to develop autonomous navigation strategies. In this paper, a neuro-fuzzy based system is proposed for the reactive navigation of a mobile robot using behavior-based control. The proposed algorithm uses discrete-sampling-based optimal training of a neural network. To ascertain the efficacy of the proposed system, the neuro-fuzzy system's performance is compared to that of neural and fuzzy based approaches. Simulation results, along with a detailed behavior analysis, show the effectiveness of our algorithm in all kinds of obstacle environments.
Stochastic Computing Correlation Utilization in Convolutional Neural Network ...TELKOMNIKA JOURNAL
In recent years, many applications have been implemented in embedded systems and mobile Internet of Things (IoT) devices that typically have constrained resources and smaller power budgets, yet must exhibit "smartness" or intelligence. To implement computation-intensive and resource-hungry Convolutional Neural Networks (CNN) in this class of devices, many research groups have developed specialized parallel accelerators using Graphical Processing Units (GPU), Field-Programmable Gate Arrays (FPGA), or Application-Specific Integrated Circuits (ASIC). An alternative computing paradigm called Stochastic Computing (SC) can implement CNNs with a low hardware footprint and low power consumption. To enable building more efficient SC CNNs, this work incorporates CNN basic functions in SC that exploit correlation, share Random Number Generators (RNG), and are more robust to rounding error. Experimental results show that our proposed solution provides significant savings in hardware footprint and increased accuracy for the SC CNN basic function circuits compared to previous work.
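A minimal sketch of how correlation changes what an SC circuit computes: this is the general unipolar-SC behaviour (one AND gate multiplies uncorrelated streams but takes the minimum of maximally correlated ones), not the paper's specific circuits.

```python
import random

# Unipolar stochastic computing sketch: a bitstream encodes a value in
# [0, 1] as the probability of a 1. A single AND gate multiplies two
# UNcorrelated streams; with maximally correlated streams the same gate
# computes min(a, b) instead. Stream length and seeds are arbitrary.

def bitstream(value, length, rng):
    return [1 if rng.random() < value else 0 for _ in range(length)]

def sc_and(xs, ys):
    return [x & y for x, y in zip(xs, ys)]

def decode(bits):
    return sum(bits) / len(bits)

rng = random.Random(0)
n = 10_000
a, b = 0.5, 0.8

# Independent draws -> AND approximates a * b = 0.4
prod = decode(sc_and(bitstream(a, n, rng), bitstream(b, n, rng)))

# Shared random positions (maximal correlation) -> AND approximates min(a, b) = 0.5
rng2 = random.Random(1)
shared = [rng2.random() for _ in range(n)]
xa = [1 if u < a else 0 for u in shared]
xb = [1 if u < b else 0 for u in shared]
mn = decode(sc_and(xa, xb))
```

This is why RNG sharing matters in SC designs: reusing one RNG deliberately correlates streams, which can implement min/max-style functions with trivial gates, while multiplication requires independence.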
DESIGN OF PARITY PRESERVING LOGIC BASED FAULT TOLERANT REVERSIBLE ARITHMETIC ...VLSICS Design
Reversible logic is gaining significant consideration as a potential logic design style for implementation in modern nanotechnology and quantum computing, with minimal impact on physical entropy. Fault tolerant reversible logic is the class of reversible logic that maintains the parity of the inputs and the outputs. Significant contributions have been made in the literature towards the design of fault tolerant reversible logic gate structures and arithmetic units; however, there are not many efforts directed towards the design of fault tolerant reversible ALUs. The Arithmetic Logic Unit (ALU) is the prime performing unit in any computing device, and it has to be made fault tolerant. In this paper we aim to design one such fault tolerant reversible ALU, constructed using parity preserving reversible logic gates. The designed ALU can perform up to seven arithmetic operations and four logical operations.
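As a small illustration of the parity-preserving property such gates rely on, the Fredkin gate (a standard conservative reversible gate, used here as a generic example rather than the paper's specific gate set) can be checked exhaustively:

```python
from itertools import product

# The Fredkin gate swaps its two data bits when the control is 1.
# It is conservative (the number of 1s never changes), so input parity
# always equals output parity: a single bit-flip fault between gates
# is therefore detectable by a parity check.

def fredkin(c, a, b):
    return (c, b, a) if c == 1 else (c, a, b)

for bits in product((0, 1), repeat=3):
    out = fredkin(*bits)
    assert sum(bits) % 2 == sum(out) % 2, "parity must be preserved"
```

The gate is also its own inverse (applying it twice restores the inputs), which is the reversibility property; the parity check above is the separate fault-tolerance property the abstract refers to.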
IMPLEMENTATION OF EFFICIENT ADDER USING MULTI VALUE LOGIC TECHNIQUEJournal For Research
Digital logic circuits are constrained by their interconnect requirements. This difficulty can be overcome by carrying a larger set of signal values over the same chip area, which is why Multiple-Valued Logic (MVL) designs are important from that perspective. This paper presents the design of multiple-valued half adder and full adder circuits. The technique is advantageous for large-scale circuits, where reducing power dissipation while increasing speed can lead to the extremely low-energy, high-performance circuits required for a number of applications. The presented adders are designed in Multiple-Valued Voltage-Mode Logic (MV-VML). In the quaternary half adder, quaternary logic levels are exploited for addition, and the addition operation is executed with a minimum number of gates and a shallow net depth. The design is targeted at 0.18 μm CMOS technology, and the circuit is designed using the Advanced Design System (ADS) software. In this paper we evaluate the area, power, and speed of the designed quaternary adders (HAq/FAq), which need no conversion step, and compare them to existing binary circuits (HAb/FAb).
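A behavioural model of the quaternary half adder's truth table might look like this; the actual design is a voltage-mode analog circuit, so this sketch only captures the logic function it implements, not the circuit itself.

```python
# Behavioural sketch of a quaternary (radix-4) half adder: two base-4
# digits in, a base-4 sum digit and a carry out. This models only the
# truth table, not the MV-VML voltage-mode implementation.

def quaternary_half_adder(a, b):
    assert a in range(4) and b in range(4), "inputs are base-4 digits"
    total = a + b
    return total % 4, total // 4   # (sum digit, carry)

# e.g. 3 + 2 = 5 = 11 in base 4: sum digit 1, carry 1
assert quaternary_half_adder(3, 2) == (1, 1)
```

Each quaternary digit carries two bits of information, which is the source of the interconnect saving: one wire at four voltage levels replaces two binary wires.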
Investigating the Performance of NoC Using Hierarchical Routing ApproachIJERA Editor
The Network-on-Chip (NoC) model has appeared as a revolutionary methodology for incorporating a large number of intellectual property (IP) blocks in a die. According to the International Technology Roadmap for Semiconductors (ITRS), device sizes must be scaled down and long interconnections should be avoided; for that, new interconnect patterns are needed. Three-dimensional ICs are capable of achieving superior performance, better noise resistance, and lower interconnect power consumption compared to traditional planar ICs. In this paper, network data is routed by a hierarchical methodology. We analyze the total number of logic gates and registers, the power consumption, and the delay when different widths of data are transmitted, using the Quartus II software.
Digital Wave Formulation of Quasi-Static Partial Element Equivalent Circuit M...Piero Belforte
This presentation shows a digital wave formulation (DWF) of the quasi-static Partial Element Equivalent Circuit (PEEC) formulation. Through a pertinent change of variables and the choice of a specific implementation of PEEC cell elements in the digital wave domain, the standard PEEC model is transformed into, and solved as, a wave digital network. The reported example shows the accuracy and the significant speedup, up to 627X, of the proposed DWF-based PEEC solver compared to the standard Spice solution.
Presented at SPI2016, Turin, May 2016.
Rooter: A Methodology for the Typical Unification of Access Points and Redundancy
Many physicists would agree that, had it not been for congestion control, the evaluation of web browsers might never have occurred. In fact, few hackers worldwide would disagree with the essential unification of voice-over-IP and public-private key pairs. In order to solve this riddle, we confirm that SMPs can be made stochastic, cacheable, and interposable.
Performance Analysis of Encoder in Different Logic Techniques for High-Speed ...Achintya Kumar
In designing a system, we can replace cell components with cells based on an appropriate logic technique so that the noise margin of the overall circuit is improved. In future work, these techniques could also be applied to sequential circuits.
An optimised multi value logic cell design with new architecture of many val...IJMER
The proposed thesis work is the design of a multi-logic memory cell with four logic levels, which can hold Logic 0, Logic 1, Logic 2, and Logic 3, together with an interface module between the multi-logic system and binary systems. This work can reduce the number of wires required for a parallel interface with normal memory and can also increase the speed of simple serial data transfer.
This paper investigates the possibility of reducing power consumption in neural networks using approximate computing techniques. The authors compare a traditional fixed-point neuron with an approximated neuron composed of approximated multipliers and adders. Experiments show that in the proposed case study (a wine classifier), the approximated neuron saves up to 43% of the area, reduces power consumption by 35%, and improves the maximum clock frequency by 20%.
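One way the approximated-multiplier idea can be sketched is an operand-truncating fixed-point multiply; the 8-bit fractional format and 3-bit truncation below are assumptions for illustration, not the authors' exact design.

```python
# Illustrative sketch: an approximated fixed-point neuron that drops
# low-order operand bits before multiplying, trading a small accuracy
# loss for cheaper multiplier hardware. Format and truncation width
# are assumptions, not the paper's exact design. Positive values only.

FRAC = 8  # Q-format fractional bits

def to_fixed(x):
    return int(round(x * (1 << FRAC)))

def approx_mul(a, b, drop=3):
    # zero the low-order bits of each operand (models a truncated multiplier)
    a_t = (a >> drop) << drop
    b_t = (b >> drop) << drop
    return (a_t * b_t) >> FRAC

def neuron(inputs, weights, drop=3):
    acc = sum(approx_mul(to_fixed(x), to_fixed(w), drop)
              for x, w in zip(inputs, weights))
    return acc / (1 << FRAC)   # back to real units

exact = sum(x * w for x, w in zip([0.5, 0.25], [0.75, 0.5]))
approx = neuron([0.5, 0.25], [0.75, 0.5])
```

In hardware, zeroing the dropped bits means the partial-product rows they would generate are never built, which is where the area and power savings come from.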
A Novel Design of 4 Bit Johnson Counter Using Reversible Logic Gatesijsrd.com
In recent years, reversible logic circuits have attracted considerable attention in fields such as nanotechnology, quantum computing, cryptography, optical computing, and low-power circuit design, due to their low power dissipation. In this paper we propose the design of a 4-bit Johnson counter which uses reversible gates, and we derive the quantum cost, constant inputs, garbage outputs, and number of gates needed to implement it.
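The counting sequence such a counter realises can be sketched behaviourally; the reversible-gate implementation itself is not shown here, and this model only captures the state sequence a 4-bit Johnson (twisted-ring) counter produces.

```python
# Behavioural sketch of a 4-bit Johnson counter: on each clock the
# register shifts right and the COMPLEMENT of the last bit feeds back
# into the first. An n-bit Johnson counter cycles through 2n states.

def johnson_step(state):              # state: tuple of 4 bits
    return (1 - state[-1],) + state[:-1]

state = (0, 0, 0, 0)
sequence = []
for _ in range(8):                    # 4-bit counter has period 2*4 = 8
    sequence.append(state)
    state = johnson_step(state)
assert state == (0, 0, 0, 0)          # back to the start after 8 clocks
```

Because consecutive states differ in exactly one bit, Johnson counters are glitch-friendly; a reversible implementation would realise the same shift-and-complement step with gates such as the Feynman (CNOT) gate.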
The Use of Java Swing’s Components to Develop a WidgetWaqas Tariq
A widget is a kind of application that provides a single service such as a map, news feed, simple clock, battery-life indicator, etc. This kind of interactive software object has been developed to facilitate user interface (UI) design; a UI function may be implemented using different widgets with the same function. In this article, we present the widget as a platform that is generally used in various applications, such as on the desktop, in the web browser, and on the mobile phone. We also describe a visual menu of Java Swing's components that will be used to build a widget. It is assumed that the reader has successfully compiled and run a program that uses Swing components.
3D Human Hand Posture Reconstruction Using a Single 2D ImageWaqas Tariq
Passive sensing of the 3D geometric posture of the human hand has been studied extensively over the past decade. However, these research efforts have been hampered by the computational complexity caused by inverse kinematics and 3D reconstruction. In this paper, we focus on 3D hand posture estimation from a single 2D image, with robotic applications in mind. We introduce a human hand model with 27 degrees of freedom (DOFs) and analyze some of its constraints to reduce the DOFs without any significant degradation of performance. A novel algorithm to estimate the 3D hand posture from eight 2D projected feature points is proposed. Experimental results using real images confirm that our algorithm gives good estimates of the 3D hand pose. Keywords: 3D hand posture estimation; model-based approach; gesture recognition; human-computer interface; machine vision.
Camera as Mouse and Keyboard for Handicap Person with Troubleshooting Ability...Waqas Tariq
The camera mouse has been widely used by handicapped persons to interact with the computer. Above all, a camera mouse must be able to replace all the roles of the typical mouse and keyboard: it must provide all mouse click events and keyboard functions (including all shortcut keys), it must allow users to troubleshoot by themselves, and it must eliminate the neck fatigue that arises when it is used over a long period. In this paper, we propose a camera mouse system with a timer as the left click event and blinking as the right click event. We also modify the original on-screen keyboard layout by adding two buttons (a "drag/drop" button for mouse drag and drop events, and a button to call the task manager for troubleshooting) and by changing the behavior of the CTRL, ALT, SHIFT, and CAPS LOCK keys in order to provide keyboard shortcuts. Furthermore, we develop a recovery method which allows users to leave the camera's view and come back again, in order to eliminate neck fatigue. Experiments involving several users were conducted in our laboratory. The results show that our camera mouse allows users to perform typing, left and right click events, drag and drop events, and troubleshooting without using their hands. With this system, handicapped persons can use the computer more comfortably and reduce dryness of the eyes.
A Proposed Web Accessibility Framework for the Arab DisabledWaqas Tariq
The Web is providing unprecedented access to information and interaction for people with disabilities. This paper presents a Web accessibility framework which offers ease of Web access for disabled Arab users and facilitates their lifelong learning as well. The proposed framework provides the disabled Arab user with an easy means of access in their mother language, so they do not have to overcome the barrier of learning the target spoken language. The framework is based on analyzing the web page's meta-language, extracting its content, and reformulating it in a format suitable for disabled users. The basic objective of this framework is to support the equal rights of disabled Arab people to access education and training alongside non-disabled people. Keywords: Arabic Moon code, Arabic Sign Language, Deaf, Deaf-blind, E-learning Interactivity, Moon code, Web accessibility, Web framework, Web system, WWW.
Real Time Blinking Detection Based on Gabor FilterWaqas Tariq
A new method of blinking detection is proposed. Above all, a blinking detection method must be robust against different users, noise, and changes in eye shape. In this paper, we propose a blinking detection method that measures the distance between the two arcs of the eye (upper part and lower part). We detect the eye arcs by applying a Gabor filter to the eye image; the Gabor filter is advantageous in image processing applications since it extracts spatially localized spectral features, so shapes such as lines and arcs are more easily detected. After the two eye arcs are detected, we measure the distance between them using a connected-component labeling method. The eye is judged open when the distance between the two arcs exceeds a threshold, and closed when the distance is below it. The experimental results show that our proposed method is robust against different users, noise, and eye shape changes, with perfect accuracy.
Computer Input with Human Eyes-Only Using Two Purkinje Images Which Works in ...Waqas Tariq
A method for computer input using the eyes only, based on two Purkinje images, which works in real time without calibration, is proposed. Experimental results show that the cornea curvature can be estimated using Purkinje images derived from two light sources, so that no calibration is needed to reduce person-to-person differences in cornea curvature. It is found that the proposed system tolerates user movements of 30 degrees in the roll direction and 15 degrees in the pitch direction by utilizing the detected face attitude, which is derived from the face plane consisting of three feature points: the two eyes and the nose or mouth. It is also found that the proposed system works in real time.
Toward a More Robust Usability concept with Perceived Enjoyment in the contex...Waqas Tariq
Mobile multimedia service is relatively new but has quickly dominated people¡¯s lives, especially among young people. To explain this popularity, this study applies and modifies the Technology Acceptance Model (TAM) to propose a research model and conduct an empirical study. The goal of study is to examine the role of Perceived Enjoyment (PE) and what determinants can contribute to PE in the context of using mobile multimedia service. The result indicates that PE is influencing on Perceived Usefulness (PU) and Perceived Ease of Use (PEOU) and directly Behavior Intention (BI). Aesthetics and flow are key determinants to explain Perceived Enjoyment (PE) in mobile multimedia usage.
Collaborative Learning of Organisational KnolwedgeWaqas Tariq
This paper presents recent research into methods used in Australian Indigenous Knowledge sharing and looks at how these can support the creation of suitable collaborative envi- ronments for timely organisational learning. The protocols and practices as used today and in the past by Indigenous communities are presented and discussed in relation to their relevance to a personalised system of knowledge sharing in modern organisational cultures. This research focuses on user models, knowledge acquisition and integration of data for constructivist learning in a networked repository of or- ganisational knowledge. The data collected in the repository is searched to provide collections of up-to-date and relevant material for training in a work environment. The aim is to improve knowledge collection and sharing in a team envi- ronment. This knowledge can then be collated into a story or workflow that represents the present knowledge in the organisation.
Our research aims to propose a global approach for specification, design and verification of context awareness Human Computer Interface (HCI). This is a Model Based Design approach (MBD). This methodology describes the ubiquitous environment by ontologies. OWL is the standard used for this purpose. The specification and modeling of Human-Computer Interaction are based on Petri nets (PN). This raises the question of representation of Petri nets with XML. We use for this purpose, the standard of modeling PNML. In this paper, we propose an extension of this standard for specification, generation and verification of HCI. This extension is a methodological approach for the construction of PNML with Petri nets. The design principle uses the concept of composition of elementary structures of Petri nets as PNML Modular. The objective is to obtain a valid interface through verification of properties of elementary Petri nets represented with PNML.
Development of Sign Signal Translation System Based on Altera’s FPGA DE2 BoardWaqas Tariq
The main aim of this paper is to build a system that is capable of detecting and recognizing the hand gesture in an image captured by using a camera. The system is built based on Altera’s FPGA DE2 board, which contains a Nios II soft core processor. Image processing techniques and a simple but effective algorithm are implemented to achieve this purpose. Image processing techniques are used to smooth the image in order to ease the subsequent processes in translating the hand sign signal. The algorithm is built for translating the numerical hand sign signal and the result are displayed on the seven segment display. Altera’s Quartus II, SOPC Builder and Nios II EDS software are used to construct the system. By using SOPC Builder, the related components on the DE2 board can be interconnected easily and orderly compared to traditional method that requires lengthy source code and time consuming. Quartus II is used to compile and download the design to the DE2 board. Then, under Nios II EDS, C programming language is used to code the hand sign translation algorithm. Being able to recognize the hand sign signal from images can helps human in controlling a robot and other applications which require only a simple set of instructions provided a CMOS sensor is included in the system.
Exploring the Relationship Between Mobile Phone and Senior Citizens: A Malays...Waqas Tariq
There is growing ageing phenomena with the rise of ageing population throughout the world. According to the World Health Organization (2002), the growing ageing population indicates 694 million, or 223% is expected for people aged 60 and over, since 1970 and 2025.The growth is especially significant in some advanced countries such as North America, Japan, Italy, Germany, United Kingdom and so forth. This growing older adult population has significantly impact the social-culture, lifestyle, healthcare system, economy, infrastructure and government policy of a nation. However, there are limited research studies on the perception and usage of a mobile phone and its service for senior citizens in a developing nation like Malaysia. This paper explores the relationship between mobile phones and senior citizens in Malaysia from the perspective of a developing country. We conducted an exploratory study using contextual interviews with 5 senior citizens of how they perceive their mobile phones. This paper reveals 4 interesting themes from this preliminary study, in addition to the findings of the desirable mobile requirements for local senior citizens with respect of health, safety and communication purposes. The findings of this study bring interesting insight to local telecommunication industries as a whole, and will also serve as groundwork for more in-depth study in the future.
Principles of Good Screen Design in WebsitesWaqas Tariq
Visual techniques for proper arrangement of the elements on the user screen have helped the designers to make the screen look good and attractive. Several visual techniques emphasize the arrangement and ordering of the screen elements based on particular criteria for best appearance of the screen. This paper investigates few significant visual techniques in various web user interfaces and showcases the results for better understanding and their presence.
Virtual teams are used more and more by companies and other organizations to receive benefits. They are a great way to enable teamwork in situations where people are not sitting in the same physical place at the same time. As companies seek to increase the use of virtual teams, a need exists to explore the context of these teams, the virtuality of a team and software that may help these teams working virtualy. Virtual teams have the same basic principles as traditional teams, but there is one big difference. This difference is the way the team members communicate. Instead of using the dynamics of in-office face-to-face exchange, they now rely on special communication channels enabled by modern technologies, such as e-mails, faxes, phone calls and teleconferences, virtual meetings etc. This is why this paper is focused on the issues regarding virtual teams, and how these teams are created and progressing in Albania.
Cognitive Approach Towards the Maintenance of Web-Sites Through Quality Evalu...Waqas Tariq
It is a well established fact that the Web-Applications require frequent maintenance because of cutting– edge business competitions. The authors have worked on quality evaluation of web-site of Indian ecommerce domain. As a result of that work they have made a quality-wise ranking of these sites. According to their work and also the survey done by various other groups Futurebazaar web-site is considered to be one of the best Indian e-shopping sites. In this research paper the authors are assessing the maintenance of the same site by incorporating the problems incurred during this evaluation. This exercise gives a real world maintainability problem of web-sites. This work will give a clear picture of all the quality metrics which are directly or indirectly related with the maintainability of the web-site.
Robot Arm Utilized Having Meal Support System Based on Computer Input by Huma...Waqas Tariq
A robot arm utilized having meal support system based on computer input by human eyes only is proposed. The proposed system is developed for handicap/disabled persons as well as elderly persons and tested with able persons with several shapes and size of eyes under a variety of illumination conditions. The test results with normal persons show the proposed system does work well for selection of the desired foods and for retrieve the foods as appropriate as usersf requirements. It is found that the proposed system is 21% much faster than the manually controlled robotics.
Dynamic Construction of Telugu Speech Corpus for Voice Enabled Text EditorWaqas Tariq
In recent decades speech interactive systems have gained increasing importance. Performance of an ASR system mainly depends on the availability of large corpus of speech. The conventional method of building a large vocabulary speech recognizer for any language uses a top-down approach to speech. This approach requires large speech corpus with sentence or phoneme level transcription of the speech utterances. The transcriptions must also include different speech order so that the recognizer can build models for all the sounds present. But, for Telugu language, because of its complex nature, a very large, well annotated speech database is very difficult to build. It is very difficult, if not impossible, to cover all the words of any Indian language, where each word may have thousands and millions of word forms. A significant part of grammar that is handled by syntax in English (and other similar languages) is handled within morphology in Telugu. Phrases including several words (that is, tokens) in English would be mapped on to a single word in Telugu.Telugu language is phonetic in nature in addition to rich in morphology. That is why the speech technology developed for English cannot be applied to Telugu language. This paper highlights the work carried out in an attempt to build a voice enabled text editor with capability of automatic term suggestion. Main claim of the paper is the recognition enhancement process developed by us for suitability of highly inflecting, rich morphological languages. This method results in increased speech recognition accuracy with very much reduction in corpus size. It also adapts Telugu words to the database dynamically, resulting in growth of the corpus.
An Improved Approach for Word Ambiguity RemovalWaqas Tariq
Word ambiguity removal is a task of removing ambiguity from a word, i.e. correct sense of word is identified from ambiguous sentences. This paper describes a model that uses Part of Speech tagger and three categories for word sense disambiguation (WSD). Human Computer Interaction is very needful to improve interactions between users and computers. For this, the Supervised and Unsupervised methods are combined. The WSD algorithm is used to find the efficient and accurate sense of a word based on domain information. The accuracy of this work is evaluated with the aim of finding best suitable domain of word. Keywords: Human Computer Interaction, Supervised Training, Unsupervised Learning, Word Ambiguity, Word sense disambiguation
Parameters Optimization for Improving ASR Performance in Adverse Real World N...Waqas Tariq
From the existing research it has been observed that many techniques and methodologies are available for performing every step of Automatic Speech Recognition (ASR) system, but the performance (Minimization of Word Error Recognition-WER and Maximization of Word Accuracy Rate- WAR) of the methodology is not dependent on the only technique applied in that method. The research work indicates that, performance mainly depends on the category of the noise, the level of the noise and the variable size of the window, frame, frame overlap etc is considered in the existing methods. The main aim of the work presented in this paper is to use variable size of parameters like window size, frame size and frame overlap percentage to observe the performance of algorithms for various categories of noise with different levels and also train the system for all size of parameters and category of real world noisy environment to improve the performance of the speech recognition system. This paper presents the results of Signal-to-Noise Ratio (SNR) and Accuracy test by applying variable size of parameters. It is observed that, it is really very hard to evaluate test results and decide parameter size for ASR performance improvement for its resultant optimization. Hence, this study further suggests the feasible and optimum parameter size using Fuzzy Inference System (FIS) for enhancing resultant accuracy in adverse real world noisy environmental conditions. This work will be helpful to give discriminative training of ubiquitous ASR system for better Human Computer Interaction (HCI). Keywords: ASR Performance, ASR Parameters Optimization, Multi-Environmental Training, Fuzzy Inference System for ASR, ubiquitous ASR system, Human Computer Interaction (HCI)
Interface on Usability Testing Indonesia Official Tourism WebsiteWaqas Tariq
Ministry of Tourism and Creative Economy of The Republic of Indonesia must meet the wide audience various needs and should reach people from all levels of society around the world to provide Indonesia tourism and travel information. This article will gives the details in the evolution of one important component of Indonesia Official Tourism Website as it has grown in functionality and usefulness over several years of use by a live, unrestricted community. We chose this website to see the website interface design and usability and to popularize Indonesia tourism and travel highlights. The analysis done by looking at the criteria specified for usability testing. Usability testing measures are the ease of use (effectiveness, efficiency, consistency and interface design), easy to learn, errors and syntax which is related to the human computer interaction. The purpose of this article is to test the usability level of the website, analyze the website interface design, and provide suggestions for improvements in Indonesia Official Tourism Website of analysis we have done before.
Monitoring and Visualisation Approach for Collaboration Production Line Envir...Waqas Tariq
In this paper, a tool, called SPMonitor, to monitor and visualize of run-time execution productive processes is proposed. SPMonitor enables dynamically visualizing and monitoring workflows running in a system. It displays versatile information about currently executed workflows providing better understanding about processes and the general functionality of the domain. Moreover, SPMonitor enhances cooperation between different stakeholders by offering extensive communication and problem solving features that allow actors concerned to react more efficiently to different anomalies that may occur during a workflow execution. The ideas discussed are validated through the study of real case related to airbus assembly lines.
Read| The latest issue of The Challenger is here! We are thrilled to announce that our school paper has qualified for the NATIONAL SCHOOLS PRESS CONFERENCE (NSPC) 2024. Thank you for your unwavering support and trust. Dive into the stories that made us stand out!
Students, digital devices and success - Andreas Schleicher - 27 May 2024..pptxEduSkills OECD
Andreas Schleicher presents at the OECD webinar ‘Digital devices in schools: detrimental distraction or secret to success?’ on 27 May 2024. The presentation was based on findings from PISA 2022 results and the webinar helped launch the PISA in Focus ‘Managing screen time: How to protect and equip students against distraction’ https://www.oecd-ilibrary.org/education/managing-screen-time_7c225af4-en and the OECD Education Policy Perspective ‘Students, digital devices and success’ can be found here - https://oe.cd/il/5yV
Model Attribute Check Company Auto PropertyCeline George
In Odoo, the multi-company feature allows you to manage multiple companies within a single Odoo database instance. Each company can have its own configurations while still sharing common resources such as products, customers, and suppliers.
Welcome to TechSoup New Member Orientation and Q&A (May 2024).pdfTechSoup
In this webinar you will learn how your organization can access TechSoup's wide variety of product discount and donation programs. From hardware to software, we'll give you a tour of the tools available to help your nonprofit with productivity, collaboration, financial management, donor tracking, security, and more.
The Roman Empire A Historical Colossus.pdfkaushalkr1407
The Roman Empire, a vast and enduring power, stands as one of history's most remarkable civilizations, leaving an indelible imprint on the world. It emerged from the Roman Republic, transitioning into an imperial powerhouse under the leadership of Augustus Caesar in 27 BCE. This transformation marked the beginning of an era defined by unprecedented territorial expansion, architectural marvels, and profound cultural influence.
The empire's roots lie in the city of Rome, founded, according to legend, by Romulus in 753 BCE. Over centuries, Rome evolved from a small settlement to a formidable republic, characterized by a complex political system with elected officials and checks on power. However, internal strife, class conflicts, and military ambitions paved the way for the end of the Republic. Julius Caesar’s dictatorship and subsequent assassination in 44 BCE created a power vacuum, leading to a civil war. Octavian, later Augustus, emerged victorious, heralding the Roman Empire’s birth.
Under Augustus, the empire experienced the Pax Romana, a 200-year period of relative peace and stability. Augustus reformed the military, established efficient administrative systems, and initiated grand construction projects. The empire's borders expanded, encompassing territories from Britain to Egypt and from Spain to the Euphrates. Roman legions, renowned for their discipline and engineering prowess, secured and maintained these vast territories, building roads, fortifications, and cities that facilitated control and integration.
The Roman Empire’s society was hierarchical, with a rigid class system. At the top were the patricians, wealthy elites who held significant political power. Below them were the plebeians, free citizens with limited political influence, and the vast numbers of slaves who formed the backbone of the economy. The family unit was central, governed by the paterfamilias, the male head who held absolute authority.
Culturally, the Romans were eclectic, absorbing and adapting elements from the civilizations they encountered, particularly the Greeks. Roman art, literature, and philosophy reflected this synthesis, creating a rich cultural tapestry. Latin, the Roman language, became the lingua franca of the Western world, influencing numerous modern languages.
Roman architecture and engineering achievements were monumental. They perfected the arch, vault, and dome, constructing enduring structures like the Colosseum, Pantheon, and aqueducts. These engineering marvels not only showcased Roman ingenuity but also served practical purposes, from public entertainment to water supply.
The French Revolution, which began in 1789, was a period of radical social and political upheaval in France. It marked the decline of absolute monarchies, the rise of secular and democratic republics, and the eventual rise of Napoleon Bonaparte. This revolutionary period is crucial in understanding the transition from feudalism to modernity in Europe.
For more information, visit-www.vavaclasses.com
Operation “Blue Star” is the only event in the history of Independent India where the state went into war with its own people. Even after about 40 years it is not clear if it was culmination of states anger over people of the region, a political game of power or start of dictatorial chapter in the democratic setup.
The people of Punjab felt alienated from main stream due to denial of their just demands during a long democratic struggle since independence. As it happen all over the word, it led to militant struggle with great loss of lives of military, police and civilian personnel. Killing of Indira Gandhi and massacre of innocent Sikhs in Delhi and other India cities was also associated with this movement.
Ethnobotany and Ethnopharmacology:
Ethnobotany in herbal drug evaluation,
Impact of Ethnobotany in traditional medicine,
New development in herbals,
Bio-prospecting tools for drug discovery,
Role of Ethnopharmacology in drug evaluation,
Reverse Pharmacology.
How to Create Map Views in the Odoo 17 ERPCeline George
The map views are useful for providing a geographical representation of data. They allow users to visualize and analyze the data in a more intuitive manner.
Instructions for Submissions thorugh G- Classroom.pptxJheel Barad
This presentation provides a briefing on how to upload submissions and documents in Google Classroom. It was prepared as part of an orientation for new Sainik School in-service teacher trainees. As a training officer, my goal is to ensure that you are comfortable and proficient with this essential tool for managing assignments and fostering student engagement.
Reliability Improvement in Logic Circuit Stochastic Computation

Jeremy M. Lakes & Samuel C. Lee
International Journal of Logic and Computation (IJLP), Volume (2) : Issue (2) : 2011

Jeremy M. Lakes    jlakes@ou.edu
Electrical and Computer Engineering
University of Oklahoma
Norman, OK 73019 USA

Dr. Samuel C. Lee    samlee@ou.edu
Electrical and Computer Engineering
University of Oklahoma
Norman, OK 73019 USA
Abstract

Defects and faults arise from physical imperfections and the noise susceptibility of the analog circuit components used to build digital circuits, resulting in computational errors. A probabilistic computational model is needed to quantify and analyze the effect of noisy signals on computational accuracy in digital circuits. Such a model computes the reliability of a digital circuit, which requires that its inputs, outputs, and implemented logic function be treated probabilistically. The purpose of this paper is to present a new architecture for designing noise-tolerant digital circuits. The proposed approach uses a class of single-input, single-output circuits called Reliability Enhancement Network Chains (RENCs). A RENC is a concatenation of n simple logic circuits called Reliability Enhancement Networks (RENs). Each REN can raise the reliability of a digital circuit to a higher level, so the reliability of the circuit can approach any desired level when a RENC composed of a sufficient number of RENs is employed. Moreover, the proposed approach is applicable to the design of any logic circuit implemented in any logic technology.

Keywords: Digital Design, Fault Tolerance, Reliability, Stochastic Model, Probabilistic Model, Noisy Signals.
1. INTRODUCTION
One of the pioneers of reliable circuit design was J. von Neumann in 1952 [1]. Von Neumann created a technique called multiplexing that is capable of producing reliable computing units using only unreliable devices [1]. The technique consists of two stages, called the executive stage and the restorative stage. The executive stage replaces a processing unit with N multiplexed processing units that have N copies of each input and N copies of each output of the unit. The outputs of the executive stage are then fed into the restorative stage, which suppresses the errors introduced in the executive stage. The best-known example of this is von Neumann's NAND multiplexing. Assuming that NAND gates were unreliable and failed at a constant rate, von Neumann discovered that he could treat a bundle of unreliable gates as an ideal reliable gate, provided gate failures were sufficiently rare and independent. He calculated an upper bound of 0.0107 as the maximum tolerable rate of gate failure for his method to work. However, in 1998 W. Evans and N. Pippenger found that the upper bound on the failure probability of each NAND gate is about 0.08856 [2]. This multiplexing method is effective at suppressing both permanent and transient faults because of the large amount of redundancy used.
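The multiplexing idea above can be sketched in a few lines of simulation. The following is a rough Monte Carlo illustration, not von Neumann's exact construction: the bundle size N, the failure rate eps, the random input pairing, and the two-layer restorative stage are our simplifying assumptions.

```python
import random

def noisy_nand(a, b, eps):
    """NAND gate that flips its output with probability eps."""
    out = 1 - (a & b)
    return out ^ (random.random() < eps)

def nand_multiplex(bundle_a, bundle_b, eps):
    """Executive stage: N noisy NANDs with randomly paired inputs."""
    random.shuffle(bundle_b)
    return [noisy_nand(a, b, eps) for a, b in zip(bundle_a, bundle_b)]

def restore(bundle, eps):
    """Restorative stage: two NAND layers acting as a noisy identity,
    since NAND(x, x) = NOT x and NOT(NOT x) = x."""
    inverted = nand_multiplex(bundle, list(bundle), eps)
    return nand_multiplex(inverted, list(inverted), eps)

def run(N=1000, eps=0.005, a=1, b=1):
    out = nand_multiplex([a] * N, [b] * N, eps)
    out = restore(out, eps)
    return sum(out) / N  # fraction of lines carrying 1; ideal NAND(1, 1) = 0

print(run())
```

With a small eps, the fraction of erroneous lines in the bundle stays small after restoration, which is the sense in which the bundle behaves like a single reliable gate.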
Von Neumann's research was furthered in 1977 when a group showed that logarithmic redundancy is sufficient to implement most Boolean functions [3]. Later that same year they also proved that some Boolean functions require at least logarithmic redundancy [4]. Almost a decade later Nicholas Pippenger proved that a Boolean function computable with O(n) noiseless gates can be reliably computed using only O(cn) noisy gates [5], [6]. This means, of course, that the number of noisy gates needed to reliably implement a Boolean function differs only by a multiplicative constant. Then in 1988, Pippenger showed that the fault probability of each gate must be below 0.5 for von Neumann's method to work [7]. Additionally, he explained that
more layers of redundancy are needed to compute reliably in a noisy environment [7]. Overall, R. L. Dobrushin, S. I. Ortyukov, and N. Pippenger strengthened von Neumann's theories by refining his constructions and providing rigorous proofs.

Much more recently, groups have extended von Neumann's multiplexing method by using Probabilistic Model Checking (PMC) to calculate the redundancy-reliability tradeoffs involved in designing digital circuits [8], [9]. PMC is an algorithmic procedure that determines whether a given stochastic system satisfies probabilistic specifications. These groups developed a defect-tolerant CAD framework based on PMC, called NANOPRISM, so that optimized systems can be simulated with a desired level of reliability built into the circuit [8]. They then furthered their research by showing that NAND multiplexing can achieve higher reliability at lower redundancy rates than von Neumann originally proposed [9]. Their system can take an entire Boolean network as input and evaluate the reliability-redundancy tradeoffs so that a circuit designer can decide based on the cost of reliability. The research in [8] and [9] focuses on correcting reliability issues stemming from both permanent and transient faults.
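As a toy illustration of the flavor of such checking (this is not NANOPRISM or any real PMC tool), the sketch below exhaustively enumerates the gate-flip events of a two-gate network, computes the exact error probability, and tests it against a probabilistic specification. The network (an AND built from two noisy NANDs) and the failure rate eps are our assumptions.

```python
from itertools import product

def noisy_eval(a, b, flips):
    """AND built from two NAND gates; each gate output is XORed
    with its flip bit to model a gate failure."""
    g1 = (1 - (a & b)) ^ flips[0]    # NAND(a, b)
    g2 = (1 - (g1 & g1)) ^ flips[1]  # NAND(g1, g1) = NOT g1
    return g2

def error_probability(eps, a=1, b=1):
    """Exact P(wrong output) by enumerating all gate-flip events."""
    correct = a & b
    p_wrong = 0.0
    for flips in product((0, 1), repeat=2):
        p = 1.0
        for f in flips:
            p *= eps if f else 1 - eps
        if noisy_eval(a, b, flips) != correct:
            p_wrong += p
    return p_wrong

def satisfies(eps, bound):
    """Probabilistic specification: P(wrong output) <= bound."""
    return error_probability(eps) <= bound

print(error_probability(0.1))  # 2*eps*(1-eps): double flips cancel
```

Note that the enumeration exposes error cancellation (two simultaneous flips yield the correct answer), which is exactly the kind of effect a model checker accounts for automatically.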
Other groups have recently reexamined other types of reliability improvement for digital circuits [10], [11]. They explore the statistical characteristics of R-fold modular redundancy and reconfigurable computing as options for improving reliability. Han and Jonker describe a system architecture based on NAND multiplexing that can increase the reliability of a circuit to a satisfactory level [10]. This method, however, is only viable for ultra-large-scale integration because of the amount of redundancy needed. They also examine the reliability improvement technique of reconfigurable computing and find that it requires even more redundancy than NAND multiplexing. Finally, Nikolik et al. explore the possibility of using R-fold modular redundancy [11]. This technique uses a number R of redundant processing units whose outputs are fed into a majority gate, where R = 3, 5, 7, 9, .... The majority gate then selects the most frequently computed Boolean value as the final output. Han and Jonker show statistically that R-fold modular redundancy and NAND multiplexing are generally less effective than reconfiguration for protecting against manufacturing errors [10]. However, reconfiguration is virtually useless for protecting against transient errors. A comprehensive survey of techniques and architectures for providing defect and fault tolerance to digital circuits can be found in Chapter 10 of a recently published book [12].
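The reliability gain from R-fold modular redundancy can be checked with a short calculation: if each of R independent units is correct with probability p, a perfect majority gate is correct whenever more than half the units agree on the right value. The binomial sum below is a textbook sketch of that argument, not code from [10] or [11].

```python
from math import comb

def majority_reliability(p, R):
    """Probability that a perfect majority vote over R independent
    units (each correct with probability p) yields the correct output."""
    return sum(comb(R, k) * p**k * (1 - p)**(R - k)
               for k in range((R // 2) + 1, R + 1))

# With p = 0.9, increasing R drives the voted output toward certainty.
for R in (1, 3, 5, 9):
    print(R, round(majority_reliability(0.9, R), 4))
```

The calculation also shows the cost side of the tradeoff: each extra step toward certainty requires several additional full copies of the processing unit.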
All the groups mentioned above have studied and provided solutions to the reliability problem caused by transient and permanent faults in computational circuits. However, no group has addressed this reliability issue by focusing on correcting faults due primarily to noisy input signals. In this paper we present a design approach that can nullify the effects of temporary and permanent noise in any digital circuit. Our proposed approach provides a noise-tolerant architecture for designing reliable digital circuits that operate in noisy environments. The paper is organized as follows:

Section 2 introduces the stochastic/probabilistic behavior of digital circuits. The study of the probabilistic behavior of a 1-bit half adder in the stochastic domain reveals that the reliability of digital circuits with noisy signals is generally low. Section 3 discusses a class of digital circuits, called Reliability Enhancement Networks (RENs), that can enhance the reliability of digital circuits. A single REN can increase the reliability of a digital circuit, but greater reliability can be achieved when multiple RENs are cascaded in series to form a Reliability Enhancement Network Chain (RENC), which is also presented in Section 3. Section 4 shows a software simulation of a RENC employed on a 1-bit half adder, and then a conclusion is delivered.
2. PROBABILISTIC BEHAVIOR OF DIGITAL CIRCUITS IN A NOISY ENVIRONMENT
Since the advent of digital circuitry, not only has the size of circuits been reduced, but the design approach has also changed drastically [12], [13]. All input and output signals of noisy digital circuits are considered to be random variables because of the high level of noise bombarding the input signals, which yields a low signal-to-noise ratio [15], [16]. Therefore, a probabilistic model is needed to calculate the outputs of digital circuits using probabilistic inputs and outputs that describe the likelihood of each signal being a logic one [17], [18]. This stochastic model takes the Boolean functions of noisy digital circuits and replaces them with equivalent probability functions [19], [20]. Figure 1 presents a model of digital logic gates with noise applied to the inputs. Note that computing with a probabilistic model is also called stochastic computing [18]; the two terms will be used interchangeably throughout this paper. It should also be noted that when the noise level of a digital circuit is kept below a certain threshold, the circuit can still perform normally. Since creating a noise-free environment for all digital circuits to operate in is obviously not possible, reliable digital circuits remain a challenge to produce.
FIGURE 1: (a) Noisy model of a logic gate. (b) Probabilistic/stochastic computing model, where p(x) is the probability of signal x being a 1.
Sources of noise in digital circuits include thermodynamic fluctuations, electromagnetic interference, radiation, and parameter fluctuations. Noise can affect timing (causing delay failures), increase power consumption, and cause functional failures through signal deviation. The probabilistic behavior of digital circuits in a noisy environment is studied using the stochastic computing model, in which the input and output signals are described by their probabilities of being logic 1 or logic 0.
Let the inputs x1 and x2 of the AND gate be mutually independent with probabilities represented by p1 = p(x1 = 1) and p2 = p(x2 = 1). The output probability can be evaluated using the probability of both events x1 = 1 and x2 = 1, i.e., p(AND) = p1p2. Now consider the OR gate with inputs and outputs labeled the same as our AND gate. The output probability of this OR gate is p(OR) = p1 + p2 - p1p2. The output probability of a NOT gate is p(NOT) = 1 - p1. Note that if p1 = p2 = 1, the inputs become deterministic and the outputs of the AND, OR, and NOT gates become 1, 1, and 0, respectively, as expected. Also note that with p's replaced by x's, i.e., AND: x1x2; OR: x1 + x2 - x1x2; and NOT: 1 - x1, the three output probability expressions become their respective arithmetic expressions. Similarly, the output probability of the XOR gate is p(XOR) = p1 + p2 - 2p1p2, and the arithmetic expression of the output of the XOR gate is x1 + x2 - 2x1x2. The output probabilities of NAND, NOR, and XNOR are 1 minus those of AND, OR, and XOR, respectively. The above equations are listed in Table 1.
TABLE 1: Probabilistic Behavior of Digital Gates
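As a quick sketch (my own code, not from the paper), the Table 1 gate models can be written directly as arithmetic on input probabilities; the function names are mine:

```python
# Probabilistic gate models from Table 1, assuming mutually independent
# inputs, where each argument is the probability of that input being logic 1.

def p_and(p1, p2):
    return p1 * p2                    # both inputs are 1

def p_or(p1, p2):
    return p1 + p2 - p1 * p2          # at least one input is 1

def p_not(p1):
    return 1 - p1                     # output is the complement

def p_xor(p1, p2):
    return p1 + p2 - 2 * p1 * p2      # exactly one input is 1

def p_nand(p1, p2):
    return 1 - p_and(p1, p2)          # complement of AND

# Deterministic inputs (p = 1) recover ordinary logic: AND -> 1, OR -> 1, NOT -> 0.
```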
Consider the 1-bit binary half adder circuit shown in Figure 2. The logic function for the sum S
is S = A XOR B and the logic function for the carry out C is C = AB. The arithmetic expressions
(transformations) of these two functions are p(S) = a + b - 2ab and p(C) = ab, where
Jeremy M. Lakes & Samuel C. Lee
a = p(A) and b = p(B). For example, when a = b = 0.9, p(S) and p(C)
are found to be p(S) = 0.9 + 0.9 - 2(0.9)(0.9) = 0.18 and p(C) = (0.9)(0.9) = 0.81. This means
that the output S has a 0.82 chance of being a 0, and the output probability of C dictates that
it has a 0.81 chance of being a 1. In other words, each output has a fairly high chance
(nearly 20%) of delivering an incorrect answer when noise is accounted for.
FIGURE 2: 1-bit binary half adder circuit
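A minimal numeric check of this example (my own sketch, not the authors' code):

```python
def half_adder_probs(a, b):
    """Return (p(S), p(C)) for independent inputs with P[A=1] = a, P[B=1] = b."""
    p_s = a + b - 2 * a * b   # S = A XOR B
    p_c = a * b               # C = A AND B
    return p_s, p_c

p_s, p_c = half_adder_probs(0.9, 0.9)
# p_s is about 0.18 (so S is 0 with probability about 0.82); p_c is about 0.81.
```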
3. PROPOSED NEW METHOD
The equations given in this section can be used to calculate the required length of the RENC
to meet reliability specifications of a noisy digital circuit. Furthermore, RENC can be applied to
any number of outputs with either the same or varying reliability requirements at each output.
FIGURE 3: Multi-output digital circuit reliability improvements where the value n signifies the number of
RENs used to construct the RENC.
This approach is a new way of looking at the reliability enhancement of digital circuits
because it focuses on correcting the effect of a low signal-to-noise ratio at the outputs of
digital circuits. Digital circuits of any kind could be far more commercially viable if this
low signal-to-noise-ratio problem were remedied with our approach. A class of REN digital
circuits, which are the building blocks of the RENC, is presented in the following section.
3.1 A Class of REN Digital Circuits
A new approach for designing noise-tolerant digital circuits is proposed. For this approach to
be viable we show that some real circuits exist that satisfy all the requirements of an REN.
Our first example of an REN is illustrated in Figure 4. This REN, labeled REN1, has a
probability function that can be derived by using the stochastic model introduced in Section 2.
FIGURE 4: Reliability Enhancement Network 1 (REN1)
FIGURE 5: Reliability Enhancement Network 2 (REN2)
REN1's probability function is derived using Table 1 and can be expressed as
F(X) = (2X - X^2)^2, where X is the probability that the input is logic 1 and F(X) is the
corresponding output probability. Figure 6a gives the plot of REN1's probability function to
visually verify the two required properties from Definition 1. Examining the probability
function of REN1 closely reveals that the output will be lower than the input when the input
to REN1 is lower than Xp. Likewise, the output will be higher than the input when the input to
REN1 is higher than Xp. For example, F(0.2) = 0.1296 < 0.2 and F(0.9) = 0.9801 > 0.9,
meaning that REN1 has the ability to improve circuit output reliability, with the
stochastic threshold (pivot point) being Xp = 0.381966. There is one other REN, REN2, shown
in Figure 5. Its plot, probability function, and pivot point are shown in Figure 6b and
Table 2. Both circuits satisfy the REN requirements and can be used to construct RENCs for
enhancing the reliability of digital circuits.
FIGURE 6: Plot of the probability function of REN1 and REN2
REN Name    Function              Pivot Point
REN1        F(X) = (2X - X^2)^2   Xp = 0.381966
REN2        F(X) = 2X^2 - X^4     Xp = 0.618034
TABLE 2: Probability functions and Xp values for presented RENs
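The pivot points in Table 2 can be checked numerically; this sketch (my code, not the paper's) finds each REN's interior fixed point by bisection. The closed forms are Xp = (3 - sqrt(5))/2 for REN1 and Xp = (sqrt(5) - 1)/2 for REN2.

```python
def ren1(x):
    return (2 * x - x * x) ** 2   # two ORed copies of x, then ANDed

def ren2(x):
    return 2 * x * x - x ** 4     # two ANDed copies of x, then ORed

def pivot(f, lo=1e-6, hi=1 - 1e-6, tol=1e-12):
    """Bisect for the interior fixed point f(x) = x on (0, 1)."""
    g = lambda x: f(x) - x        # g < 0 below the pivot, g > 0 above it
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

Running `pivot(ren1)` and `pivot(ren2)` recovers the Xp values of Table 2.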
Given just these two RENs we now have the capability to construct some useful RENC for the
improvement of output reliability of digital circuits. This will be further explained in the next
section.
3.2 Reliability Enhancement Network Chains
The reliability enhancement capabilities of a single REN are not generally enough to make a
noisy digital circuit reliable. In this case, RENCs are needed and can be constructed by
cascading multiple RENs at an output.
FIGURE 7: Noise-tolerant configuration using RENC
Consider a noisy single output digital circuit whose output is connected to a 2-cell RENC,
illustrated in Figure 7. Let us consider the following four cases: (1) both cells are REN1 of
Figure 4, (2) both cells are REN2 of Figure 5, (3) a REN1 circuit followed by a REN2 circuit
and (4) a REN2 circuit followed by a REN1 circuit. The first two RENCs will be referred to as
homogeneous RENCs (HRENC) and the last two RENCs will be referred to as non-homogeneous
RENCs (NHRENC).
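The four cases can be compared with a small sketch (mine, not the authors'): a chain is simply function composition of the chosen REN probability functions.

```python
def ren1(x):
    return (2 * x - x * x) ** 2

def ren2(x):
    return 2 * x * x - x ** 4

def chain(cells, x):
    """Feed a logic-1 probability x through a RENC given as a list of REN functions."""
    for cell in cells:
        x = cell(x)
    return x

# The four 2-cell chains from the text: two HRENCs and two NHRENCs.
hrenc_11 = [ren1, ren1]
hrenc_22 = [ren2, ren2]
nhrenc_12 = [ren1, ren2]
nhrenc_21 = [ren2, ren1]
```

For an input probability such as 0.9 (above both pivots), every one of the four chains pushes the output closer to a clean logic 1.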
Definition 1
Reliability Enhancement Ranges for logic 0 (RER(0)) and logic 1 (RER(1)) are the ranges
where the stochastic logic 0 and logic 1 can be enhanced by REN or RENC.
Definition 2
Reliability Enhancement Prohibited Range (REPR) is defined as the range of X where the
reliability improvement of an RENC is unpredictable. This range lies between the pivot points
Xp of the different RENs in an RENC, and arises when the RENs used to create an RENC are not
all of the same type.
Definition 3
The Total Reliability Improvement Percentage (TRIP) is the percentage of X values for which
the reliability of a circuit can be enhanced through a REN or RENC. That is, TRIP = 1 - REPR.
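Under one reading of Definitions 2 and 3 (my interpretation; the tabulated Table 3 values are not present in this extract), a homogeneous chain has REPR = 0, while a mixed chain's REPR is the gap between its two pivot points:

```python
XP_REN1 = 0.381966   # pivot of F(X) = (2X - X^2)^2
XP_REN2 = 0.618034   # pivot of F(X) = 2X^2 - X^4

def repr_range(pivots):
    """REPR: width of the band between the smallest and largest pivot used."""
    return max(pivots) - min(pivots)

def trip(pivots):
    """TRIP = 1 - REPR (Definition 3)."""
    return 1 - repr_range(pivots)

# Homogeneous chain (two REN1s): REPR = 0, TRIP = 1.
# Mixed REN1/REN2 chain: REPR is about 0.236, TRIP is about 0.764.
```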
To compare their various reliability enhancement capabilities, we choose an arbitrary example.
The RER(0), RER(1), REPR, and TRIP values of the four 2-cell RENCs are tabulated in Table 3.
TABLE 3: Tabulations of different HRENC and NHRENC configurations
From the results in Table 3, we see that HRENCs and NHRENCs all can deliver desirable reliability.
However, from the computational point of view it is much easier to determine the number of
stages needed for a HRENC to meet a required level of reliability. A design example of a
noise tolerant digital circuit using RENC through simulation is presented in the next section.
4. RELIABILITY ENHANCEMENT NETWORK CHAIN SIMULATION
In this section we present MATLAB simulations of the design examples given in the previous
section. The circuit that we simulated is a half adder. The simulation is a two-variable plot
over the half adder input probabilities p(A) and p(B), used to derive the output S through its
probability function p(S) = a + b - 2ab, which is illustrated in Figure 8a. This plot shows the
output S without any special improvement. This digital circuit clearly needs reliability
improvement; this can be seen in how most points on this graph are not near logic 1 or logic
0.
The second graph (Figure 8b) shows what the final output S looks like after just one REN1 is
attached. The improvement is clearer after two REN1 stages because much more of the graph
area is very close to either logic 1 or logic 0, as shown in Figure 8c.
Improvement is very obvious, as shown in Figure 8d, when we simulate five REN1 stages
after the output S because the amount of graph area very close to logic 1 or logic 0 is greatly
increased and the transition between the two is much sharper. This illustrates how the output
of a stochastic circuit can be made close to deterministic when a well-designed RENC is
employed.
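The sharpening effect can be reproduced without MATLAB; this Python sketch (mine) applies n REN1 stages to the sum output for a single input point.

```python
def p_sum(a, b):
    """Probability that the half adder's sum S is logic 1."""
    return a + b - 2 * a * b

def ren1(x):
    return (2 * x - x * x) ** 2

def sum_after_stages(a, b, n):
    """p(S) after feeding the output through n REN1 cells."""
    x = p_sum(a, b)
    for _ in range(n):
        x = ren1(x)
    return x

# At a = b = 0.9, p(S) starts at 0.18, below the 0.381966 pivot, and is
# driven toward a clean logic 0 as stages are added.
```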
FIGURE 8: Improvement of S with RENC
5. COMPARATIVE EVALUATION OF THE RESULTS
5.1 Comparative Example
Consider a comparative example using the 1-bit binary half adder (HA). Referring to Figure 2,
the probability functions of the 1-bit HA are p(S) = a + b - 2ab and p(C) = ab. Assume
that the probabilistic inputs to A and B are a = b = 0.99. Then p(S) = 0.0198 and
p(C) = 0.9801. Since the S output should be logic 0 here, its probability of a logic 1 can be
converted to reliability by subtracting it from 1, resulting in RS = 0.9802. Now assume that
the design requires the reliability of both RS and RC to be at least 0.99999.
RENC Method:
Suppose that REN1 is to be used, whose probability function is F(X) = (2X - X^2)^2. The
reliabilities of the S and C outputs without improvement are RS = 0.9802 and RC = 0.9801.
These reliabilities are increased to RS = 0.9984627359 and RC = 0.9992081368 with the
addition of one REN1 at each output. The S and C outputs do not meet the reliability
requirement with one REN1 attached, so we must add one more REN1 to each RENC. This
increases the reliability of each output to RS = 0.9999905618 and RC = 0.9999987459, which
are greater than the required reliability. This means we need an RENC of at least
two REN1s at each output of the HA. The reliabilities of RS and RC are tabulated in Table 4.
Number of REN1s    0         1              2
RS                 0.9802    0.9984627359   0.9999905618
RC                 0.9801    0.9992081368   0.9999987459
TABLE 4: Tabulations of RENC for 1-bit HA
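Table 4 can be reproduced with a short script (my sketch, not the paper's code). Note that REN1 acts on the probability of logic 1, so for S (whose correct value is 0) it is applied to 1 - RS:

```python
def ren1(x):
    return (2 * x - x * x) ** 2

a = b = 0.99
r_s = 1 - (a + b - 2 * a * b)   # S should be 0: reliability = P[S = 0]
r_c = a * b                     # C should be 1: reliability = P[C = 1]

table4 = []
for _ in range(3):              # rows for 0, 1, and 2 REN1 stages
    table4.append((r_s, r_c))
    r_s = 1 - ren1(1 - r_s)     # suppress the probability of the wrong value
    r_c = ren1(r_c)
```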
R-Fold Modular Redundancy (RFMR):
To meet the same design reliability requirement that RS and RC be at least 0.99999 using
RFMR, the output reliabilities RS and RC are tabulated for R = 3, 5, and 7 using the binomial
formula: R(sys) = sum over i from (R+1)/2 to R of C(R,i) Rm^i (1 - Rm)^(R-i), where Rm is the
module reliability. The reliability function of 3MR is R(3MR) = 3Rm^2 - 2Rm^3. We find that
the reliability for the S output is RS = 0.9988394 when Rm = 0.9802 and the reliability
for the C output is RC = 0.9988277 when Rm = 0.9801. These reliabilities are
not as high as the design specifies, so we move on to R = 5 (5MR). Using the binomial
formula again, we find the reliability function of 5MR is R(5MR) = 10Rm^3 - 15Rm^4 + 6Rm^5. We
find that the reliability for the S output is RS = 0.9999247 and the reliability for C
becomes RC = 0.9999235. These output reliabilities are still not as high as the
design specifies, so we must try R = 7 (7MR). Repeating the same process we find that the
reliability function of 7MR is R(7MR) = 35Rm^4 - 84Rm^5 + 70Rm^6 - 20Rm^7. Now the
reliabilities of the HA outputs become RS = 0.9999951 and RC = 0.9999950, which both
meet the reliability requirements of the design. A tabulation of the results is given in Table 5.
R      3            5            7
RS     0.9988394    0.9999247    0.9999951
RC     0.9988277    0.9999235    0.9999950
TABLE 5: Tabulations of R-Fold Modular Redundancy on the 1-bit HA
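The RFMR computation generalizes to any odd R with the binomial formula; a sketch (my code, not the paper's):

```python
from math import comb

def rfmr(r_m, R):
    """Reliability of R-fold modular redundancy with a perfect majority voter,
    given module reliability r_m (R odd)."""
    majority = R // 2 + 1
    return sum(comb(R, i) * r_m**i * (1 - r_m)**(R - i)
               for i in range(majority, R + 1))

# With module reliabilities RS = 0.9802 and RC = 0.9801, 3MR and 5MR fall
# short of the 0.99999 target, but 7MR meets it for both outputs.
```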
5.2 Evaluation of the Results
We have now shown that both the RENC method using two REN1s and the 7MR method can meet our
reliability requirements. The RENC method requires only one 1-bit HA and four REN1s. Since
each REN1 requires three logic gates, and if we assume the HA requires two logic gates, the
total number of gates required for this method is 1(2) + 4(3) = 14.
The 7MR circuit requires seven 1-bit HAs and two 7-input majority gates. One way to estimate
the number of gates needed to construct the 7-input majority gate is as follows. The 7-input
majority gate requires a combinational circuit composed of 35 minterms. Each minterm has
four variables, which means each of the 35 minterms uses three 2-input AND gates, for a total
of 105 AND gates. Additionally, the 35 minterms require 34 OR gates to create the sum of
products for the 7-input majority gate, for a total of 139 logic gates. The total number of
gates required for this design is then calculated as 7(2) + 2(139) = 292. Table 6 has a
tabulation of these results. Hence, for this example it is seen that the ratio of gates
required by the RENC method to the RFMR method is approximately 1:20.
Reliability Enhancement Type Logic Gate Count
RENC 14
R-Fold Modular Redundancy 292
TABLE 6: Tabulation of Logic Gate Count
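The gate counts in Table 6 follow from simple bookkeeping; a sketch (mine), assuming 2-input gates and the 2-gate half adder used in the text:

```python
from math import comb

HA_GATES = 2            # assumed gate count for one 1-bit half adder
REN1_GATES = 3          # each REN1 uses three logic gates

# RENC method: one HA plus a 2-cell REN1 chain on each of the two outputs.
renc_total = 1 * HA_GATES + 4 * REN1_GATES

# RFMR method (7MR): seven HAs plus two 7-input majority voters.
minterms = comb(7, 4)                             # 35 four-literal products
majority7_gates = 3 * minterms + (minterms - 1)   # 105 ANDs + 34 ORs = 139
rfmr_total = 7 * HA_GATES + 2 * majority7_gates
```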
6. CONCLUSION
This paper presents a new approach to the design of noise-tolerant digital circuits that utilizes
Reliability Enhancement Network Chains. Our approach to the reliability problem of digital
circuits is novel because it suppresses the effect of environmental noise at the output of a
computational unit rather than using redundant computational units like many traditional
methods. This fact is
very important because employing massive redundancy of computational units to create
reliable digital circuits can negate the benefit of the high chip density allowed by modern
digital technologies. In addition to providing higher chip density, our approach delivers high
reliability in the face of random environmental signal noise by adding a small number of
additional gates to the design. Furthermore, it has been shown that the reliability of digital
circuits approaches deterministic levels as the number of REN stages of the RENC is
sufficiently large.
7. REFERENCES
[1] J. von Neumann. "Probabilistic logics and the synthesis of reliable organisms from
unreliable components," in Automata Studies, C. E. Shannon and J. McCarthy, Eds.
Princeton, NJ: Princeton University Press, 1956, pp. 43-98.
[2] W. Evans and N. Pippenger.“On the maximum tolerable noise for reliable computation
by formulas” IEEE Trans. Inform. Theory, vol. 44, pp. 1299-1305, 1998.
[3] R. L. Dobrushin and S. I. Ortyukov.“Upper bound on the redundancy of self-correcting
arrangements of unreliable functional elements” Prob. Inform. Trans., vol. 13, pp. 203-
218, 1977.
[4] R. L. Dobrushin and S. I. Ortyukov.“Lower bound on the redundancy of self-correcting
arrangements of unreliable functional elements” Prob. Inform. Trans., vol. 13, pp. 59-
65, 1977.
[5] N. Pippenger. "On networks of noisy gates" in Proc. 26th Annu. Symp. Foundations of
Computer Science, pp. 30-38, 1985.
[6] N. Pippenger. “Invariance of complexity measures for networks with unreliable gates” J.
ACM, vol 36, pp. 194-197, 1988.
[7] N. Pippenger. “Reliable computation by formulas in the presence of noise,” IEEE Trans.
Inform. Theory, vol. 34, pp. 194-197, 1988.
[8] G. Norman, D. Parker, M. Kwiatkowska and S. Shukla.”Evaluating reliability of defect
tolerant architecture for nanotechnology using probabilistic model checking” in Proc.
IEEE VLSI Design Conference (IEEE Press, 2004), to appear.
[9] D. Bhaduri and S. K. Shukla. "Reliability evaluation of von Neumann multiplexing based
defect-tolerant majority circuits" Nanotechnology, 2004. 4th IEEE Conference on , vol.,
no., pp. 599-601, 16-19 Aug. 2004.
[10] Jie Han and Pieter Jonker.“A System Architecture Solution for Unreliable
Nanoelectronic Devices” IEEE Trans. on Nanotechnology, vol. 1, no. 4, pp. 201-208,
December 2002.
[11] K. Nikolic, A. Sadek and M. Forshaw. "Architectures for Reliable Computing with
Unreliable Nanodevices" in Proc. IEEE-NANO '01 (IEEE 2001), pp. 254-259.
[12] S. Ahuja, G. Singh, D. Bhaduri, and S. Shukla.”Fault- and Defect-Tolerant
Architectures for Nanocomputing” in Bio-inspired and Nanoscale: Integrated
Computing, M. Wilner, Ed. New Jersey: Wiley, 2009, pp. 262-293.
[13] X. Lu and J. Li.”Probabilistic Modeling of Nanoscale XOR Gate” in International
Conference on Apperceiving Computing and Intelligence Analysis, 2008, pp. 4-7.
[14] X. Lu, J. Li, G. Yang, and X. Song.”Probabilistic Modeling of Nanoscale Adder” in IEEE
International Conference on Integrated Circuit Design and Technology and Tutorial,
2008.
[15] S. C. Lee and L. R. Hook."Toward the design of a nanocomputer" in Proc. IEEE 2006
Canadian Conference on Electrical and Computer Engineering, May 2006.
[16] S. C. Lee and L. R. Hook. "Logic and Computer Design in Nanospace" IEEE
Transactions on Computers, vol. 57, no. 7, pp. 965-977, 2008.
[17] T. Rejimon, K. Lingasurbramanian, and S. Bhanja.“Probabilistic Error Modeling for
Nano-Domain Logic Circuits” IEEE Trans. On VLSI, vol. 17, no. 1, pp. 55-65, January
2009.
[18] W.R. Fahrner, Ed., Nanotechnology and Nanoelectronics: Materials, Devices,
Measurement Techniques. Berlin: Springer, 2005, pp. 18-38.
[19] S. Yanushkevich, V. Shmerko, and S. Lyshevski.Logic Design of NanoICs. Boca Raton:
CRC Press, 2005.
[20] S. Yanushkevich, V. Shmerko, and S. Lyshevski.Computer Arithmetics for
Nanoelectronics. Boca Raton: CRC Press, 2009.