This document summarizes an article from the International Journal of Electronics and Communication Engineering & Technology. The article describes modeling and simulating test data compression using Verilog. It discusses using efficient bitmask selection and dictionary selection techniques to significantly reduce testing time and memory requirements for system-on-chip designs. The techniques aim to generate maximum matching patterns for test data compression by developing a bitmask selection method and efficient dictionary selection method that considers bitmasks.
This document summarizes a research paper on adding error resilience to test data compression techniques for IP core-based system-on-chips (SoCs). It discusses how bit flips in compressed test data can reduce fault coverage and yield. The document proposes adding parity bits to compressed test vectors to detect and correct errors, similar to parity checking for data transmission. This allows erroneous test data to be retransmitted without loss of synchronization or fault coverage. The method aims to improve on existing approaches that only avoid error propagation between test vectors.
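The parity idea described above can be illustrated with a minimal sketch (not the paper's actual scheme, which operates on compressed test vectors over a test access mechanism): an even-parity bit appended to each data word lets the receiver detect a single bit flip and request retransmission.

```python
def add_parity(bits):
    """Append an even-parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def check_parity(word):
    """Return True if the received word has even parity (no single-bit flip detected)."""
    return sum(word) % 2 == 0

vector = [1, 0, 1, 1, 0, 1]        # a slice of (hypothetical) compressed test data
sent = add_parity(vector)
assert check_parity(sent)           # clean transmission passes

corrupted = sent[:]
corrupted[2] ^= 1                   # a single bit flip in transit
assert not check_parity(corrupted)  # detected, so the vector can be retransmitted
```

A single parity bit detects any odd number of flipped bits; error *correction*, as the paper targets, requires additional redundancy.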
A Novel Method for Encoding Data Firmness in VLSI Circuits (Editor IJCATR)
The number of tests, corresponding test data volume and test time increase with each new fabrication process technology.
Higher circuit densities in system-on-chip (SOC) designs have led to a drastic increase in test data volume. Larger test data size demands not only higher memory requirements but also an increase in testing power and time. Test data compression can be used to solve this problem by reducing the test data volume without affecting overall system performance. The original test data is compressed and stored in memory, so the memory size is significantly reduced. The proposed approach combines the selective encoding method and the dictionary-based encoding method to reduce test data volume and test application time. The experiment is done on a combinational benchmark circuit designed using the Tanner tool, and the encoding algorithm is implemented using ModelSim.
Steganography System for Hiding Text and Images Using Improved LSB Method (IRJET Journal)
This document presents a proposed improved LSB steganography system for hiding text and images. The system uses the least significant bit and pseudo-random encoding techniques to embed secret data into cover images. The proposed system was able to achieve a peak signal-to-noise ratio of 49.32 and 94% accuracy when tested on different image sets, outperforming existing LSB substitution methods. The document describes the embedding and extracting algorithms of the proposed system and compares its performance to other steganography techniques based on imperceptibility and robustness.
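As a rough sketch of the two ingredients named above (least-significant-bit substitution plus pseudo-random position selection), and not the paper's actual algorithm, a seeded generator can pick which pixels carry payload bits so that the same seed recovers them:

```python
import random

def embed_lsb(pixels, bits, seed=42):
    """Embed bits into the LSBs of pseudo-randomly chosen pixel positions."""
    stego = pixels[:]
    rng = random.Random(seed)                       # seed acts as a shared stego key
    positions = rng.sample(range(len(pixels)), len(bits))
    for pos, bit in zip(positions, bits):
        stego[pos] = (stego[pos] & ~1) | bit        # overwrite only the LSB
    return stego

def extract_lsb(stego, n_bits, seed=42):
    rng = random.Random(seed)                       # same seed -> same positions
    positions = rng.sample(range(len(stego)), n_bits)
    return [stego[pos] & 1 for pos in positions]

cover = [200, 13, 77, 164, 90, 31, 255, 8]          # toy 8-pixel grayscale "image"
secret = [1, 0, 1, 1]
stego = embed_lsb(cover, secret)
assert extract_lsb(stego, 4) == secret
assert all(abs(a - b) <= 1 for a, b in zip(cover, stego))  # each pixel changes by at most 1
```

Because each pixel value changes by at most 1, distortion stays small, which is why LSB methods score high PSNR values like the 49.32 dB reported above.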
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching, and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars, and students of related fields of Engineering and Technology.
Efficiency of LSB steganography on medical information (IJECEIAES)
The development of the medical field has led to the transformation of communication from paper-based information into digital form. Medical information security has become a great concern as the medical field moves towards the digital world: patient information, disease diagnoses, and so on are all being stored in digital images. It is therefore essential to secure patient information and the growing volume of communication transferred between patients, clients, medical practitioners, and sponsors. The core aim of this research is to provide complete knowledge of the research trends in the LSB steganography technique as applied to securing medical information such as text, images, audio, video, and graphics, and also to discuss the efficiency of the LSB technique. The survey findings show that the LSB steganography technique is efficient in securing medical information from intruders.
Fuzzy Type Image Fusion Using SPIHT Image Compression Technique (IJERA Editor)
This paper presents a fuzzy type image fusion technique using Set Partitioning in Hierarchical Trees (SPIHT).
It is concluded that fusion at higher levels provides better fusion quality. The technique can be used for the fusion of fuzzy images as well as for multi-modal image fusion. The proposed algorithm is simple, easy to implement, and could be used for real-time applications. The paper also provides a comparative study between the proposed and previously existing techniques, and validates the proposed algorithm using Peak Signal to Noise Ratio (PSNR) and Root Mean Square Error (RMSE).
Mining knowledge graphs to map heterogeneous relations between the internet o... (IJECEIAES)
Patterns for the internet of things (IoT), which represent proven solutions to design problems in the IoT, are numerous. Similar to object-oriented design patterns, these IoT patterns have multiple mutual heterogeneous relationships. However, these pattern relationships are hidden and virtually unidentified in most documents. In this paper, we use machine learning techniques to automatically mine knowledge graphs that map the relationships between several IoT patterns. The end result is a semantic knowledge graph database that outlines patterns as vertices and their relations as edges. We have identified four main relationships between the IoT patterns: a pattern is similar to another pattern if it addresses the same use case problem; a large-scale pattern uses a small-scale pattern in a lower-level layer; a large pattern is composed of multiple smaller-scale patterns underneath it; and patterns complement and combine with each other to resolve a given use case problem. Our results show some promising prospects for the use of machine learning techniques to generate an automated repository to organise the IoT patterns, which are usually extracted at various levels of abstraction and granularity.
This document summarizes research on transaction reordering techniques. It discusses transaction reordering approaches based on reducing resource conflicts and increasing resource sharing. Specifically, it covers:
1) A "steal-on-abort" technique that reorders an aborted transaction behind the transaction that caused the abort to avoid repeated conflicts.
2) A replication protocol that attempts to reorder transactions during certification to avoid aborts rather than restarting immediately.
3) Transaction reordering and grouping during continuous data loading to prevent deadlocks when loading data for materialized join views.
Framework for reversible data hiding using cost-effective encoding system for... (IJECEIAES)
The importance of reversible data hiding is always higher than that of conventional data hiding schemes, owing to its ability to generate distortion-free cover media. A review of existing reversible data hiding approaches shows variable schemes focusing mainly on the embedding mechanism; however, such schemes could be further improved using an encoding scheme for optimal embedding performance. The proposed manuscript therefore discusses a cost-effective scheme in which a novel encoding scheme with larger block sizes reduces the dependency on a large number of blocks. Further, a gradient-based image registration technique is applied to ensure higher quality of the reconstructed signal at the decoding end. The study outcome shows that the proposed data hiding technique performs better than existing data hiding schemes, with a good balance between security and restored signal quality upon extraction of the data.
IRJET- High Capacity Reversible Data Hiding in Encrypted Images by MSB Predic... (IRJET Journal)
This document presents two high capacity reversible data hiding methods for encrypted images called CPE-MHCRDH and EPE-MHCRDH. The CPE-MHCRDH method corrects prediction errors in the most significant bits of pixels before encryption, allowing two bits of secret data to be embedded per pixel. The EPE-MHCRDH method directly encrypts the original image and embeds the locations of prediction errors after encryption. Both methods compress the data-hidden encrypted image losslessly using LZ77 compression. Experimental results show the proposed methods achieve better reconstructed image quality and higher embedding capacity than previous reversible data hiding in encrypted image methods.
Hybrid Approach for Improving Data Security and Size Reduction in Image Stega... (IRJET Journal)
This document proposes a new hybrid approach for improving data security and reducing the size of hidden data in image steganography. It uses three techniques: 1) Huffman encoding is applied to compress the text message, 2) DNA encryption is applied to the compressed data, and 3) a state transition algorithm is used to update the pixel locations in the cover image where bits will be hidden. The implementation and evaluation of the proposed technique shows that it provides higher security than traditional techniques like LSB, LF-DCT, and MF-DCT substitution. It is also more efficient and secure while maintaining good image quality as measured by PSNR and MSE metrics.
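The Huffman-encoding step in the pipeline above (compressing the text before encryption and embedding) can be sketched generically; this is a textbook construction using a heap of partial code tables, not the paper's implementation:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a prefix-free code table from symbol frequencies."""
    # Each heap entry: [total frequency, tiebreaker, {symbol: code-so-far}]
    heap = [[freq, i, {sym: ""}] for i, (sym, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)                       # two least-frequent subtrees
        hi = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in lo[2].items()}  # prefix 0 on the light branch
        merged.update({s: "1" + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], count, merged])
        count += 1
    return heap[0][2]

message = "hidden message"
codes = huffman_codes(message)
encoded = "".join(codes[ch] for ch in message)
assert len(encoded) < 8 * len(message)   # shorter than fixed 8-bit ASCII for this text

# Prefix-free property: greedy matching decodes unambiguously.
inverse = {v: k for k, v in codes.items()}
decoded, buf = [], ""
for bit in encoded:
    buf += bit
    if buf in inverse:
        decoded.append(inverse[buf])
        buf = ""
assert "".join(decoded) == message
```

Compressing the payload first, as the hybrid approach does, directly reduces how many cover-image bits must be disturbed, which helps the PSNR and MSE figures cited.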
IRJET-Artificial Neural Networks to Determine Source of Acoustic Emission and... (IRJET Journal)
This document summarizes research using neural networks to analyze acoustic emission data collected from sensors on a concrete wall during controlled cracking tests. The data is divided into 3 types: Type I from cracking tests, Type II from pulsing a sensor, and Type III from ambient noise. A neural network is trained on Type I data to learn correlations between sensor recordings. It is then tested on the other data types and shown to distinguish Type I from the others, indicating it can identify cracking events. While preliminary results are promising, more research is needed to validate the technique for acoustic emission source identification.
Integration of feature sets with machine learning techniques (iaemedu)
This document summarizes a research paper that proposes a novel approach for spam filtering using selective feature sets combined with machine learning techniques. The paper presents an algorithm and system architecture that extracts feature sets from emails and uses machine learning to classify emails and generate rules to identify spam. Several metrics are identified to evaluate the efficiency of the feature sets, including false positive rate. An experiment is described that uses keyword lists as feature sets to train filters and compares the proposed approach to other spam filtering methods.
Adaptive job scheduling with load balancing for workflow application (iaemedu)
This document discusses adaptive job scheduling with load balancing for workflow applications in a grid platform. It begins with an abstract that describes grid computing and how scheduling plays a key role in performance for grid workflow applications. Both static and dynamic scheduling strategies are discussed, but they require high scheduling costs and may not produce good schedules. The paper then proposes a novel semi-dynamic algorithm that allows the schedule to adapt to changes in the dynamic grid environment through both static and dynamic scheduling. Load balancing is incorporated to handle situations where jobs are delayed due to resource fluctuations or overloading of processors. The rest of the paper outlines the related works, proposed scheduling algorithm, system model, and evaluation of the approach.
The document summarizes an adaptive image steganography technique that embeds secret messages into digital images. It proposes using adaptive quantization embedding, where quantization steps for image blocks are optimized to guarantee more data can be embedded in busy image areas with high contrast. The technique embeds adaptive quantization parameters and message bits into the cover image using a difference expanding algorithm. Simulation results showed the proposed scheme can provide a good balance between imperceptibility and embedding capacity.
Deep Convolutional Neural Network based Intrusion Detection System (Sri Ram)
In the present era, cyberspace is growing tremendously, and the intrusion detection system (IDS) plays a key role in ensuring information security. An IDS, which works at the network and host level, should be capable of identifying various malicious attacks. The job of a network-based IDS is to differentiate between normal and malicious traffic and raise an alert in case of an attack. Apart from the traditional signature- and anomaly-based approaches, many researchers have employed various Deep Learning (DL) techniques for detecting intrusions, as DL models are capable of automatically extracting salient features from the input data. The application of the Deep Convolutional Neural Network (DCNN), which is used quite often for solving research problems in the image processing and vision fields, has not been explored much for IDS. In this paper, a DCNN architecture for IDS trained on the KDDCUP 99 data set is proposed. This work also shows that the DCNN-IDS model performs better than other existing works.
Security and imperceptibility improving of image steganography using pixel al... (IJECEIAES)
Information security is one of the main aspects of processes and methodologies in the technical age of information and communication, and it should be a key priority in the secret exchange of information between two parties. Strategies used to ensure information security include steganography and cryptography. This work improves an effective digital image-steganographic method based on odd/even pixel allocation and a random function to increase security and imperceptibility, and the newly developed outline has been verified against the existing problems. Huffman coding is used to modify the secret data prior to the embedding stage; this modified equivalent of the secret data protects it from attackers and increases the secret data capacity. The main objective of the scheme is to boost the peak signal-to-noise ratio (PSNR) of the stego cover and to resist attacks, while also increasing the size of the secret data that can be carried. The results confirm good PSNR values, and these findings confirm the eligibility of the proposed method.
This document proposes an improved steganography approach using color-guided channels in digital images. It begins with an introduction to steganography and discusses how it can be used to hide secret data or messages within cover objects like images, video, or audio files. The proposed method embeds data into a color image's RGB channels. It first converts the secret message to a binary bit stream and compresses it using run length encoding. The data is then embedded directly into the LSBs of some channels and indirectly into other channels by encoding counts. This approach aims to improve the visual quality of the stego image and have higher embedding capacity compared to existing methods.
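The run-length-encoding step mentioned above (compressing the binary bit stream before embedding) can be sketched in its simplest form; the pair-based representation below is illustrative, not the paper's exact encoding:

```python
def rle_encode(bits):
    """Encode a bit string as (bit, run-length) pairs."""
    runs = []
    i = 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1                      # extend the current run
        runs.append((bits[i], j - i))
        i = j
    return runs

def rle_decode(runs):
    return "".join(bit * length for bit, length in runs)

stream = "0000001111100000111"          # a toy secret-message bit stream
runs = rle_encode(stream)
assert runs == [("0", 6), ("1", 5), ("0", 5), ("1", 3)]
assert rle_decode(runs) == stream       # lossless round trip
```

RLE pays off exactly when the bit stream has long homogeneous runs, which is why it is paired here with encoding counts into some channels rather than raw bits.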
A FUZZY MATHEMATICAL MODEL FOR PERFORMANCE TESTING IN CLOUD COMPUTING USING US... (ijseajournal)
Software testing is an integral part of the software product development life cycle. Conventional testing requires dedicated infrastructure and resources that are expensive and only used sporadically. With the growing complexity of business applications, it is harder to build and maintain in-house testing facilities that mimic real-time environments. Cloud computing, by nature, provides effectively unlimited resources along with flexibility, scalability, and the availability of a distributed testing environment, and has therefore opened up new opportunities for software testing. It leads to cost-effective solutions by reducing the execution time of large application testing. As part of the infrastructure resource, cloud testing can attain its efficiency by taking care of parameters such as network traffic, disk storage, and RAM speed. In this paper we propose a new fuzzy mathematical model to attain better scope for the above parameters.
This document summarizes a research paper that proposes a solution for detecting Distributed Denial of Service (DDoS) attacks in cloud computing environments. The solution involves deploying Intrusion Detection Systems (IDSs) in each virtual machine to detect attacks. When attacks are detected, alerts are stored in a database in the Cloud Fusion Unit on the front-end server. The Cloud Fusion Unit then analyzes the alerts from the multiple IDSs using Dempster-Shafer theory and fault tree analysis. It combines the evidence from the different IDSs using Dempster's combination rule to fuse the data and reduce false alarms. The goal is to more accurately detect DDoS attacks targeting cloud services by correlating alerts from the multiple IDSs.
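Dempster's combination rule, mentioned above, can be shown on a toy two-hypothesis frame {attack, normal}; the mass values are invented for illustration and the frame is far simpler than the paper's fault-tree setting:

```python
def dempster_combine(m1, m2):
    """Combine two basic probability assignments over the same frame.
    Keys are frozensets of hypotheses; mass on the full frame models uncertainty."""
    combined = {}
    conflict = 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2          # mass assigned to contradictory evidence
    # Normalise by the non-conflicting mass (Dempster's rule).
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

A, N = frozenset({"attack"}), frozenset({"normal"})
THETA = A | N                                # total ignorance
ids1 = {A: 0.6, N: 0.1, THETA: 0.3}          # hypothetical evidence from IDS on VM 1
ids2 = {A: 0.7, N: 0.2, THETA: 0.1}          # hypothetical evidence from IDS on VM 2
fused = dempster_combine(ids1, ids2)
assert abs(sum(fused.values()) - 1.0) < 1e-9
assert fused[A] > max(ids1[A], ids2[A])      # agreeing sensors strengthen belief in attack
```

The normalisation step is what suppresses conflicting (likely false-alarm) evidence, which is the paper's motivation for fusing alerts rather than trusting any single IDS.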
Towards A More Secure Web Based Tele Radiology System: A Steganographic Approach (CSCJournals)
While it is possible to make a patient's medical images available to a practicing radiologist online e.g. through open network systems inter connectivity and email attachments, these methods don't guarantee the security, confidentiality and tamper free reliability required in a medical information system infrastructure. The possibility of securely and covertly transmitting such medical images remotely for clinical interpretation and diagnosis through a secure steganographic technique was the focus of this study.
We propose a method that uses an Enhanced Least Significant Bit (ELSB) steganographic insertion method to embed a patient's Medical Image (MI) in the spatial domain of a cover digital image, and his/her health records in the frequency domain of the same cover image as a watermark, to ensure tamper detection and non-repudiation. The ELSB method uses the Mersenne Twister (MT) Pseudo Random Number Generator (PRNG) to randomly embed and conceal the patient's data in the cover image. This technique significantly increases the imperceptibility of the hidden information to steganalysis, thereby enhancing the security of the embedded patient's data.
In measuring the effectiveness of the proposed method, the study adopted the Design Science Research (DSR) methodology, a paradigm for problem solving in computing and Information Systems (IS) that involves design and implementation of artefacts and methods considered novel and the analytical testing of the performance of such artefacts in pursuit of understanding and enhancing an existing method, artefact or practice.
The fidelity measures of the stego images from the proposed method were compared with those from the traditional Least Significant Bit (LSB) method in order to establish the imperceptibility of the embedded information. The results demonstrated improvements of between 1 to 2.6 decibels (dB) in the Peak Signal to Noise Ratio (PSNR), and up to 0.4 MSE ratios for the proposed method.
Reversible Image Data Hiding with Contrast Enhancement (IRJET Journal)
This document proposes a reversible image data hiding technique with contrast enhancement. It aims to embed data into a cover image in a reversible manner while also enhancing the contrast of the cover image. The technique first calculates prediction errors of pixel values in the cover image. It then generates a histogram of the prediction errors and selects carriers for data embedding from peaks in the histogram. Binary secret data is embedded into the carriers by dynamically shifting the prediction error histogram. This allows data to be embedded while increasing cover image quality compared to other reversible data hiding methods. The original cover image can be recovered by extracting the embedded data and reversing the histogram shifts. The technique is meant to achieve a higher peak signal-to-noise ratio than the original cover image after data embedding.
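The histogram-shifting mechanism described above can be sketched on a one-dimensional list of prediction errors; this is the generic single-peak construction, not the paper's dynamic multi-peak variant, and the error values are made up:

```python
def hs_embed(errors, bits, peak):
    """Embed bits by histogram shifting around the peak prediction error.
    Errors greater than the peak shift up by 1 to make room; an error equal
    to the peak carries one payload bit (stays at peak for 0, moves to peak+1 for 1)."""
    out, k = [], 0
    for e in errors:
        if e > peak:
            out.append(e + 1)                 # shift to vacate the peak+1 bin
        elif e == peak and k < len(bits):
            out.append(e + bits[k])           # peak bin carries the payload
            k += 1
        else:
            out.append(e)
    return out

def hs_extract(marked, n_bits, peak):
    """Recover the payload and restore the original errors exactly (reversibility)."""
    bits, restored, k = [], [], 0
    for e in marked:
        if e in (peak, peak + 1):
            if k < n_bits:
                bits.append(e - peak)
                k += 1
            restored.append(peak)
        elif e > peak + 1:
            restored.append(e - 1)            # undo the shift
        else:
            restored.append(e)
    return bits, restored

errors = [0, 1, 0, 3, 0, -2, 0, 2]            # toy prediction errors; peak bin is 0
bits = [1, 0, 1, 1]
marked = hs_embed(errors, bits, peak=0)
rec_bits, rec_errors = hs_extract(marked, len(bits), peak=0)
assert rec_bits == bits and rec_errors == errors   # lossless recovery of both
```

Capacity equals the height of the chosen peak bin, which is why the paper selects carriers from histogram peaks.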
IRJET- Emotion Recognition using Facial Expression (IRJET Journal)
This document discusses emotion recognition using facial expressions. It begins with an introduction to how facial expressions are an important mode of human communication and expression of emotions. It then discusses previous work in recognizing basic emotions like happiness, sadness, anger, etc. from facial features. The proposed system uses the Inception V3 model with transfer learning and the CK+ dataset to classify images into seven emotion categories. The results found high accuracy in classifying emotions from facial images.
New Design Architecture of Chaotic Secure Communication System Combined with ... (ijtsrd)
In this paper, the exponential synchronization of secure communication system is introduced and a novel secure communication design combined with linear receiver is constructed to ensure the global exponential stability of the resulting error signals. Besides, the guaranteed exponential convergence rate of the proposed secure communication system can be correctly calculated. Finally, some numerical simulations are offered to demonstrate the correctness and feasibility of the obtained results. Yeong-Jeu Sun "New Design Architecture of Chaotic Secure Communication System Combined with Linear Receiver" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-5 | Issue-1 , December 2020, URL: https://www.ijtsrd.com/papers/ijtsrd38214.pdf Paper URL : https://www.ijtsrd.com/engineering/electrical-engineering/38214/new-design-architecture-of-chaotic-secure-communication-system-combined-with-linear-receiver/yeongjeu-sun
This document discusses mapping networks onto a hybrid Network-on-Chip (NoC) architecture that integrates packet switching, circuit switching, and virtual circuit switching. It reviews prior work on mapping applications onto NoC architectures to optimize performance, energy consumption, and latency. The paper proposes a hybrid scheme to map cores and communications onto different switching mechanisms in the NoC to balance latency, flexibility, and efficiency.
Reversible Data Hiding In Encrypted Images And Its Application To Secure Miss... (CSCJournals)
This paper proposes reversible data hiding in encrypted images for secure missile launching. The work is presented in two stages: one involves encryption of the cover image by a block cipher algorithm, and the other is embedding secure data related to missile launching. For embedding data, vacant pixels are identified by the Slepian-Wolf encoding method, along with an embedding key to hide the data. At the receiving end, the original cover image is recovered using the decryption algorithm and the secret data is extracted. The performance analysis is presented by calculating the MSE, PSNR, and SSIM parameters.
Integrating Fusion levels for Biometric Authentication System (IOSRJECE)
Recently, a lot of work on multimodal biometric authentication has been presented in the literature, and such biometric systems have been widely accepted, with increasing accuracy rates and population coverage reducing the vulnerability to spoofing. This paper describes a proposed multimodal biometric system that combines feature-extraction-level and score-level fusion of iris and face unimodal biometric systems in order to take advantage of both fusion techniques. The experimental results show the performance of the multimodal and multilevel fusion techniques, which are analysed using TRR and TAR to study the recognition behaviour of the proposed system. From the ROC curve plotted, the performance of the proposed system is better than that of the individual fusion techniques.
This document summarizes various image compression techniques including lossy and lossless methods. For lossy compression, it discusses transformation coding techniques like discrete cosine transform (DCT) and discrete wavelet transform (DWT). It also covers vector quantization, fractal coding, block truncation coding, and subband coding. For lossless techniques, it describes run length encoding, Huffman coding, LZW coding, and area coding. Finally, it provides an overview of using neural networks for image compression, including backpropagation networks, hierarchical/adaptive networks, and multilayer perceptron networks.
This document summarizes an article from the International Journal of Civil Engineering and Technology that examines optimizing cropping patterns in an irrigation project area. It begins with background on the journal and outlines the objectives of determining optimal cropping patterns given land and water availability. It then reviews previous related studies and describes the study area, which is the Mayurakshi Command Area in India. Key concepts are introduced, such as dividing the area into blocks based on agronomic, economic, climatic, and rainfall factors. The problem is formulated with the objective of maximizing net return, defined as revenue from crop sales minus expenses. Equations are provided to calculate net return per unit area for each crop, soil class, and block based on yield and prices.
This document summarizes a research paper that presents a novel U-slot loaded circular microstrip antenna design for triple band operation. Key points:
- The antenna is designed to operate between 5.57-8.34 GHz with a maximum bandwidth of 10.34% and peak gain of 1.85 dB.
- It uses a low-cost glass epoxy substrate and a simple fabrication process.
- Experimental results show the antenna achieves triple band operation through the addition of a U-shaped slot on the circular patch and an H-shaped slot on the ground plane.
- Radiation patterns were broadside and linearly polarized across the operating bands.
- The simple design makes this
Framework for reversible data hiding using cost-effective encoding system for... (IJECEIAES)
Reversible data hiding is preferable to conventional data hiding schemes because it can restore the cover media without distortion. A review of existing reversible data hiding approaches shows that most schemes focus on the embedding mechanism, yet their embedding performance could be further improved with a suitable encoding scheme. The proposed manuscript therefore discusses a cost-effective scheme in which a novel encoding method operates on larger block sizes, reducing the dependency on a large number of blocks. Further, a gradient-based image registration technique is applied to ensure higher quality of the reconstructed signal at the decoding end. The study outcome shows that the proposed data hiding technique outperforms existing data hiding schemes, with a good balance between security and restored signal quality upon extraction of the data.
IRJET- High Capacity Reversible Data Hiding in Encrypted Images by MSB Predic... (IRJET Journal)
This document presents two high capacity reversible data hiding methods for encrypted images called CPE-MHCRDH and EPE-MHCRDH. The CPE-MHCRDH method corrects prediction errors in the most significant bits of pixels before encryption, allowing two bits of secret data to be embedded per pixel. The EPE-MHCRDH method directly encrypts the original image and embeds the locations of prediction errors after encryption. Both methods compress the data-hidden encrypted image losslessly using LZ77 compression. Experimental results show the proposed methods achieve better reconstructed image quality and higher embedding capacity than previous reversible data hiding in encrypted image methods.
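The lossless LZ77 step both methods rely on can be illustrated with a toy sliding-window coder (a minimal Python sketch for illustration only, not the paper's implementation; practical LZ77 variants bit-pack the triples and bound the match length):

```python
def lz77_compress(data, window=255):
    # Emit (offset, length, next_char) triples over a sliding window.
    i, out = 0, []
    while i < len(data):
        best_off, best_len = 0, 0
        for j in range(max(0, i - window), i):
            l = 0
            # Leave at least one character for the literal that ends the triple.
            while i + l < len(data) - 1 and data[j + l] == data[i + l]:
                l += 1
            if l > best_len:
                best_off, best_len = i - j, l
        out.append((best_off, best_len, data[i + best_len]))
        i += best_len + 1
    return out

def lz77_decompress(triples):
    out = []
    for off, length, ch in triples:
        for _ in range(length):
            out.append(out[-off])   # back-reference; handles overlapping copies
        out.append(ch)
    return ''.join(out)
```

Round-tripping any string through `lz77_compress` and `lz77_decompress` returns the original, which is the lossless property the paper depends on.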
Hybrid Approach for Improving Data Security and Size Reduction in Image Stega... (IRJET Journal)
This document proposes a new hybrid approach for improving data security and reducing the size of hidden data in image steganography. It uses three techniques: 1) Huffman encoding is applied to compress the text message, 2) DNA encryption is applied to the compressed data, and 3) a state transition algorithm is used to update the pixel locations in the cover image where bits will be hidden. The implementation and evaluation of the proposed technique shows that it provides higher security than traditional techniques like LSB, LF-DCT, and MF-DCT substitution. It is also more efficient and secure while maintaining good image quality as measured by PSNR and MSE metrics.
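The Huffman-encoding step used to compress the text message can be sketched as follows (an illustrative Python version built on a heap; the paper's actual encoder is not specified at this level of detail):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    # Build a Huffman tree bottom-up and return a {char: bitstring} table.
    heap = [[freq, i, ch] for i, (ch, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    n = len(heap)
    if n == 1:                       # degenerate single-symbol input
        return {heap[0][2]: '0'}
    while len(heap) > 1:
        lo = heapq.heappop(heap)     # two least-frequent nodes merge
        hi = heapq.heappop(heap)
        heapq.heappush(heap, [lo[0] + hi[0], n, (lo, hi)])
        n += 1                       # tie-breaker index keeps comparisons valid
    codes = {}
    def walk(node, prefix):
        if isinstance(node[2], tuple):
            walk(node[2][0], prefix + '0')
            walk(node[2][1], prefix + '1')
        else:
            codes[node[2]] = prefix
    walk(heap[0], '')
    return codes
```

Frequent symbols receive short codes, so the concatenated bitstream is shorter than fixed-width encoding, which is what shrinks the payload before the DNA-encryption stage.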
IRJET-Artificial Neural Networks to Determine Source of Acoustic Emission and... (IRJET Journal)
This document summarizes research using neural networks to analyze acoustic emission data collected from sensors on a concrete wall during controlled cracking tests. The data is divided into 3 types: Type I from cracking tests, Type II from pulsing a sensor, and Type III from ambient noise. A neural network is trained on Type I data to learn correlations between sensor recordings. It is then tested on the other data types and shown to distinguish Type I from the others, indicating it can identify cracking events. While preliminary results are promising, more research is needed to validate the technique for acoustic emission source identification.
Integration of feature sets with machine learning techniques (iaemedu)
This document summarizes a research paper that proposes a novel approach for spam filtering using selective feature sets combined with machine learning techniques. The paper presents an algorithm and system architecture that extracts feature sets from emails and uses machine learning to classify emails and generate rules to identify spam. Several metrics are identified to evaluate the efficiency of the feature sets, including false positive rate. An experiment is described that uses keyword lists as feature sets to train filters and compares the proposed approach to other spam filtering methods.
Adaptive job scheduling with load balancing for workflow application (iaemedu)
This document discusses adaptive job scheduling with load balancing for workflow applications in a grid platform. It begins with an abstract that describes grid computing and how scheduling plays a key role in performance for grid workflow applications. Both static and dynamic scheduling strategies are discussed, but they require high scheduling costs and may not produce good schedules. The paper then proposes a novel semi-dynamic algorithm that allows the schedule to adapt to changes in the dynamic grid environment through both static and dynamic scheduling. Load balancing is incorporated to handle situations where jobs are delayed due to resource fluctuations or overloading of processors. The rest of the paper outlines the related works, proposed scheduling algorithm, system model, and evaluation of the approach.
The document summarizes an adaptive image steganography technique that embeds secret messages into digital images. It proposes using adaptive quantization embedding, where quantization steps for image blocks are optimized to guarantee more data can be embedded in busy image areas with high contrast. The technique embeds adaptive quantization parameters and message bits into the cover image using a difference expanding algorithm. Simulation results showed the proposed scheme can provide a good balance between imperceptibility and embedding capacity.
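The difference-expanding step mentioned above is commonly realized as Tian's difference expansion; a minimal sketch of that classic scheme on a single pixel pair (assuming it matches the paper's variant — overflow handling for pixels near 0/255 is omitted):

```python
def de_embed(x, y, bit):
    # Tian's difference expansion: double the pair difference and
    # place one payload bit in its new least significant position.
    l = (x + y) // 2
    h = x - y
    h2 = 2 * h + bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2, y2):
    # Recover the bit from the expanded difference, then undo the expansion.
    l = (x2 + y2) // 2
    h2 = x2 - y2
    bit = h2 & 1
    h = h2 >> 1          # arithmetic shift = floor division, also for negatives
    return (l + (h + 1) // 2, l - h // 2), bit
```

Because the average `l` is preserved and the difference is exactly halved on extraction, the original pair is recovered bit-for-bit, which is the reversibility property the scheme needs.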
Deep Convolutional Neural Network based Intrusion Detection System (Sri Ram)
In the present era, cyberspace is growing tremendously, and the Intrusion Detection System (IDS) plays a key role in ensuring information security. The IDS, which works at the network and host levels, should be capable of identifying various malicious attacks. The job of a network-based IDS is to differentiate between normal and malicious traffic data and raise an alert in case of an attack. Apart from the traditional signature-based and anomaly-based approaches, many researchers have employed various Deep Learning (DL) techniques for detecting intrusions, as DL models are capable of extracting salient features automatically from the input data. The application of the Deep Convolutional Neural Network (DCNN), which is used quite often for solving research problems in the image processing and vision fields, has not been explored much for IDS. In this paper, a DCNN architecture for IDS trained on the KDDCUP 99 data set is proposed. This work also shows that the DCNN-IDS model outperforms other existing works.
Security and imperceptibility improving of image steganography using pixel al... (IJECEIAES)
Information security is one of the main aspects of processes and methodologies in the technical age of information and communication. The security of information should be a key priority in the secret exchange of information between two parties. To ensure the security of information, strategies such as steganography and cryptography are used. In this work, an effective digital image steganographic method based on odd/even pixel allocation and a random function has been developed to increase security and imperceptibility. The newly developed scheme has been verified against the problems identified in existing methods. Huffman coding is used to encode the secret data prior to the embedding stage; this encoded form both conceals the secret data from attackers and increases the secret data capacity. The main objectives of the scheme are to boost the peak signal-to-noise ratio (PSNR) of the stego cover, to withstand attacks, and to increase the size of the secret data that can be carried. The results confirm good PSNR values, and these findings support the eligibility of the proposed method.
This document proposes an improved steganography approach using color-guided channels in digital images. It begins with an introduction to steganography and discusses how it can be used to hide secret data or messages within cover objects like images, video, or audio files. The proposed method embeds data into a color image's RGB channels. It first converts the secret message to a binary bit stream and compresses it using run length encoding. The data is then embedded directly into the LSBs of some channels and indirectly into other channels by encoding counts. This approach aims to improve the visual quality of the stego image and have higher embedding capacity compared to existing methods.
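The compress-then-embed pipeline described here can be sketched with a toy run-length encoder and plain LSB substitution (illustrative only; the paper's per-channel direct/indirect embedding logic is omitted, and the pixel list stands in for one image channel):

```python
def rle_encode(bits):
    # Run-length encode a bit string as [bit, count] pairs.
    out = []
    for b in bits:
        if out and out[-1][0] == b:
            out[-1][1] += 1
        else:
            out.append([b, 1])
    return out

def lsb_embed(pixels, bits):
    # Overwrite the least significant bit of each pixel with a payload bit.
    assert len(bits) <= len(pixels), "cover too small"
    return [(p & ~1) | int(b) for p, b in zip(pixels, bits)] + pixels[len(bits):]

def lsb_extract(pixels, n):
    # Read back the first n least significant bits.
    return ''.join(str(p & 1) for p in pixels[:n])
```

Each pixel changes by at most 1 in value, which is why LSB substitution keeps the stego image visually close to the cover.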
A FUZZY MATHEMATICAL MODEL FOR PERFORMANCE TESTING IN CLOUD COMPUTING USING US... (ijseajournal)
Software testing is an integral part of the software product development life cycle. Conventional testing requires dedicated infrastructure and resources that are expensive and only used sporadically. With the growing complexity of business applications, it is harder to build and maintain in-house testing facilities that mimic real-time environments. By nature, cloud computing provides virtually unlimited resources along with the flexibility, scalability and availability of a distributed testing environment, and has thus opened up new opportunities for software testing. It leads to cost-effective solutions by reducing the execution time of large application testing. As part of the infrastructure resource, cloud testing can attain its efficiency by taking care of parameters such as network traffic, disk storage and RAM speed. In this paper we propose a new fuzzy mathematical model to attain better scope for these parameters.
This document summarizes a research paper that proposes a solution for detecting Distributed Denial of Service (DDoS) attacks in cloud computing environments. The solution involves deploying Intrusion Detection Systems (IDSs) in each virtual machine to detect attacks. When attacks are detected, alerts are stored in a database in the Cloud Fusion Unit on the front-end server. The Cloud Fusion Unit then analyzes the alerts from the multiple IDSs using Dempster-Shafer theory and fault tree analysis. It combines the evidence from the different IDSs using Dempster's combination rule to fuse the data and reduce false alarms. The goal is to more accurately detect DDoS attacks targeting cloud services by correl
Towards A More Secure Web Based Tele Radiology System: A Steganographic Approach (CSCJournals)
While it is possible to make a patient's medical images available to a practicing radiologist online e.g. through open network systems inter connectivity and email attachments, these methods don't guarantee the security, confidentiality and tamper free reliability required in a medical information system infrastructure. The possibility of securely and covertly transmitting such medical images remotely for clinical interpretation and diagnosis through a secure steganographic technique was the focus of this study.
We propose a method that uses an Enhanced Least Significant Bit (ELSB) steganographic insertion method to embed a patient's Medical Image (MI) in the spatial domain of a cover digital image, and his/her health records in the frequency domain of the same cover image as a watermark, to ensure tamper detection and non-repudiation. The ELSB method uses the Mersenne Twister (MT) Pseudo Random Number Generator (PRNG) to randomly embed and conceal the patient's data in the cover image. This technique significantly increases the imperceptibility of the hidden information to steganalysis, thereby enhancing the security of the embedded patient's data.
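The key idea of the ELSB method — deriving the same pseudo-random embedding positions on both sides from a shared key — can be sketched as follows. Python's `random.Random` is itself a Mersenne Twister (MT19937), so it serves as a stand-in for the paper's PRNG (the function name and key value are hypothetical):

```python
import random

def embed_positions(key, num_pixels, payload_bits):
    # Seed an MT19937 generator with the shared key; embedder and
    # extractor derive identical, collision-free pixel positions.
    rng = random.Random(key)
    return rng.sample(range(num_pixels), payload_bits)
```

Scattering the payload this way removes the sequential LSB pattern that simple steganalysis looks for, while the shared key keeps extraction deterministic.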
In measuring the effectiveness of the proposed method, the study adopted the Design Science Research (DSR) methodology, a paradigm for problem solving in computing and Information Systems (IS) that involves design and implementation of artefacts and methods considered novel and the analytical testing of the performance of such artefacts in pursuit of understanding and enhancing an existing method, artefact or practice.
The fidelity measures of the stego images from the proposed method were compared with those from the traditional Least Significant Bit (LSB) method in order to establish the imperceptibility of the embedded information. The results demonstrated improvements of between 1 and 2.6 decibels (dB) in the Peak Signal to Noise Ratio (PSNR), and of up to 0.4 in the MSE ratio, for the proposed method.
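For reference, the MSE and PSNR fidelity measures used in the comparison are computed as follows (a plain-Python sketch over flattened 8-bit pixel lists):

```python
import math

def mse(a, b):
    # Mean squared error between two equal-length pixel sequences.
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=255):
    # Peak signal-to-noise ratio in dB; infinite for identical images.
    e = mse(a, b)
    return float('inf') if e == 0 else 10 * math.log10(peak * peak / e)
```

Higher PSNR means the stego image is closer to the cover, so the reported 1–2.6 dB gain corresponds directly to lower embedding distortion.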
Reversible Image Data Hiding with Contrast Enhancement (IRJET Journal)
This document proposes a reversible image data hiding technique with contrast enhancement. It aims to embed data into a cover image in a reversible manner while also enhancing the contrast of the cover image. The technique first calculates prediction errors of pixel values in the cover image. It then generates a histogram of the prediction errors and selects carriers for data embedding from peaks in the histogram. Binary secret data is embedded into the carriers by dynamically shifting the prediction error histogram. This allows data to be embedded while increasing cover image quality compared to other reversible data hiding methods. The original cover image can be recovered by extracting the embedded data and reversing the histogram shifts. The technique is meant to achieve a higher peak signal-to-noise ratio than the original cover image after data
IRJET- Emotion Recognition using Facial Expression (IRJET Journal)
This document discusses emotion recognition using facial expressions. It begins with an introduction to how facial expressions are an important mode of human communication and expression of emotions. It then discusses previous work in recognizing basic emotions like happiness, sadness, anger, etc. from facial features. The proposed system uses the Inception V3 model with transfer learning and the CK+ dataset to classify images into seven emotion categories. The results found high accuracy in classifying emotions from facial images.
New Design Architecture of Chaotic Secure Communication System Combined with ... (ijtsrd)
In this paper, the exponential synchronization of secure communication system is introduced and a novel secure communication design combined with linear receiver is constructed to ensure the global exponential stability of the resulting error signals. Besides, the guaranteed exponential convergence rate of the proposed secure communication system can be correctly calculated. Finally, some numerical simulations are offered to demonstrate the correctness and feasibility of the obtained results. Yeong-Jeu Sun "New Design Architecture of Chaotic Secure Communication System Combined with Linear Receiver" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-5 | Issue-1 , December 2020, URL: https://www.ijtsrd.com/papers/ijtsrd38214.pdf Paper URL : https://www.ijtsrd.com/engineering/electrical-engineering/38214/new-design-architecture-of-chaotic-secure-communication-system-combined-with-linear-receiver/yeongjeu-sun
This document discusses mapping networks onto a hybrid Network-on-Chip (NoC) architecture that integrates packet switching, circuit switching, and virtual circuit switching. It reviews prior work on mapping applications onto NoC architectures to optimize performance, energy consumption, and latency. The paper proposes a hybrid scheme to map cores and communications onto different switching mechanisms in the NoC to balance latency, flexibility, and efficiency.
Reversible Data Hiding In Encrypted Images And Its Application To Secure Miss... (CSCJournals)
This paper proposes reversible data hiding in encrypted images for secure missile launching. The work is presented in two stages: one involves encryption of the cover image by a block cipher algorithm, and the other embeds secure data related to missile launching. For embedding, vacant pixels are identified by the Slepian-Wolf encoding method, and an embedding key is used to hide the data. At the other end, the original cover image is recovered using the decryption algorithm and the secret data is extracted. The performance analysis is presented by calculating the MSE, PSNR and SSIM parameters.
Integrating Fusion levels for Biometric Authentication System (IOSRJECE)
Recently, many works on multimodal biometric authentication have been presented in the literature. Such biometric systems have been widely accepted, with increasing accuracy rates and population coverage reducing the vulnerability to spoofing. This paper describes a proposed multimodal biometric system that combines feature-extraction-level and score-level fusion of iris and face unimodal biometric systems in order to take advantage of both fusion techniques. The experimental results show the performance of the multimodal and multilevel fusion techniques, analysed using TRR and TAR to study the recognition behaviour of the proposed system. The ROC curve plotted shows that the performance of the proposed system is better than that of the individual fusion techniques.
This document summarizes various image compression techniques including lossy and lossless methods. For lossy compression, it discusses transformation coding techniques like discrete cosine transform (DCT) and discrete wavelet transform (DWT). It also covers vector quantization, fractal coding, block truncation coding, and subband coding. For lossless techniques, it describes run length encoding, Huffman coding, LZW coding, and area coding. Finally, it provides an overview of using neural networks for image compression, including backpropagation networks, hierarchical/adaptive networks, and multilayer perceptron networks.
This document summarizes an article from the International Journal of Civil Engineering and Technology that examines optimizing cropping patterns in an irrigation project area. It begins with background on the journal and outlines the objectives of determining optimal cropping patterns given land and water availability. It then reviews previous related studies and describes the study area which is the Mayurakshi Command Area in India. Key concepts are introduced, such as dividing the area into blocks based on agronomic, economic, climatic and rainfall factors. The problem is formulated with the objective of maximizing net return defined as revenue from crop sales minus expenses. Equations are provided to calculate net return per unit area for each crop, soil class, and block based on yield, prices
This document summarizes a research paper that presents a novel U-slot loaded circular microstrip antenna design for triple band operation. Key points:
- The antenna is designed to operate between 5.57-8.34 GHz with a maximum bandwidth of 10.34% and peak gain of 1.85 dB.
- It uses a low-cost glass epoxy substrate and a simple fabrication process.
- Experimental results show the antenna achieves triple band operation through the addition of a U-shaped slot on the circular patch and an H-shaped slot on the ground plane.
- Radiation patterns were broadside and linearly polarized across the operating bands.
- The simple design makes this
The document summarizes an experimental investigation into the performance and emissions of a diesel engine fueled with preheated corn oil methyl ester (COME) biodiesel at different temperatures. COME was produced via transesterification of corn oil with methanol. The engine was tested using diesel and blends of preheated COME at 50°C, 70°C, and 90°C. Brake thermal efficiency increased and BSFC decreased with COME preheated to 70°C due to improved combustion from reduced viscosity. Exhaust emissions of CO and HC decreased but NOx increased with COME. Performance generally decreased as the COME percentage in blends rose. Preheating COME to 70°C allowed
The techniques and methods adopted in the medical college libraries located (IAEME Publication)
This document summarizes a study on the techniques and methods adopted in medical college libraries located in southern Karnataka, India. It finds that while the libraries generally meet basic needs, there is room for improvement in adapting new tools and techniques, training librarians, increasing staff and resources, and enhancing user services. For example, over 60% of libraries lacked dedicated IT staff, many librarians lacked computer training, and several libraries had collections and facilities below Medical Council of India standards. The study provides benchmarks for improving medical college libraries in the region.
This document summarizes a research paper on a two element U-slot loaded circular microstrip array antenna designed for dual band wireless local area network (WLAN) applications. The antenna was fabricated on a glass epoxy substrate measuring 9 x 5 x 0.16 cm3. Testing showed it achieved a maximum bandwidth of 44.9% between 4.37-6.9 GHz and was able to reduce the size of the antenna by 6.01% while maintaining a peak gain of 5.24 dB. Analysis of the radiation patterns found it exhibited broadside and linearly polarized radiation characteristics. The proposed antenna design was concluded to be simple, low-cost, and suitable for WLAN communication systems.
This document describes a data acquisition system and wireless telemetry system for unmanned aerial vehicles participating in the Advanced class of the SAE Aero Design competition. The system uses a laser altimeter and GPS module to measure altitude and location, which are sent via an Arduino microcontroller and XBee modules to a base station laptop. The laptop displays the measurements and has a button to trigger cargo release based on the vehicle's position and trajectory calculations. The system was designed to meet the competition's requirements of long range, low power, accurate measurements, and small size for recording altitude and assisting precise cargo deployment from UAVs.
An experimental study is conducted to determine the thermal output of a closed enclosure containing two cylindrical tubes through which biomass is burned. Temperature and energy measurements are taken at various points in the system. Convection and radiation are found to account for 33% of the total energy contained in the fuel wood, representing the useful thermal energy for applications like drying. Mathematical models are developed to describe the thermal energy flows and efficiency of the heat exchanger system.
This document summarizes a research paper that presents a dual open stub loaded square microstrip antenna (DOSMSA) for quad-band wireless communication applications. The DOSMSA consists of a square patch with two open stubs placed diagonally. Measurements show the DOSMSA operates over four bands from 2.88-8.55 GHz with gains up to 3.21 dB, compared to 0.8 dB for a basic square microstrip antenna. The DOSMSA design achieves quad-band operation using a simple stub loading technique while reducing the antenna's copper area by 7.8% compared to the basic design. The antenna has broadside radiation patterns and could enable applications in WLAN and WiMAX
Improvement in Error Resilience in BIST using hamming code (IJMTST Journal)
In the current scenario of IP core based SoC testing, a communication link is needed between the Circuit Under Test (CUT) and the ATPG before the test data is applied to the actual DUT. If there is a problem with this link, a bit of the compressed test data may flip. Compared to the original test data, a single bit flip can change the codeword, so the decompressed data will deviate from the original in a large number of bits. This deviation can severely degrade test quality and overall fault coverage, which may in turn affect yield. Error resilience is the capability of the test data to resist such bit flips. In this paper, earlier error-resilience methods are compared and a Hamming code based error-resilience technique is proposed to improve the error-resilience capacity of compressed test data. The method is applied to Huffman code based compressed test data of the widely used ISCAS benchmark circuits. The fault coverage measurement results show the effectiveness of the proposed method. The basic goal here is to survey the effect of bit flips on fault coverage and prepare a platform for further development in this avenue.
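To illustrate the underlying mechanism, a Hamming(7,4) code can correct any single flipped bit in a 7-bit codeword via its syndrome (a generic sketch of the classic code, not the paper's exact configuration):

```python
def hamming74_encode(d):
    # d: list of 4 data bits -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4      # parity over codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4      # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4      # parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    # Recompute the parity checks; the syndrome value (1..7) is the
    # 1-based position of a single flipped bit, 0 means no error.
    c = c[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1      # correct the flipped bit in place
    return [c[2], c[4], c[5], c[6]]
```

Protecting each chunk of compressed test data this way lets the decompressor recover the intended codeword after a single bit flip on the link, instead of expanding that flip into many wrong decompressed bits.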
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
International Journal of Engineering Research and Development (IJERD Editor)
Electrical, Electronics and Computer Engineering,
Information Engineering and Technology,
Mechanical, Industrial and Manufacturing Engineering,
Automation and Mechatronics Engineering,
Material and Chemical Engineering,
Civil and Architecture Engineering,
Biotechnology and Bio Engineering,
Environmental Engineering,
Petroleum and Mining Engineering,
Marine and Agriculture engineering,
Aerospace Engineering.
Dominant block guided optimal cache size estimation to maximize ipc of embedd... (ijesajournal)
Embedded system software is highly constrained from the performance, memory footprint, energy consumption and implementation cost viewpoints. It is always desirable to obtain better Instructions per Cycle (IPC), and the instruction cache makes a major contribution to improving IPC. Cache memories are realized on the same chip where the processor is running, which considerably increases the system cost as well. Hence, a trade-off must be maintained between cache size and the performance improvement offered. The number of cache lines and the cache line size are important parameters in cache design, and the design space for caches is quite large. It is time-consuming to execute the given application with different cache sizes on an instruction set simulator (ISS) to figure out the optimal cache size. In this paper, a technique is proposed to identify the number of cache lines and the cache line size for the L1 instruction cache that will offer the best or nearly best IPC. Cache size is derived, at a higher abstraction level, from basic block analysis in the Low Level Virtual Machine (LLVM) environment. The cache size estimated from the LLVM environment is cross-validated by simulating the set of benchmark applications with different cache sizes in SimpleScalar’s out-of-order simulator. The proposed method seems to be superior in terms of estimation accuracy and/or estimation time as compared to the existing methods for estimating the optimal cache size parameters (cache line size, number of cache lines).
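The design-space exploration the paper tries to avoid — simulating every candidate cache configuration against an address trace — can be sketched with a toy direct-mapped cache model (illustrative only; the looping trace and candidate sizes below are made up, and a real ISS models far more detail):

```python
def hit_rate(trace, num_lines, line_size):
    # Direct-mapped instruction cache: one tag per line, no associativity.
    tags = [None] * num_lines
    hits = 0
    for addr in trace:
        block = addr // line_size       # which memory block this address is in
        idx = block % num_lines         # which cache line it maps to
        if tags[idx] == block:
            hits += 1
        else:
            tags[idx] = block           # miss: fill the line
    return hits / len(trace)

# A loop body of 64 sequential 4-byte instructions, executed 100 times.
trace = [pc for _ in range(100) for pc in range(0, 256, 4)]

# Brute-force sweep of a small (num_lines, line_size) design space.
best = max(((hit_rate(trace, n, s), n, s)
            for n in (8, 16, 32) for s in (16, 32, 64)),
           key=lambda t: t[0])
```

Even this tiny sweep shows why exhaustive ISS simulation gets expensive: the grid of configurations grows multiplicatively, which motivates estimating a near-optimal size analytically from basic-block structure instead.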
Data reduction techniques to analyze NSL-KDD dataset (IAEME Publication)
The document discusses applying data reduction techniques to the NSL-KDD dataset to analyze network intrusion detection data. It describes how data reduction can minimize data size without losing important information. The document applies several data reduction algorithms to the NSL-KDD dataset and uses the output to train and test two classification algorithms, J48 and Naive Bayes. The results are compared based on accuracy, specificity, and sensitivity to determine which data reduction technique improves classification performance the most. The goal is to find an effective and efficient way to analyze large network intrusion detection datasets using data reduction and machine learning.
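The accuracy, specificity and sensitivity used to compare the classifiers follow directly from the confusion matrix; a small sketch (the `attack` positive-class label is an assumption for illustration):

```python
def confusion_metrics(y_true, y_pred, positive="attack"):
    # Count the four confusion-matrix cells for a binary labelling.
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),   # recall on the attack class
        "specificity": tn / (tn + fp),   # recall on the normal class
    }
```

Reporting sensitivity and specificity alongside accuracy matters for intrusion data, where the class balance can make raw accuracy misleading.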
IRJET- Enhanced Density Based Method for Clustering Data Stream (IRJET Journal)
The document presents a new incremental density-based algorithm called Enhanced Density-based Data Stream (EDDS) for clustering data streams. EDDS modifies the traditional DBSCAN algorithm to represent clusters using only surface core points. It detects clusters and outliers in incoming data chunks, merges new clusters with existing ones, and filters outliers for the next round. The algorithm prunes internal core points using heuristics and removes aged core points/outliers using a fading function. It was evaluated on datasets and found to improve clustering correctness with time complexity comparable to existing methods.
MATRIX CODE BASED MULTIPLE ERROR CORRECTION TECHNIQUE FOR N-BIT MEMORY DATA (VLSICS Design)
Constant shrinkage in the device dimensions has resulted in very dense memory cells, in which the probability of multiple bit errors is much higher. Conventional Error Correcting Codes (ECC) cannot correct multiple errors in memories, even though many of them are capable of detecting multiple errors. This paper presents a novel decoding algorithm to detect and correct multiple errors in memory based on Matrix Codes. The algorithm can correct a maximum of eleven errors in 32-bit data and a maximum of nine errors in 16-bit data. The proposed method can be used to improve the memory yield in the presence of multiple-bit upsets. It can be applied to correct burst errors, wherein a continuous sequence of data bits is affected when highly energetic particles from external radiation strike the memory and cause soft errors. The proposed technique performs better than the previously known technique of error detection and correction using Matrix Codes.
Black Box Model based Self Healing Solution for Stuck at Faults in Digital Ci... - IJECEIAES
The paper proposes a design strategy to retain the true nature of the output in the event of stuck-at faults at the interconnect levels of digital circuits. The procedure endeavours to design a combinational architecture that can identify stuck-at faults present in the intermediate lines and involves a healing mechanism to redress them. The simulated fault-injection procedure introduces both single and multiple stuck-at faults at the interconnect levels of a two-level combinational circuit in accordance with the directives of a control signal. The inherent heal facility attached to the formulation enables the circuit to reach the fault-free output even in the presence of faults. The ModelSim-based simulation results obtained for the Circuit Under Test [CUT], implemented using a Read Only Memory [ROM], demonstrate the ability of the system to survive the influence of faults. The comparison made with traditional Triple Modular Redundancy [TMR] exhibits the superiority of the scheme in terms of fault coverage and area overhead.
Survey on Software Data Reduction Techniques Accomplishing Bug Triage - IRJET Journal
This document discusses various techniques for software data reduction to improve the accuracy of bug triage. It first provides background on bug triage and the challenges it aims to address like large volumes of low quality bug data. It then surveys literature on related techniques like automated test generation and text mining approaches. The document describes various text mining methods like term-based, phrase-based, concept-based and pattern taxonomy methods. It also covers data reduction techniques and their benefits for bug triage. Different classification techniques for bug identification are explained, including decision trees, nearest neighbor classifier and artificial neural networks.
AN OPTIMIZED BLOCK ESTIMATION BASED IMAGE COMPRESSION AND DECOMPRESSION ALGOR... - IAEME Publication
This document presents a new optimized block estimation based image compression and decompression algorithm. The proposed method divides images into blocks and estimates each block from the previous frame using sum of absolute differences to determine the best matching block. It then compresses the luminance channel using JPEG-LS coding and predicts chrominance channels using hierarchical decomposition and directional prediction. Experimental results on test images show the proposed method achieves higher compression rates and lower distortion compared to traditional models that use hierarchical schemes and raster scan prediction.
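Block estimation by sum of absolute differences (SAD), as mentioned above, scores each candidate block against the target and keeps the lowest-scoring one. A small illustrative sketch (the flattened blocks and toy values are assumptions, not the paper's data):

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(a - b) for a, b in zip(block_a, block_b))

def best_match(target, candidates):
    """Index of the candidate block with the lowest SAD score."""
    return min(range(len(candidates)), key=lambda i: sad(target, candidates[i]))

# Blocks are flattened pixel lists; candidate 1 is clearly closest.
target = [10, 20, 30, 40]
candidates = [[0, 0, 0, 0], [11, 19, 31, 39], [90, 90, 90, 90]]
print(best_match(target, candidates))  # 1
```

In a real encoder the candidates would be blocks taken from a search window in the previous frame, and the winning block's offset becomes the motion vector.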
Partitioning of Query Processing in Distributed Database System to Improve Th... - IRJET Journal
This document discusses improving query processing throughput in distributed database systems through partitioning algorithms. It proposes using a graph partitioning algorithm called Congestion Avoidance (CA) to partition query tasks in a way that avoids system congestion and improves throughput. The CA algorithm iteratively identifies congestion points that reduce throughput and moves tasks between partitions to potentially increase throughput. It is evaluated as being faster than other partitioning algorithms while achieving comparable throughput improvements. A parallel execution algorithm is also used to concurrently execute partitioned query tasks across distributed nodes to minimize latency and further improve throughput.
Improved Error Detection and Data Recovery Architecture for Motion Estimation... - IJERA Editor
The document discusses an improved error detection and data recovery architecture for motion estimation testing applications. It presents a residue-and-quotient (RQ) code-based design to embed into motion estimation for detecting and recovering from errors in processing elements. The proposed design can effectively detect errors and recover data with acceptable area overhead and timing penalty. It performs satisfactorily in terms of throughput and reliability for motion estimation testing.
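A residue-and-quotient (RQ) code generally represents a value x by its quotient and residue with respect to a modulus m, so a corrupted result can be caught by recomputing q*m + r. A hedged sketch of that general idea (the modulus and checker are illustrative, not the paper's processing-element architecture):

```python
M = 16  # modulus; a power of two keeps the hardware cheap (assumption)

def rq_encode(x):
    """Split a value into its (quotient, residue) pair w.r.t. M."""
    return x // M, x % M

def rq_check(x, q, r):
    """Flag an error when the value no longer matches its RQ code."""
    return x == q * M + r and r == x % M

q, r = rq_encode(200)
print(rq_check(200, q, r))   # True: value and code are consistent
print(rq_check(201, q, r))   # False: a corrupted result is caught
```

Recovery in such an architecture would then re-derive the correct value from the stored quotient and residue rather than merely flagging the mismatch.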
Robust Fault Tolerance in Content Addressable Memory Interface - IOSRJVSP
With the rapid improvement in data exchange, large memory devices have emerged in the recent past. Operational control of such large memories has become a tedious task due to the faster, distributed nature of memory units. During memory access, it is observed that data being written or fetched often encounter faulty locations, so faulty data are written to or fetched from the addressed locations. In real-time applications this error cannot be tolerated, as it leads to variation in operating conditions dependent on the memory data. Hence, optimal fault-tolerance control is required in content addressable memory. In this paper, we present a fault-tolerance approach that controls the fault-addressing overhead by introducing a new addressing scheme using redundant control modeling of the fault address unit. The presented approach achieves fault control over multiple fault locations in different dimensions with redundant coding.
Developing and comparing an encoding system using vector quantization & - IAEME Publication
This document summarizes research on developing and comparing an encoding system using vector quantization and edge detection. Vector quantization maps sets of input data to single codewords, allowing for data compression. Edge detection identifies sharp discontinuities in images to preserve structural properties while reducing data. The researchers applied vector quantization and various edge detection techniques like Canny edge detection to compress video frames. They compared the compression ratio and edge detection results of their approach to existing methods like Haar transforms. Their system was able to achieve higher compression ratios and better edge detection performance compared to Haar transforms.
Developing and comparing an encoding system using vector quantization & - IAEME Publication
Developing and comparing an encoding system using vector quantization and edge detection for image and video compression. Downsample methods provide higher compression ratios and better edge detection compared to Haar transforms, with encoding times of microseconds for images versus seconds for Haar. Edge detection on compressed videos and images shows compression is effective at preserving edges. Compression ratios increased up to 99.67% with downsample versus 75% for Haar, with downsample providing more accurate edge detection. The encoding system applies vector quantization for lossy data compression and edge detection techniques to evaluate compression quality.
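Vector quantization, as described above, maps each input vector to the index of its nearest codeword, so only small codebook indices need to be stored or transmitted. A minimal sketch (the toy codebook and input vectors are assumptions, not the paper's trained codebook):

```python
from math import dist

CODEBOOK = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0)]  # toy 2-D codebook

def quantize(vec):
    """Map an input vector to the index of its nearest codeword."""
    return min(range(len(CODEBOOK)), key=lambda i: dist(vec, CODEBOOK[i]))

# Each 2-D vector collapses to a small index, which is the compression step.
print([quantize(v) for v in [(0.1, 0.2), (0.9, 1.1), (0.2, 0.8)]])  # [0, 1, 2]
```

Decompression simply looks the index back up in the codebook, which is why VQ is lossy: the reconstruction is the codeword, not the original vector.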
Image Compression Through Combination Advantages From Existing Techniques - CSCJournals
The tremendous growth of digital data has led to a high necessity for compression applications, either to minimize memory usage or to increase transmission speed. Although many techniques already exist, there is still room and need for new techniques in this area of study. With this paper we aim to introduce a new technique for data compression through pixel combinations, usable for both lossless and lossy compression. The technique can be used as a standalone solution, or as an add-on to some other data compression method for better results. It is applied here only to images, but it can easily be modified to work on any other type of data. We present a side-by-side comparison, in terms of compression rate, of our technique with other widely used image compression methods, and show that the compression ratio it achieves ranks among the best in the literature while the algorithm remains simple and easily extensible. Finally, the case is made for the ability of our method to intrinsically support and enhance methods used for cryptography, steganography and watermarking.
MODELING AND ANALYSIS OF SURFACE ROUGHNESS AND WHITE LAYER THICKNESS IN WIRE-... - IAEME Publication
White layer thickness (WLT) and surface roughness in wire electric discharge turning (WEDT) of tungsten carbide composite have been modelled through response surface methodology (RSM). A Taguchi standard design of experiments involving five input variables with three levels was employed to establish a mathematical model between input parameters and responses. Percentage of cobalt content, spindle speed, pulse on-time, wire feed and pulse off-time were varied during the experimental tests based on the Taguchi orthogonal array L27 (3^13). Analysis of variance (ANOVA) revealed that the mathematical models obtained can adequately describe performance within the ranges of the factors considered. There was good agreement between the experimental and predicted values in this study.
A STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURS - IAEME Publication
The study explores the reasons for transgenders to become entrepreneurs. In this study the transgender entrepreneur was taken as the independent variable and the reasons to become one as the dependent variable. Data were collected through a structured questionnaire containing a five-point Likert scale. The study examined data from 30 transgender entrepreneurs in the Salem Municipal Corporation of Tamil Nadu State, India. A simple random sampling technique was used. The Garrett Ranking Technique (percentile position, mean scores) was used for the analysis to identify the top 13 stimulus factors for establishing a trans entrepreneurial venture. The economic advancement of a nation is governed by the outcome of resolute entrepreneurial doings. The conception of entrepreneurship has stretched and materialized to the socially deflated, uncharted sections of the transgender community. Presently transgenders have smashed their stereotypes and are making recent headlines of achievements in various fields of Indian society. The trans community is gradually being observed in a new light and has been trying to achieve prospective growth in entrepreneurship. The findings of the research revealed that optimistic changes are taking place towards an affirmative societal outlook on transgender entrepreneurial ventureship. It also laid emphasis on other transgenders renovating their traditional living. The paper also highlights that legislators and supervisory bodies should endorse impartial canons and reforms in the Tamil Nadu Transgender Welfare Board Association.
BROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURS - IAEME Publication
Since ages, gender difference has always been a debatable theme, whether caused by nature, evolution or environment. The birth of a transgender is dreadful not only for the child but also for their parents. The pain of living in the wrong physique and being treated as a second-class, victimized citizen is outrageous and fully harboured with vicious, baseless negative scruples. For so long, social exclusion has perpetuated inequality and deprivation, with transgenders experiencing ingrained malign stigma and being besieged victims of crime or violence across their life spans. They are pushed into a murky way of life with a source of eternal disgust, bereft sexual potency and perennial fear. Although they are highly visible, very little is known about them. The common public needs to comprehend the ravaged arrogance heaped on these souls and assist in integrating them into the mainstream by offering equal opportunity, treating them with humanity and respecting their dignity. Entrepreneurship in the current age is endorsing the gender fairness movement. Unstable careers and economic inadequacy have inclined one group of gender-variant people, called transgenders, to become entrepreneurs. These tiny budding entrepreneurs have brought about economic transition by means of employment, freedom from the clutches of stereotyped jobs, a raised standard of living and a measure of financial empowerment. Despite all these inhibitions, they were able to find a platform for skill-set development that ignited them to enter the entrepreneurial domain. This paper epitomizes the skill sets of trans entrepreneurs of Thoothukudi Municipal Corporation of Tamil Nadu State and is a groundbreaking determination to sightsee the various skills incorporated and their impact on entrepreneurship.
DETERMINANTS AFFECTING THE USER'S INTENTION TO USE MOBILE BANKING APPLICATIONS - IAEME Publication
The banking and financial services industries are experiencing increased technology penetration. Among them, the banking industry has made technological advancements to better serve the general populace. The economy focused on transforming the banking sector's system into a cashless, paperless, and faceless one. The researcher wants to evaluate the user's intention for utilising a mobile banking application. The study also examines the variables affecting the user's behaviour intention when selecting specific applications for financial transactions. The researcher employed a well-structured questionnaire and a descriptive study methodology to gather the respondents' primary data utilising the snowball sampling technique. The study includes variables like performance expectations, effort expectations, social impact, enabling circumstances, and perceived risk. Each of the aforementioned variables has a major impact on how users utilise mobile banking applications. The outcome will assist the service provider in comprehending the user's history with mobile banking applications.
ANALYSE THE USER PREDILECTION ON GPAY AND PHONEPE FOR DIGITAL TRANSACTIONS - IAEME Publication
Technology upgradation in the banking sector moved the economy towards online transactions using mobile applications. This system enabled connectivity between banks, merchants and users in a convenient mode. There are various applications used for online transactions, such as Google Pay, Paytm, Freecharge, MobiKwik, Oxigen, PhonePe and so on, including mobile banking applications. The study aimed at evaluating the predilection of users in adopting digital transactions. The study is descriptive in nature. The researcher used random sampling techniques to collect the data. The findings reveal that the applications differ in the quality of service rendered by GPay and PhonePe. The researcher suggests that the PhonePe application should focus on a more user-friendly interface, and GPay on motivating users to appreciate the request-money feature and the modes of payment in the application.
VOICE BASED ATM FOR VISUALLY IMPAIRED USING ARDUINO - IAEME Publication
The prototype of a voice-based ATM for the visually impaired using Arduino is meant to help people who are blind. It uses RFID cards that contain the user's fingerprint encrypted on them and interacts with users through voice commands. The ATM operates when a sensor detects the presence of one person in the cabin. After scanning the RFID card, it asks the user to select a mode, normal or blind. The user can select the respective mode through voice input; if blind mode is selected, balance checks or cash withdrawals can be done through voice input. The normal-mode procedure is the same as in an existing ATM.
IMPACT OF EMOTIONAL INTELLIGENCE ON HUMAN RESOURCE MANAGEMENT PRACTICES AMONG... - IAEME Publication
There is increasing acceptability of emotional intelligence as a major factor in personality assessment and effective human resource management. Emotional intelligence as the ability to build capacity, empathize, co-operate, motivate and develop others cannot be divorced from both effective performance and human resource management systems. The human person is crucial in defining organizational leadership and fortunes in terms of challenges and opportunities and walking across both multinational and bilateral relationships. The growing complexity of the business world requires a great deal of self-confidence, integrity, communication, conflict and diversity management to keep the global enterprise within the paths of productivity and sustainability. Using the exploratory research design and 255 participants the result of this original study indicates strong positive correlation between emotional intelligence and effective human resource management. The paper offers suggestions on further studies between emotional intelligence and human capital development and recommends for conflict management as an integral part of effective human resource management.
VISUALISING AGING PARENTS & THEIR CLOSE CARERS LIFE JOURNEY IN AGING ECONOMY - IAEME Publication
Our life journey, in general, is closely defined by the way we understand the meaning of why we coexist and deal with its challenges. As we develop the "inspiration economy", we could say that nearly all of the challenges we have faced are opportunities that help us to discover the rest of our journey. In this note paper, we explore how being faced with the opportunity of being a close carer for an aging parent with dementia brought intangible discoveries that changed our insight of the meaning of the rest of our life journey.
A STUDY ON THE IMPACT OF ORGANIZATIONAL CULTURE ON THE EFFECTIVENESS OF PERFO... - IAEME Publication
The main objective of this study is to analyze the impact of aspects of Organizational Culture on the Effectiveness of the Performance Management System (PMS) in the Health Care Organization at Thanjavur. Organizational Culture and PMS play a crucial role in present-day organizations in achieving their objectives. PMS needs employees’ cooperation to achieve its intended objectives. Employees' cooperation depends upon the organization’s culture. The present study uses exploratory research to examine the relationship between the Organization's culture and the Effectiveness of the Performance Management System. The study uses a Structured Questionnaire to collect the primary data. For this study, Thirty-six non-clinical employees were selected from twelve randomly selected Health Care organizations at Thanjavur. Thirty-two fully completed questionnaires were received.
Living in the 21st century in itself reminds all of us of the necessity of the police and its administration. The more we enter modern society and culture, the more we require the services of the so-called 'khaki-worthy' men, i.e., the police personnel. Whether we speak of the Indian police or another nation's police, they all have the same recognition as they have in India. But, as already mentioned, their services and requirements have changed after incidents like that of 26th November 2008, where they sacrificed themselves without any hesitation and without caring for their own lives, their family members or their wards. In other words, they are like our heroes and mentors who can guide us out of the darkness of fear, militancy, corruption and the other dark sides of life. Now the question arises: if Gandhi were alive today, what would be his reaction to the police and its functioning? Would he have something different in mind now from what he had before Partition, or would he start some Satyagraha seeking improvement in the functioning of police administration? These questions, or rather nightmares, can come to anyone's mind when so much confusion prevails in our minds, when there is so much corruption in society, and when the police's working is also under question because of one case or another throughout India. It is a matter of great concern that we have to think over our administration and our practical approach, because police personnel are also like us; they are part and parcel of our society and one among us, so why are we all pointing fingers at them?
A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED... - IAEME Publication
The goal of this study was to see how talent management affected employee retention in selected IT organizations in Chennai. The fundamental issue was the difficulty of attracting, hiring and retaining talented personnel who perform well, and the gap between supply and demand in acquiring and retaining talent within firms. The study's main goals were to determine the impact of talent management on employee retention in IT companies in Chennai, to investigate talent management strategies that IT companies could use to improve talent acquisition, performance management and career planning, and to formulate retention strategies that IT firms could use. The respondents were given a structured close-ended questionnaire with a 5-point Likert scale as part of the study's quantitative research design. The target population consisted of 289 IT professionals. The questionnaires were distributed and collected by the researcher directly. The Statistical Package for the Social Sciences (SPSS) was used to collect and analyse the questionnaire responses. Hypotheses formulated for the various areas of the study were tested using a variety of statistical tests. The key findings of the study suggested that talent management had an impact on employee retention, and that there is a clear link between the implementation of talent management and retention measures. Management should provide enough training and development for employees, clarify job responsibilities, provide adequate remuneration packages, and recognise employees for exceptional performance.
ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE... - IAEME Publication
Globally, millions of dollars are spent by organizations on employing skilled Information Technology (IT) professionals, and it is costly to replace IT professionals possessing the technical skills and competencies that aid in interconnecting business processes. Organizations' employment tactics have been forced to alter by globalization and technological innovation, as they consistently downsize to remain lean, outsource to concentrate on core competencies, and restructure or reallocate personnel to gain efficiency. As other jobs, organizations or professions become comparatively more attractive in a shifting employment landscape, these alterations trigger both involuntary and voluntary turnover. The employee view on jobs has also been affected by the COVID-19 pandemic and the employee-driven labour market, so effective strategies are necessary to tackle the withdrawal rate of employees. By associating Emotional Intelligence (EI) with Talent Management (TM) in the IT industry, the rise in attrition rate was analyzed in this study. Responses were collected from only 303 of the 350 participants to whom questionnaires were distributed. The data were gathered from employees of IT organizations located in Bangalore (India), using a simple random sampling methodology. Hypotheses were generated and tested, and the effect of EI and TM, along with a regression analysis between TM and EI, was examined. The outcomes indicated that employee and Organizational Performance (OP) were elevated by effective EI and TM.
INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD... - IAEME Publication
By implementing a talent management strategy, organizations can retain their skilled professionals while also improving their overall performance. Talent management is the process of properly utilizing the right individuals, preparing them for future top positions, reviewing and managing their performance, and keeping them from leaving the organization. It is employee performance that determines the success of every organization. A firm quickly obtains an upper hand over its rivals if its employees have particular skills that cannot be duplicated by competitors. Thus, firms are centred on creating successful talent management practices and processes to manage their unique human resources. Firms also endeavour to keep their top and key staff, since if they leave, the whole store of knowledge leaves the firm's hands. The study's objective was to determine the impact of talent management on organizational performance among selected IT organizations in Chennai. The study finds that talent management has a limited effect on performance; if this talent is appropriately managed and implemented properly, organizations can make the most of their retained assets to support development and productivity, both monetarily and non-monetarily.
A STUDY OF VARIOUS TYPES OF LOANS OF SELECTED PUBLIC AND PRIVATE SECTOR BANKS... - IAEME Publication
The Banking Regulation Act of India, 1949 defines banking as "acceptance of deposits for the purpose of lending or investment from the public, repayment on demand or otherwise and withdrawable through cheques, drafts, order or otherwise". The major participants of the Indian financial system are commercial banks; financial institutions encompassing term-lending institutions, investment institutions, specialized financial institutions and state-level development banks; non-banking financial companies (NBFCs); and other market intermediaries such as stock brokers and money lenders, certain variants of NBFCs being among the oldest market participants. The asset quality of banks is one of the most important indicators of their financial health. The Indian banking sector has been facing severe problems of increasing Non-Performing Assets (NPAs). NPA growth directly and indirectly affects the quality of assets and profitability of banks; it also reflects the efficiency of banks' credit risk management and recovery effectiveness. NPAs do not generate any income, whereas the bank is required to make provisions for such assets, which is why they are a double-edged weapon. This paper outlines the quality of bank loans of different types, namely housing, agriculture and MSME loans, of selected public and private sector banks in the state of Haryana. The study highlights problems associated with the role of commercial banks in financing Small and Medium Scale Enterprises (SMEs). The overall objective of the research was to assess the effect of existing financing provisions for the setting up and operation of MSMEs in the country and to generate recommendations for more robust financing mechanisms for successful operation of MSMEs, in turn understanding the impact of MSME loans on financial institutions due to NPAs.
Much research has been conducted on the topic of Non-Performing Asset (NPA) management, concerning particular banks, comparative studies of public and private banks, etc. In this paper the researcher considers the aggregate data of selected public sector and private sector banks and attempts to compare the NPAs of housing, agriculture and MSME loans of public and private sector banks in the state of Haryana. The tools used in the study are averages, the ANOVA test and variance. The findings reveal that NPAs are a common problem for both public and private sector banks and are associated with all types of loans, whether housing loans, agriculture loans or loans to SMEs. NPAs of both public and private sector banks show an increasing trend. In 2010-11 the GNPA of the public and private sectors was at the same level, 2%, but after 2010-11 it increased manifold, and at present GNPA in some banks is more than 15%. This shows the dark side of the Indian banking sector.
EXPERIMENTAL STUDY OF MECHANICAL AND TRIBOLOGICAL RELATION OF NYLON/BaSO4 POL... - IAEME Publication
An experiment conducted in this study found that BaSO4 changed the mechanical properties of Nylon 6. Nylon 6/BaSO4 composites were prepared by changing the weight ratios of BaSO4. The researchers investigated the hardness and wear behaviour of Nylon 6/BaSO4 composites. Experiments were done based on a Taguchi L9 design. The hardness number of the composites was measured using a Rockwell hardness testing apparatus. The wear behaviour of Nylon/BaSO4 was measured on a pin-on-disc wear monitor by varying reinforcement, sliding speed and sliding distance, and the microstructure of the crack surfaces was observed by SEM. This study finds significant contributions to ultimate strength from increasing BaSO4 content up to 16% in the composites, while sliding speed contributes 72.45% to the wear rate.
ROLE OF SOCIAL ENTREPRENEURSHIP IN RURAL DEVELOPMENT OF INDIA - PROBLEMS AND ... - IAEME Publication
The majority of the population in India lives in villages. The village is the backbone of the country, and village or rural industries play an important role in the national economy, particularly in rural development. Developing the rural economy is one of the key indicators of a country's success. Whether it be the need to look after the welfare of farmers or to invest in rural infrastructure, governments have to ensure that rural development isn't compromised. The economic development of our country largely depends on the progress of rural areas and the standard of living of the rural masses. Rural entrepreneurship is based on stimulating local entrepreneurial talent and the subsequent growth of indigenous enterprises. It recognizes opportunity in rural areas and accelerates a unique blend of resources either inside or outside of agriculture. Rural entrepreneurship brings economic value to the rural sector by creating new methods of production, new markets and new products, and generates employment opportunities, thereby ensuring continuous rural development. Social entrepreneurship has the direct and primary objective of serving society along with earning profits. So social entrepreneurship is different from economic entrepreneurship, as its basic objective is not to earn profits but to provide innovative solutions to meet societal needs which are not taken care of by the majority of entrepreneurs, who are in business with profit making as the sole objective. So social entrepreneurs have huge growth potential, particularly in developing countries like India where we have huge societal disparities in terms of the financial positions of the population.
Still, 22 percent of the Indian population is below the poverty line, and there is also disparity between the rural and urban population in terms of families living under the BPL: 25.7 percent of the rural population and 13.7 percent of the urban population are under the BPL, which clearly shows the disparity of poor people between rural and urban areas. The need to develop social entrepreneurship in agriculture is dictated by a large number of social problems, including low living standards, unemployment and social tension; these factors led to the emergence of the practice of social entrepreneurship. The research problem lies in disclosing the importance of the role of social entrepreneurship in the rural development of India. The paper analyses the tendencies of social entrepreneurship in India and presents successful examples of such businesses, in order to provide recommendations on how to improve the situation in rural areas in terms of social entrepreneurship development. The Indian government has made some steps towards the development of social enterprises, social entrepreneurship and social innovation, but a lot remains to be improved.
OPTIMAL RECONFIGURATION OF POWER DISTRIBUTION RADIAL NETWORK USING HYBRID MET... - IAEME Publication
The distribution system is a critical link between the electric power distributor and the consumers. The radial distribution network is the topology most commonly used by electric utilities; however, it suffers from technical issues such as large power losses, which affect the quality of supply. Nowadays, the introduction of Distributed Generation (DG) units helps improve and support the voltage profile of the network as well as the performance of system components through power-loss mitigation. In this study, network reconfiguration was carried out using a hybrid of two meta-heuristic algorithms, Particle Swarm Optimization and Gravitational Search Algorithm (PSO-GSA), to enhance power quality and voltage profile when applied simultaneously with DG units. The Backward/Forward Sweep Method was used for load-flow analysis and simulated in MATLAB. Five cases were considered in the reconfiguration based on the contribution of DG units, and the proposed method was tested on the IEEE 33-bus system. Based on the results, the voltage profile improved from 0.9038 p.u. to 0.9594 p.u., and the integration of DG in the network reduced power losses from 210.98 kW to 69.3963 kW. Simulated results are presented to show the performance of each case.
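The PSO half of the hybrid drives each candidate configuration toward the best solutions found so far. A minimal sketch of the core velocity/position update (the coefficients, the continuous encoding, and the function name `pso_step` are illustrative assumptions, not taken from the paper):

```python
import random

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One PSO iteration: pull each particle toward its personal best
    (pbest) and the global best (gbest), with inertia weight w."""
    new_pos, new_vel = [], []
    for x, v, p in zip(positions, velocities, pbest):
        r1, r2 = random.random(), random.random()
        v_new = w * v + c1 * r1 * (p - x) + c2 * r2 * (gbest - x)
        new_vel.append(v_new)
        new_pos.append(x + v_new)
    return new_pos, new_vel
```

In the hybrid PSO-GSA of the paper, a gravitational-attraction term from GSA would additionally shape the velocity update; here only the plain PSO step is shown.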
APPLICATION OF FRUGAL APPROACH FOR PRODUCTIVITY IMPROVEMENT - A CASE STUDY OF... - IAEME Publication
Manufacturing industries have witnessed an outburst in productivity. For productivity improvement, manufacturing industries are taking various initiatives using lean tools and techniques. In several industries, however, the frugal approach is applied to product design and services as an improvement tool; it has helped prove that less is more and appears to contribute indirectly to productivity. Hence, there is a need to understand the status of frugal-approach application in manufacturing. All manufacturing industries are putting continuous effort into competitive existence and are coming up with effective, efficient solutions in manufacturing processes and operations. To overcome current challenges, manufacturers have started using the frugal approach in product design and services. The methodology adopted for this study draws on both primary and secondary data sources: interviews and observation for the primary source, and a review of available literature from websites, printed magazines, manuals, etc., for the secondary source. An attempt is made to understand the application of the frugal approach through the study of a manufacturing-industry project; the industry selected is Mahindra and Mahindra Ltd. This paper will help researchers find the connections between the two concepts of productivity improvement and the frugal approach, understand the significance of the frugal approach for productivity improvement, and grasp the current scenario of the frugal approach in manufacturing. In manufacturing industries, various processes are involved in delivering the final product, and productivity plays a critical role in converting inputs into outputs.
Hence this study will help establish the status of the frugal approach in productivity-improvement programmes. The notion of frugality can be viewed as an approach towards productivity improvement in manufacturing industries.
A MULTIPLE-CHANNEL QUEUING MODELS ON FUZZY ENVIRONMENT - IAEME Publication
In this paper, we investigate a fuzzy-environment-based multiple-channel queuing model (M/M/C):(∞/FCFS) and study its performance under realistic conditions. A nonagonal fuzzy number is applied to analyse the relevant performance measures of the model. Based on the sub-interval average ranking method for nonagonal fuzzy numbers, the fuzzy parameters are converted to crisp ones. Numerical results reveal the efficiency of this method; intuitively, the fuzzy environment adapts very well to the multiple-channel queuing model.
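Once the fuzzy arrival and service rates have been ranked down to crisp values, the standard M/M/c performance formulas apply. A minimal sketch of those crisp measures (the rates in the usage example are illustrative; the nonagonal defuzzification step itself is not shown):

```python
from math import factorial

def mmc_metrics(lam, mu, c):
    """Standard M/M/c (FCFS, infinite capacity) measures for crisp
    (defuzzified) arrival rate lam, service rate mu, and c servers."""
    rho = lam / (c * mu)          # server utilization
    assert rho < 1, "system must be stable"
    a = lam / mu                  # offered load in Erlangs
    # Probability the system is empty
    p0 = 1.0 / (sum(a**n / factorial(n) for n in range(c))
                + a**c / (factorial(c) * (1 - rho)))
    lq = (a**c * rho) / (factorial(c) * (1 - rho) ** 2) * p0  # mean queue length
    wq = lq / lam                 # mean wait in queue (Little's law)
    w = wq + 1 / mu               # mean time in system
    l = lam * w                   # mean number in system
    return {"Lq": lq, "Wq": wq, "W": w, "L": l}

# Example with illustrative crisp rates: lam = 2, mu = 3, c = 2 servers
print(mmc_metrics(2.0, 3.0, 2))
```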
Fueling AI with Great Data with Airbyte Webinar - Zilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly identified vulnerabilities.
Northern Engraving | Nameplate Manufacturing Process - 2024 - Northern Engraving
Manufacturing custom quality metal nameplates and badges involves several standard operations. Processes include sheet prep, lithography, screening, coating, punch press and inspection. All decoration is completed in the flat sheet with adhesive and tooling operations following. The possibilities for creating unique durable nameplates are endless. How will you create your brand identity? We can help!
zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application... - Alex Pruden
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure. In this work we present LatticeFold, the first lattice-based folding protocol, based on the Module-SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme; the difficulty is ensuring that extracted witnesses are low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low norm no matter how many rounds of folding are used. Our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing post-quantum security.
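To see why additive homomorphism is the crux, note that in the discrete-log-style schemes the abstract mentions, two committed instances can be folded with a random challenge r so that the folded witness still opens the folded commitment. A toy, insecure Python sketch of that folding identity (LatticeFold itself replaces this with Ajtai/Module-SIS commitments plus the low-norm machinery; the modulus and bases here are illustrative, not secure parameters):

```python
# Toy multi-base commitment over Z_p, homomorphic in the exponent:
# commit(w1) * commit(w2)^r == commit(w1 + r*w2), exponents mod p-1.
P = 2**61 - 1        # a Mersenne prime used as a toy modulus
GENS = [3, 7, 11]    # toy bases, one per witness coordinate

def commit(gens, w, p):
    """C = prod g_i^{w_i} mod p (no blinding factor; toy only)."""
    c = 1
    for g, wi in zip(gens, w):
        c = c * pow(g, wi, p) % p
    return c

def fold(c1, c2, w1, w2, r, p):
    """Fold two instances with challenge r: the commitments combine
    homomorphically, and the folded witness opens the folded commitment."""
    c_folded = c1 * pow(c2, r, p) % p
    w_folded = [(a + r * b) % (p - 1) for a, b in zip(w1, w2)]
    return c_folded, w_folded
```

In a lattice setting the analogous linear combination exists, but the folded witness's norm grows with each round, which is exactly the problem LatticeFold's sumcheck-based technique controls.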
Paper Link: https://eprint.iacr.org/2024/257
Main news related to the CCS TSI 2023 (2023/1695) - Jakub Marek
An English 🇬🇧 translation of the presentation accompanying the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on communications and signalling systems on railways, held at the Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). It was attended by around 500 participants and 200 online followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
"Frontline Battles with DDoS: Best practices and Lessons Learned", Igor Ivaniuk - Fwdays
In this talk we will discuss DDoS protection tools and best practices, network architectures, and what AWS has to offer. We will also look into one of the largest DDoS attacks on Ukrainian infrastructure, which happened in February 2022, and see what techniques helped keep web resources available for Ukrainians and how AWS improved DDoS protection for all customers based on the Ukraine experience.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
[OReilly Superstream] Occupy the Space: A grassroots guide to engineering (an... - Jason Yip
The typical problem in product engineering is not bad strategy, so much as “no strategy”. This leads to confusion, lack of motivation, and incoherent action. The next time you look for a strategy and find an empty space, instead of waiting for it to be filled, I will show you how to fill it in yourself. If you’re wrong, it forces a correction. If you’re right, it helps create focus. I’ll share how I’ve approached this in the past, both what works and lessons for what didn’t work so well.
Connector Corner: Seamlessly power UiPath Apps, GenAI with prebuilt connectors - DianaGray10
Join us to learn how UiPath Apps can directly and easily interact with prebuilt connectors via Integration Service--including Salesforce, ServiceNow, Open GenAI, and more.
The best part is you can achieve this without building a custom workflow! Say goodbye to the hassle of using separate automations to call APIs. By seamlessly integrating within App Studio, you can now easily streamline your workflow, while gaining direct access to our Connector Catalog of popular applications.
We’ll discuss and demo the benefits of UiPath Apps and connectors including:
Creating a compelling user experience for any software, without the limitations of APIs.
Accelerating the app creation process, saving time and effort
Enjoying high-performance CRUD (create, read, update, delete) operations, for seamless data management.
Speakers:
Russell Alfeche, Technology Leader, RPA at qBotic and UiPath MVP
Charlie Greenberg, host
In the realm of cybersecurity, offensive security practices act as a critical shield. By simulating real-world attacks in a controlled environment, these techniques expose vulnerabilities before malicious actors can exploit them. This proactive approach allows manufacturers to identify and fix weaknesses, significantly enhancing system security.
This presentation delves into the development of a system designed to mimic Galileo's Open Service signal using software-defined radio (SDR) technology. We'll begin with a foundational overview of both Global Navigation Satellite Systems (GNSS) and the intricacies of digital signal processing.
The presentation culminates in a live demonstration. We'll showcase the manipulation of Galileo's Open Service pilot signal, simulating an attack on various software and hardware systems. This practical demonstration serves to highlight the potential consequences of unaddressed vulnerabilities, emphasizing the importance of offensive security practices in safeguarding critical infrastructure.