The main objective of video compression is to compress video with the least possible loss in order to reduce transmission bandwidth and storage requirements. This paper discusses different approaches to video compression for better transmission of video frames in multimedia applications. Video compression methods such as the frame-difference approach, the PCA-based method, the accordion function, the fuzzy concept, EZW, and FSBM are analyzed, and the methods are compared for performance, speed, accuracy, and which produces better visual quality.
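As a minimal illustration of the frame-difference approach named above, the following Python sketch (an assumption of this edit, not code from any of the surveyed papers) encodes each 8-bit grayscale frame as its difference from the previous one, which typically yields a sparse, highly compressible residual:

```python
import numpy as np

def frame_difference_encode(frames):
    """Encode a list of grayscale frames as one key frame plus residuals."""
    key = frames[0]
    residuals = [frames[i].astype(np.int16) - frames[i - 1].astype(np.int16)
                 for i in range(1, len(frames))]
    return key, residuals

def frame_difference_decode(key, residuals):
    """Reconstruct the original frames from the key frame and residuals."""
    frames = [key]
    for r in residuals:
        frames.append((frames[-1].astype(np.int16) + r).astype(np.uint8))
    return frames

# Toy usage: two nearly identical 4x4 frames produce a mostly-zero residual.
f0 = np.full((4, 4), 100, dtype=np.uint8)
f1 = f0.copy(); f1[0, 0] = 105
key, res = frame_difference_encode([f0, f1])
assert np.array_equal(frame_difference_decode(key, res)[1], f1)  # lossless
```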
SQUASHED JPEG IMAGE COMPRESSION VIA SPARSE MATRIX (ijcsit)
Image compression is needed to store and transmit digital images in the least memory space and bandwidth. Image compression refers to the process of minimizing the image size by removing redundant data bits in such a manner that the quality of the image is not degraded; it thus reduces the size of an image without reducing its quality. This paper attempts to enhance basic JPEG compression by further reducing image size. The proposed technique amends the conventional run-length coding used in JPEG (Joint Photographic Experts Group) image compression by using the concept of a sparse matrix. In this algorithm, redundant data is completely eliminated while leaving the quality of the image unaltered. The JPEG standard specifies three steps: discrete cosine transform, quantization, and entropy coding. The proposed work aims at enhancing the third step, entropy coding.
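As a hedged sketch of the sparse-matrix idea (illustrative only, not the authors' exact entropy-coding amendment), the Python fragment below stores a quantized 8x8 DCT block, which is mostly zeros, as a list of (index, value) pairs rather than explicit zero runs:

```python
import numpy as np

def sparse_encode(block):
    """Store only the nonzero entries of a flattened quantized block."""
    flat = block.ravel()
    nz = np.flatnonzero(flat)
    return block.shape, list(zip(nz.tolist(), flat[nz].tolist()))

def sparse_decode(shape, pairs):
    flat = np.zeros(np.prod(shape), dtype=np.int32)
    for idx, val in pairs:
        flat[idx] = val
    return flat.reshape(shape)

# A typical quantized DCT block: a few nonzero low-frequency coefficients.
block = np.zeros((8, 8), dtype=np.int32)
block[0, 0], block[0, 1], block[1, 0] = 52, -3, 2
shape, pairs = sparse_encode(block)
assert np.array_equal(sparse_decode(shape, pairs), block)  # lossless
```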
Band Clustering for the Lossless Compression of AVIRIS Hyperspectral Images (IDES Editor)
Hyperspectral images can be efficiently compressed through a linear predictive model, such as the one used in the SLSQ algorithm. In this paper we exploit this predictive model on AVIRIS images by identifying, through an off-line approach, a common subset of bands that are not spectrally related to any other bands. These bands are not useful as prediction references for the SLSQ 3-D predictive model, so they must be encoded via other prediction strategies that consider only spatial correlation. We obtained this subset by clustering the AVIRIS bands via the clustering-by-compression approach. The main result of this paper is the list of such bands, unrelated to the others, for AVIRIS images. The clustering trees obtained for AVIRIS, and the relationships among bands they depict, are also an interesting starting point for future research.
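Clustering by compression is usually driven by the normalized compression distance (NCD); the paper does not name its compressor, so the zlib-based Python sketch below is an assumption:

```python
import os, zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: near 0 for related data, near 1 for unrelated."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Bands whose NCD stays high against every other band would be the candidates
# that fall back to a purely spatial predictor.
a, b = os.urandom(4096), os.urandom(4096)
print(round(ncd(a, a), 2), round(ncd(a, b), 2))  # roughly 0 vs roughly 1
```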
An Efficient Block Matching Algorithm Using Logical Image (IJERA Editor)
Motion estimation, which has been widely used in various image sequence coding schemes, plays a key role in the transmission and storage of video signals at reduced bit rates. There are two classes of motion estimation methods: block matching algorithms (BMA) and pel-recursive algorithms (PRA). Due to their implementation simplicity, block matching algorithms have been widely adopted by video coding standards such as CCITT H.261, ITU-T H.263, and MPEG. In BMA, the current image frame is partitioned into fixed-size rectangular blocks, and the motion vector for each block is estimated by finding the best-matching block of pixels within a search window in the previous frame according to a matching criterion. The goal of this work is to find a fast method for motion estimation and motion segmentation using the proposed model. Communication is now facilitated by developments in wired and wireless networks, and it is a challenge to transmit large data files over limited-bandwidth channels. Block matching algorithms are very useful in achieving efficient and acceptable compression, and the block matching algorithm determines the total computation cost and the effective bit budget. Different approaches can be followed to obtain motion estimates efficiently, but the above constraints should be kept in mind. This paper presents a novel method for block-based motion estimation using the three-step and diamond search algorithms with a modified search pattern based on a logical image. It has been found that the proposed algorithm improves PSNR while achieving better (faster) computation time compared with the original three-step search (3SS/TSS) method. Experimental results on a number of video sequences are presented to demonstrate the advantages of the proposed motion estimation technique.
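For reference, a minimal three-step search in Python (grayscale NumPy frames, SAD as the matching criterion; the logical-image modification of the paper is not reproduced here):

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def three_step_search(cur, ref, by, bx, n=16, step=4):
    """Return the (dy, dx) motion vector of the n x n block of `cur` at (by, bx)."""
    block = cur[by:by + n, bx:bx + n]
    my = mx = 0
    while step >= 1:
        best_cost, best_my, best_mx = None, my, mx
        for dy in (-step, 0, step):        # examine 9 points around the current best
            for dx in (-step, 0, step):
                y, x = by + my + dy, bx + mx + dx
                if 0 <= y <= ref.shape[0] - n and 0 <= x <= ref.shape[1] - n:
                    cost = sad(block, ref[y:y + n, x:x + n])
                    if best_cost is None or cost < best_cost:
                        best_cost, best_my, best_mx = cost, y - by, x - bx
        my, mx = best_my, best_mx
        step //= 2                         # 4 -> 2 -> 1: the three steps
    return my, mx

# Smooth test image so the SAD surface is well behaved for the heuristic.
yy, xx = np.mgrid[0:64, 0:64]
ref = (255 * np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / 200)).astype(np.uint8)
cur = np.roll(ref, (2, 3), axis=(0, 1))     # content shifts down 2, right 3
print(three_step_search(cur, ref, 24, 24))  # -> (-2, -3): match found back up-left in ref
```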
A Wavelet-Based Object Watermarking System for MPEG4 Video (CSCJournals)
Efficient storage, transmission, and use of video information are key requirements in many multimedia applications currently being addressed by MPEG-4. To fulfill these requirements, a new approach for representing video information, relying on an object-based representation, has been adopted; object-based watermarking schemes are therefore needed for copyright protection. This paper presents a novel object-based watermarking solution for MPEG-4 video authentication using the shape-adaptive discrete wavelet transform (SA-DWT). In order to make the watermark robust and transparent, it is embedded in the average of wavelet blocks using a visual model based on the human visual system, and the n least significant bits (LSBs) of the wavelet coefficients are adjusted in concert with the average. Simulation results show that the proposed watermarking scheme is perceptually invisible and robust against many attacks, such as lossy compression (e.g., MPEG-1, MPEG-2, MPEG-4, H.264).
Repeat-Frame Selection Algorithm for Frame Rate Video Transcoding (CSCJournals)
To realize frame rate transcoding, the forward frame repeat mechanism is usually adopted to compensate for the skipped frames in the end user's video decoder. However, based on our observation, it is not always suitable to repeat all skipped frames only in the forward direction; sometimes backward repeat achieves better results. To deal with this issue, we propose a new reference-frame selection method to determine the repeat direction for skipped Predictive (P) and Bidirectional (B) frames. For a P-frame, the non-zero transformed coefficients and the magnitude of the motion vectors are taken into consideration to determine the use of forward or backward repeat. For a B-frame, the magnitude of the motion vectors and the corresponding reference directions of the blocks in the B-frame are selected as the decision criteria. Experimental results show that the proposed method provides 1.34 dB and 1.31 dB PSNR improvement on average for P and B frames, respectively, compared with forward frame repeat.
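A hedged sketch of the kind of decision rule described for skipped P-frames (the thresholds and feature names are illustrative assumptions, not the authors' values):

```python
def repeat_direction_p(nonzero_coeffs: int, mv_magnitude: float,
                       coeff_thresh: int = 64, mv_thresh: float = 4.0) -> str:
    """Choose which decoded neighbor replaces a skipped P-frame.

    Few residual coefficients and small motion suggest the frame closely
    follows its forward reference; otherwise the backward neighbor may
    approximate it better.
    """
    if nonzero_coeffs <= coeff_thresh and mv_magnitude <= mv_thresh:
        return "forward"
    return "backward"

print(repeat_direction_p(nonzero_coeffs=10, mv_magnitude=1.5))   # forward
print(repeat_direction_p(nonzero_coeffs=500, mv_magnitude=9.0))  # backward
```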
The Fourier Transform for Satellite Image Compression (csandit)
The need to transmit or store satellite images is growing rapidly with the development of modern communications and new imaging systems. The goal of compression is to facilitate the storage and transmission of large images on the ground with high compression ratios and minimum distortion. In this work, we present a new coding scheme for satellite images. First, the image is loaded and a fast Fourier transform (FFT) is applied. The result of the FFT then undergoes scalar quantization (SQ), and the quantized output is encoded using entropy coding. This approach has been tested on a satellite image and on the Lena picture. After decompression, the images were reconstructed faithfully, and the memory space required for storage was reduced by more than 80%.
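A minimal NumPy sketch of the FFT-plus-scalar-quantization stage (the quantization step size is an illustrative assumption, and the entropy-coding stage is omitted):

```python
import numpy as np

def fft_sq_encode(img: np.ndarray, step: float = 50.0):
    """Transform an image with a 2-D FFT and uniformly quantize the coefficients."""
    coeffs = np.fft.fft2(img.astype(np.float64))
    q_re = np.round(coeffs.real / step).astype(np.int32)
    q_im = np.round(coeffs.imag / step).astype(np.int32)
    return q_re, q_im  # these sparse integer planes would feed the entropy coder

def fft_sq_decode(q_re, q_im, step: float = 50.0):
    coeffs = q_re * step + 1j * (q_im * step)
    return np.fft.ifft2(coeffs).real

img = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(np.float64)
rec = fft_sq_decode(*fft_sq_encode(img))
print("max abs error:", float(np.abs(img - rec).max()))  # small, lossy distortion
```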
Comparative Analysis of Lossless Image Compression Based On Row By Row Classi... (IJERA Editor)
Lossless image compression is needed in many fields, such as medical imaging, telemetry, geophysics, remote sensing, and other applications that require an exact replica of the original image and in which loss of information is not tolerable. In this paper, a near-lossless image compression algorithm based on a row-by-row classifier, with encoding schemes such as Lempel-Ziv-Welch (LZW), Huffman, and run-length encoding (RLE), is proposed for color images. The algorithm divides the image into three parts (R, G, and B), applies row-by-row classification on each part, and records the result of this classification in a mask image. After classification, the image data is decomposed into two sequences each for R, G, and B, and the mask image is hidden in them. These sequences are encoded using the different encoding schemes (LZW, Huffman, and RLE). An exhaustive comparative analysis is performed to evaluate these techniques, which reveals that the pro…
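Of the three coders compared above, run-length encoding is the simplest; a minimal Python sketch:

```python
def rle_encode(data: bytes):
    """Encode a byte string as (value, run_length) pairs."""
    runs = []
    for b in data:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1
        else:
            runs.append([b, 1])
    return runs

def rle_decode(runs):
    return bytes(b for b, n in runs for _ in range(n))

data = b"\x00" * 20 + b"\x07\x07\x01"
runs = rle_encode(data)
assert rle_decode(runs) == data
print(runs)  # [[0, 20], [7, 2], [1, 1]]
```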
DIGITAL VIDEO HOLOGRAPHY: A ROBUST TOOL FOR COMMUNICATION (cscpconf)
Holograms have been produced using optical methods for decades, and many techniques and methods exist for producing efficient holograms. Digital holography (DH) is the method of simulating holograms with the use of a computer. In this paper, digital holograms are generated using the Fresnel and Fraunhofer diffraction integrals. Multi-color holograms are simulated, and the digitally generated holograms are analysed. The DH technique is extended further to video format, which yields video holograms. The fact that every bit of a hologram contains full information about the original video is effectively utilized to reduce the file size required for communication in terms of storage, security, and speed. The entire process is simulated in the Matlab 7.10 environment.
Image compression using embedded zero tree wavelets (sipij)
Compressing an image is significantly different from compressing raw binary data, so general-purpose compression algorithms are not well suited to images. Wavelet transforms are used in image compression methods to provide high compression rates while maintaining good image quality. The discrete wavelet transform (DWT) is one of the most common methods used in signal and image compression; it is very powerful compared to other transforms because of its ability to represent any type of signal in both the time and frequency domains simultaneously. In this paper, we examine the use of a wavelet-based image compression algorithm, the Embedded Zerotree Wavelet (EZW). Because EZW is based on progressive encoding, it produces a bit stream of increasing accuracy as an image is compressed. All the numerical results were produced using Matlab code, and the numerical analysis of the algorithm is carried out by measuring the peak signal-to-noise ratio (PSNR) and compression ratio (CR) for the standard Lena image. Experimental results show that the method is fast, robust, and efficient enough to be applied to both still and complex images with significant image compression.
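The two figures of merit used above are straightforward to compute; a small Python sketch for 8-bit images:

```python
import numpy as np

def psnr(original: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def compression_ratio(original_bytes: int, compressed_bytes: int) -> float:
    """CR = uncompressed size / compressed size."""
    return original_bytes / compressed_bytes

orig = np.random.default_rng(1).integers(0, 256, (64, 64), dtype=np.uint8)
noisy = np.clip(orig.astype(int) + 3, 0, 255).astype(np.uint8)
print(round(psnr(orig, noisy), 2), "dB;", compression_ratio(64 * 64, 1024), ": 1")
```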
A Framework for Soccer Video Processing and Analysis Based on Enhanced Algorit... (CSCJournals)
Video content retrieval and semantics research attract a very large number of researchers in the video processing and analysis domain. Researchers try to propose structures or frameworks for extracting the content of a video that integrate many algorithms using low- and high-level features. A framework is efficient if it is very simple and captures generic behavior. In this paper we present a framework for automatic soccer video summary and highlight extraction using audio/video features and an enhanced generic algorithm for dominant color extraction.
IMAGE AUTHENTICATION THROUGH Z-TRANSFORM WITH LOW ENERGY AND BANDWIDTH (IAZT) (IJNSA Journal)
In this paper, a Z-transform based image authentication technique, termed IAZT, is proposed to authenticate gray-scale images. The technique uses energy-efficient, low-bandwidth invisible data embedding with minimal computational complexity. About half the bandwidth is required compared to the traditional Z-transform when transmitting multimedia content, such as images with an authenticating message, through a network. This authentication technique may be used for copyright protection or ownership verification. Experimental results are computed and compared with existing authentication techniques such as Li's method [11], SCDFT [13], and the region-based method [14], among others, based on mean square error (MSE), peak signal-to-noise ratio (PSNR), image fidelity (IF), the universal image quality index (UQI), and the structural similarity index measure (SSIM), and they show the better performance of IAZT.
Image similarity using symbolic representation and its variations (sipij)
This paper proposes a new method for image/object retrieval. A pre-processing technique is applied to describe the object, in a one-dimensional representation, as a pseudo time series. The proposed algorithm develops modified versions of the SAX representation: it applies an approach called Extended SAX (ESAX) in order to efficiently and accurately discover the important patterns necessary for retrieving the most plausible similar objects. Our approach depends upon a table containing the breakpoints that divide a Gaussian distribution into an arbitrary number of equiprobable regions; each breakpoint has more than one cardinality. A distance measure is used to decide the most plausible matching between strings of symbolic words. The experimental results show that our approach improves detection accuracy.
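A minimal SAX-style symbolization in Python (breakpoints taken from standard-normal quantiles; the alphabet size and word length are illustrative assumptions, and the ESAX extensions are not shown):

```python
import numpy as np
from scipy.stats import norm

def sax_symbolize(series: np.ndarray, alphabet_size: int = 4, word_len: int = 8) -> str:
    """Z-normalize, piecewise-aggregate, then map segments to letters via
    breakpoints that split N(0, 1) into equiprobable regions."""
    z = (series - series.mean()) / (series.std() + 1e-12)
    segments = np.array_split(z, word_len)
    paa = np.array([seg.mean() for seg in segments])
    breakpoints = norm.ppf(np.linspace(0, 1, alphabet_size + 1)[1:-1])
    return "".join(chr(ord("a") + int(np.searchsorted(breakpoints, v))) for v in paa)

t = np.linspace(0, 2 * np.pi, 128)
print(sax_symbolize(np.sin(t)))  # an 8-letter word over the alphabet {a, b, c, d}
```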
Website security is a critical issue that needs to be considered on the web in order to run an online business healthily and smoothly. It is a very difficult situation when the security of a website is compromised by a brute-force or other kind of attack: such an attack not only consumes all available resources but creates heavy log dumps on the server, which can cause the website to stop working.
Recent studies have suggested backup and recovery modules that should be installed into a website and that can take timely backups of the website to third-party servers outside the attacker's reach. These studies also suggested different types of recovery methods, such as incremental backups, decremental backups, differential backups, and remote backup.
Moreover, these studies suggested that rsync be used to reduce the transferred data efficiently. The experimental results show that a remote backup and recovery system can work fast and can meet the requirements of website protection. An automatic backup and recovery system for a website not only plays an important role in the web defence system but is also the last line of disaster recovery.
This paper suggests different kinds of approaches that can be incorporated into the WordPress CMS to make it healthy, secure, and prepared for web attacks. The paper surveys various possible attacks on a CMS along with possible solutions and preventive mechanisms.
Some of the proposed security measures:
1. Secret login screen
2. Blocking bad bots
3. Changing database table prefixes
4. Protecting configuration files
5. Two-factor authentication
6. Flight mode in web servers
7. Protecting the .htaccess file itself
8. Detecting vulnerabilities
9. Checking for unauthorized access made to the system
However, this must be done by balancing the trade-off between website security and the backup and recovery modules of a website, as measures taken to secure a web page should not affect the user's experience or the recovery modules.
Steganography using Interpolation and LSB with Cryptography on Video Images - A... (Editor IJCATR)
Steganography is a common term in the IT industry which specifically means "covered writing" and is derived from the Greek language. Steganography is defined as the art and science of invisible communication, i.e., it hides the existence of communication between the sender and the receiver. In contrast to cryptography, where the opponent is permitted to detect, intercept, and alter messages without being able to breach the definite security guarantees of the cryptosystem, the prime objective of steganography is to conceal messages inside other risk-free messages in a manner that does not allow any adversary even to sense that a second message is present. Nowadays, it is an emerging area used for secure data transmission over public media such as the internet. In this research, a novel approach to image steganography based on LSB (least significant bit) insertion and cryptography for lossless JPEG images is proposed. The paper comprises an application that ranks images in a user's library on the basis of their appropriateness as cover objects for given data; because the data is matched to an image, there is less possibility of an invader being able to employ steganalysis to recover it. Furthermore, the application first encrypts the data by means of cryptography, and the message bits to be hidden are embedded into the image using the least significant bit insertion technique. Moreover, interpolation is used to increase the density…
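A minimal LSB-insertion sketch in Python for a grayscale NumPy image (the encryption and interpolation steps the paper adds on top are omitted):

```python
import numpy as np

def lsb_embed(cover: np.ndarray, bits: list[int]) -> np.ndarray:
    """Write one payload bit into the least significant bit of each pixel."""
    stego = cover.copy().ravel()
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit
    return stego.reshape(cover.shape)

def lsb_extract(stego: np.ndarray, n_bits: int) -> list[int]:
    return [int(p & 1) for p in stego.ravel()[:n_bits]]

cover = np.random.default_rng(2).integers(0, 256, (8, 8), dtype=np.uint8)
payload = [1, 0, 1, 1, 0, 0, 1, 0]
stego = lsb_embed(cover, payload)
assert lsb_extract(stego, len(payload)) == payload
print("max pixel change:", int(np.abs(stego.astype(int) - cover.astype(int)).max()))  # at most 1
```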
Software test-case generation is the process of identifying a set of test cases, and it is necessary to generate test sequences that satisfy the testing criteria. Much research has been done in the past on this difficult problem. The length of the test sequence plays an important role in software testing, as it determines whether sufficient testing has been carried out. Many existing test-sequence generation techniques use a genetic algorithm for test-case generation. The genetic algorithm (GA) is a heuristic optimization technique implemented through evolution and a fitness function; it generates new test cases from the existing test sequence. To improve on existing techniques, a new technique is proposed in this paper that combines the tabu search algorithm with the genetic algorithm. The hybrid technique combines the strengths of the two meta-heuristic methods and produces efficient test-case sequences.
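A compact, hedged sketch of such a GA-tabu hybrid in Python (the toy fitness rewards covering target branches, and the tabu list simply bars recently visited candidates; every detail here is illustrative, not the paper's design):

```python
import random

TARGETS = {3, 7, 11}  # toy stand-in for branches a test case must cover

def fitness(candidate):
    """Count how many target branches a candidate input 'covers' (toy model)."""
    return len(TARGETS & set(candidate))

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(c, p=0.2):
    return tuple(random.randrange(16) if random.random() < p else g for g in c)

def ga_tabu(pop_size=20, length=5, generations=50, tabu_size=30):
    random.seed(0)
    pop = [tuple(random.randrange(16) for _ in range(length)) for _ in range(pop_size)]
    tabu = []
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            child = mutate(crossover(*random.sample(parents, 2)))
            if child not in tabu:          # tabu step: skip revisited candidates
                children.append(child)
                tabu.append(child)
                tabu[:] = tabu[-tabu_size:]
        pop = parents + children
    return max(pop, key=fitness)

best = ga_tabu()
print(best, "covers", fitness(best), "of", len(TARGETS), "targets")
```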
How is Mortgage Lending Influencing the Economic Growth in Albania (Editor IJCATR)
Banks in Albania have been playing an important role in providing credit to households, especially to the housing sector, and have thereby contributed to the aggregate demand in that sector. Albanian banks also extended various types of loans to individuals and corporations before the financial crisis. After the crisis, banks became more restrictive due to the increase in non-performing loans as well as macroeconomic volatility, which is higher in emerging economies like Albania. This study focuses on the influence of mortgage loans and non-performing loans on the country's economic growth during the interval 2008-2013, which corresponds to the post-crisis period.
Multimedia video transmission over wireless local area networks is expected to be an important component of many emerging multimedia applications. However, wireless networks will always be bandwidth-limited compared to fixed networks due to background noise, limited frequency spectrum, and varying degrees of network coverage and signal strength. One of the critical issues for multimedia applications is to ensure that the quality of service (QoS) requirement is maintained at an acceptable level. Modern mobile devices are equipped with multiple network interfaces, including 3G/LTE and WiFi. Bandwidth aggregation over LTE and WiFi links offers an attractive opportunity to support bandwidth-intensive services, such as high-quality video streaming, on mobile devices. Achieving effective bandwidth aggregation in wireless environments raises several challenges related to deployment, link heterogeneity, network congestion, network fluctuation, and energy consumption. In this work, an overview of schemes for video transmission over wireless networks is presented in which an acceptable quality of service for video applications requiring real-time video transmission is achieved.
Classification and Searching in Java API Reference Documentation (Editor IJCATR)
An application program interface (API) allows programmers to use predefined functions instead of writing them from scratch. Descriptions of API elements, that is, methods, classes, constructors, etc., are provided through API reference documentation, which thus acts as a guide for users and developers of the API. Different knowledge types are generated by processing this API reference documentation, and these knowledge types are then used for the classification and searching of Java API reference documentation.
A Comparative Analysis of Slicing for Structured Programs (Editor IJCATR)
Program slicing is a method for automatically decomposing programs by analyzing their data flow and control flow. Slicing reduces the program to a minimal form, called a "slice", which still produces the behavior of interest: a program slice singles out all statements that may have affected the value of a given variable at a specific program point. Slicing is useful in program debugging, program maintenance, and other applications that involve understanding program behavior. In this paper we discuss static and dynamic slicing and compare them through a number of examples.
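A tiny worked example of the idea (illustrative, not taken from the paper): slicing the program below with respect to the variable total at its final use keeps only the statements that can affect it:

```python
# Original program
n = 5
total = 0
product = 1
for i in range(1, n + 1):
    total += i        # affects total
    product *= i      # does not affect total

# Static slice w.r.t. `total` at the end: the `product` statements drop out.
n = 5
total = 0
for i in range(1, n + 1):
    total += i
print(total)  # 15
```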
Parameter Estimation of Software Reliability Growth Models Using Simulated An... (Editor IJCATR)
The parameter estimation of the Goel-Okumoto model is performed using simulated annealing. The Goel-Okumoto model is based on the exponential model and is a simple non-homogeneous Poisson process (NHPP) model. Simulated annealing is a heuristic optimization technique that provides a means to escape local optima, and the data set is optimized using this technique. SA is a stochastic algorithm with better performance than the genetic algorithm (GA); it depends on the specification of the neighbourhood structure of a state space and on the parameter settings of its cooling schedule.
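A hedged sketch of simulated annealing fitting the Goel-Okumoto mean value function m(t) = a(1 - e^(-bt)) to failure data by least squares (the synthetic data and all tuning constants are assumptions):

```python
import math, random

def m(t, a, b):
    """Goel-Okumoto mean value function: expected failures by time t."""
    return a * (1.0 - math.exp(-b * t))

def sse(params, data):
    a, b = params
    return sum((y - m(t, a, b)) ** 2 for t, y in data)

def anneal(data, start=(50.0, 0.1), temp=1000.0, cooling=0.995, steps=20000):
    random.seed(0)
    cur, cur_cost = start, sse(start, data)
    best, best_cost = cur, cur_cost
    for _ in range(steps):
        cand = (abs(cur[0] + random.gauss(0, 1.0)),
                abs(cur[1] + random.gauss(0, 0.01)))
        cost = sse(cand, data)
        # Accept better moves always; worse moves with Boltzmann probability,
        # which is what lets the search escape local optima.
        if cost < cur_cost or random.random() < math.exp((cur_cost - cost) / temp):
            cur, cur_cost = cand, cost
            if cost < best_cost:
                best, best_cost = cand, cost
        temp *= cooling
    return best

# Synthetic cumulative-failure data from a = 100, b = 0.05 with small noise.
data = [(t, m(t, 100, 0.05) + random.gauss(0, 1)) for t in range(1, 61)]
print(anneal(data))  # typically lands near (100, 0.05)
```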
Spam filtering by using Genetic based Feature Selection (Editor IJCATR)
Spam is defined as redundant and unwanted electronic mail, and nowadays it creates many problems in business life, such as occupying network bandwidth and space in users' mailboxes. Due to these problems, much research has been carried out in this area using classification techniques. Recent research shows that feature selection can have a positive effect on the efficiency of machine learning algorithms, since most algorithms try to build a data model that depends on the reliable detection of a small set of features, and unrelated features in the modelling process result in weak estimation and additional computation. In this research we evaluate spam detection in legitimate electronic mail, and its effect on several machine learning algorithms, by presenting a feature selection method based on a genetic algorithm. Bayesian network and KNN classifiers are used in the classification phase, and the Spambase dataset is used.
A Review on a web based Punjabi to English Machine Transliteration System (Editor IJCATR)
The paper presents the transliteration of noun phrases from Punjabi to English using a statistical machine translation approach. Transliteration maps the letters of a source script to the letters of another language. Forward transliteration converts an original word or phrase in the source language into a word in the target language; backward transliteration is the reverse process that converts the transliterated word or phrase back into its original word or phrase. Transliteration is an important part of research in NLP. Natural language processing (NLP) is the ability of a computer program to understand human speech as it is spoken, and it is an important component of AI; artificial intelligence is a branch of science which deals with helping machines find solutions to complex problems in a human-like fashion. The transliteration system is to be developed using SMT. Statistical machine translation (SMT) is a data-oriented statistical framework for translating text from one natural language to another based on the knowledge…
Designing of Semantic Nearest Neighbor Search: Survey (Editor IJCATR)
Conventional spatial queries, such as range search and nearest neighbor retrieval, involve only conditions on objects' geometric properties. Today, many modern applications call for novel forms of queries that aim to find objects satisfying both a spatial predicate and a predicate on their associated texts. For example, instead of considering all restaurants, a nearest neighbor query could instead ask for the restaurant that is the closest among those whose menus contain "steak", "spaghetti", and "brandy" all at the same time. Currently the best solution to such queries is based on the IR2-tree, which, as shown in this paper, has a few deficiencies that seriously impact its efficiency. Motivated by this, we develop a new access method called the spatial inverted index, which extends the conventional inverted index to cope with multidimensional data and comes with algorithms that can answer nearest neighbor queries with keywords in real time. As verified by experiments, the proposed techniques outperform the IR2-tree in query response time significantly, often by orders of magnitude.
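A toy Python sketch of the idea behind a spatial inverted index (intersect the posting lists of the query keywords, then rank the surviving points by distance; the data and structure are illustrative, not the paper's access method):

```python
from collections import defaultdict
from math import dist

# Toy dataset: id -> (location, keyword set)
objects = {
    1: ((0.0, 0.0), {"steak", "brandy"}),
    2: ((1.0, 1.0), {"steak", "spaghetti", "brandy"}),
    3: ((0.2, 0.1), {"pizza"}),
    4: ((5.0, 5.0), {"steak", "spaghetti", "brandy"}),
}

# Inverted index: keyword -> set of object ids.
index = defaultdict(set)
for oid, (_, words) in objects.items():
    for w in words:
        index[w].add(oid)

def nn_with_keywords(query_pt, keywords):
    """Nearest object whose text contains all query keywords."""
    candidates = set.intersection(*(index[w] for w in keywords))
    return min(candidates, key=lambda oid: dist(query_pt, objects[oid][0]), default=None)

print(nn_with_keywords((0.0, 0.0), {"steak", "spaghetti", "brandy"}))  # -> 2
```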
A Literature Review on Synthesis and Characterization of enamelled copper wi... (Editor IJCATR)
This paper surveys various magazines, conference papers, and journals in order to understand the properties of enamelled copper wires mixed with nano fillers, as well as the fundamental methods for the synthesis and characterization of carbon nanotubes. From all these papers, it was noted that research carried out on enamelled copper wires filled with nano fillers has shown better results. It was also recorded that the research was carried out mostly with single-metal catalysts, and that very little research has been done on the synthesis of carbon nanotubes using bimetallic catalysts.
Dielectric and Thermal Characterization of Insulating Organic Varnish Used in... (Editor IJCATR)
In recent years, a lot of attention has been drawn to polymer nanocomposites for use in electrical applications due to the encouraging results obtained for their dielectric properties. Polymer nanocomposites are commonly defined as a combination of a polymer matrix and additives that have at least one dimension on the nanometer scale. Carbon nanotubes are of special interest as a possible organic component in such a composite coating: the carbon atoms are arranged in a hexagonal network rolled up to form a seamless cylinder that measures several nanometers across but can be thousands of nanometers long. There are many different types, but the two main categories are single-walled nanotubes (SWNTs) and multi-walled nanotubes (MWNTs), which are made from multiple layers of graphite. Carbon nanotubes are an example of a nanostructure varying in size from 1-100 nanometers (the scale of atoms and molecules), and nanocomposites are one of the fastest growing fields in nanotechnology. An extensive literature survey has been done on nanocomposites and on the synthesis and preparation of nano fillers. The following objectives were set based on the literature survey and an understanding of the technology:
Complete study of organic varnish and CNT
Chemical properties
Electrical properties
Thermal properties
Mechanical properties
Synthesis and characterization of carbon nanotubes
Preparation of polymer nanocomposites
Study of the characteristics of the nanocomposite insulation
Dimensioning an insulation system requires exact knowledge of the type, magnitude, and duration of the electric stress while simultaneously considering the ambient conditions. On the other hand, the properties of the insulating materials in question must also be known, so that in addition to the proper material, the optimum, e.g. the most economical, design of the insulation system can be chosen.
Design of STT-RAM cell in 45nm hybrid CMOS/MTJ process (Editor IJCATR)
This paper evaluates the performance of spin-torque transfer random access memory (STT-RAM) basic memory cell configurations in a 45nm hybrid CMOS/MTJ process. The switching speed and the current drawn by the cells have been calculated and compared. The cell design was done using Cadence tools, and the results obtained show good agreement with theoretical results.
A VIDEO COMPRESSION TECHNIQUE UTILIZING SPATIO-TEMPORAL LOWER COEFFICIENTS (IAEME Publication)
With the advancement of communication in recent times, video compression plays an important role in the transmission of information on social networks and in storage with limited memory capacity. Inadequate transmission bandwidth and low quality also make video compression a serious concern in the field of communication. There is a need to improve the video compression process so that it can encode video data with low computational complexity and better quality while maintaining speed. In this work, a new technique is developed based on block processing that utilizes the lower coefficients between frames.
In the current scenario, compression of video files is in high demand, and color video compression has become a significant technology for reducing memory space and transmission time. Video compression using the fractal technique is based on the self-similarity concept, comparing range blocks and domain blocks; however, its computational complexity is very high. In this paper we present a hybrid video compression technique to compress Audio/Video Interleaved (AVI) files and overcome the problem of computational complexity. We implemented the discrete wavelet transform and a hybrid fractal HV-partition technique using particle swarm optimization (PSO-based mapping) for the compression of videos. The analysis demonstrates that the hybrid technique achieves a very good speed-up in compressing video while attaining a good peak signal-to-noise ratio.
High Performance Architecture for Full Search Block Matching Algorithm (iosrjce)
Video compression has two major issues to be handled: compression rate and quality, and there is always a trade-off between speed and quality. The full search block matching algorithm (FSBMA) is the most popular motion estimation algorithm, but its high computational complexity is its major challenge; this makes FSBM very difficult to use for real-time video processing on low-power battery-operated devices, while other algorithms achieve better speed at the expense of video quality. The proposed algorithm, a modified full search block matching algorithm (MFSBMA), reduces the computational complexity while keeping the PSNR the same as that of FSBMA. MFSBMA skips the SAD calculations for current background macroblocks and performs SAD calculations only for current foreground macroblocks, which reduces the SAD calculations drastically. This work presents a pipelined architecture for MFSBMA which can work on real-time HDTV video processing. The proposed algorithm reduces computational complexity by 50% while keeping the PSNR the same as the full search algorithm.
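A hedged sketch of the skipping idea (the background test used here, comparing each block against its co-located reference block, is an illustrative assumption rather than the paper's classifier):

```python
import numpy as np

def sad(a, b):
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def motion_vectors_skipping(cur, ref, n=16, bg_thresh=100, search=4):
    """Full search per block, but blocks classified as background get (0, 0)
    without any search, which is where the SAD savings come from."""
    h, w = cur.shape
    vectors = {}
    for by in range(0, h - n + 1, n):
        for bx in range(0, w - n + 1, n):
            block = cur[by:by + n, bx:bx + n]
            if sad(block, ref[by:by + n, bx:bx + n]) < bg_thresh:
                vectors[(by, bx)] = (0, 0)  # background: skip the search
                continue
            best = (None, (0, 0))
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - n and 0 <= x <= w - n:
                        c = sad(block, ref[y:y + n, x:x + n])
                        if best[0] is None or c < best[0]:
                            best = (c, (dy, dx))
            vectors[(by, bx)] = best[1]
    return vectors
```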
Fast Motion Estimation for Quad-Tree Based Video Coder Using Normalized Cross... (CSCJournals)
Motion estimation is the most challenging and time-consuming stage in a block-based video codec. To reduce the computation time, many fast motion estimation algorithms have been proposed and implemented. This paper proposes a quad-tree based normalized cross correlation (NCC) measure for obtaining estimates of inter-frame motion. The measure operates in the frequency domain, using the FFT algorithm, as the similarity measure for an exhaustive full search in the region of interest. NCC is a more suitable similarity measure than the sum of absolute differences (SAD) for reducing the temporal redundancy in video compression, since a flatter residual can be attained after motion compensation. The degrees of homogeneity and stationarity of regions are determined by selecting a suitable initial fixed threshold for block partitioning. Experimental results for the proposed method show that the actual number of motion vectors is significantly lower than in existing methods, with marginal effect on the quality of the reconstructed frame. It also gives a higher speed-up ratio for both fixed-block and quad-tree based motion estimation methods.
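A minimal sketch of FFT-based correlation matching (a simplified zero-mean cross-correlation over a search region, not the paper's exact quad-tree NCC formulation):

```python
import numpy as np
from scipy.signal import fftconvolve

def best_match(region: np.ndarray, block: np.ndarray):
    """Locate `block` inside `region` by zero-mean cross-correlation via FFT."""
    b = block - block.mean()
    # Correlation = convolution with the template flipped in both axes.
    corr = fftconvolve(region - region.mean(), b[::-1, ::-1], mode="valid")
    y, x = np.unravel_index(np.argmax(corr), corr.shape)
    return int(y), int(x)

rng = np.random.default_rng(3)
region = rng.normal(size=(48, 48))
block = region[10:26, 20:36].copy()   # known true offset (10, 20)
print(best_match(region, block))      # -> (10, 20)
```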
Motion Compensation With Prediction Error Using EZW Wavelet Coefficients (IJERA Editor)
Video compression techniques aim to represent a video with minimal distortion. Among the compression techniques of image processing, the DWT is especially significant because of its multi-resolution properties, whereas the DCT used in video coding often produces undesirable artifacts. The main objective of video coding is to reduce spatial and temporal redundancies. In this proposed work, a new encoder is designed by exploiting the multi-resolution properties of the DWT to obtain the prediction error, using a motion estimation technique to cope with the transform's lack of translation invariance.
Video Shot Boundary Detection Using The Scale Invariant Feature Transform and... (IJECEIAES)
Segmenting a video sequence by detecting shot changes is essential for video analysis, indexing, and retrieval. In this context, a shot boundary detection algorithm based on the scale invariant feature transform (SIFT) is proposed in this paper. The first step of our method consists of a top-down search scheme that detects the locations of transitions by comparing the ratio of matched features, extracted via SIFT, for every RGB channel of the video frames; this overview step provides the locations of the boundaries. Secondly, a moving average calculation is performed to determine the type of transition. The proposed method can detect gradual transitions and abrupt changes without requiring any training on the video content in advance. Experiments conducted on a multi-type video database show that the algorithm performs well.
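A minimal sketch of a SIFT match-ratio score between consecutive frames using OpenCV (the paper matches per RGB channel, while this sketch uses grayscale for brevity; the 0.75 Lowe ratio and the boundary threshold are conventional assumptions, not the paper's values):

```python
import cv2

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher()

def match_ratio(frame_a, frame_b) -> float:
    """Fraction of frame_a's SIFT keypoints with a good match in frame_b."""
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    kp_a, des_a = sift.detectAndCompute(gray_a, None)
    kp_b, des_b = sift.detectAndCompute(gray_b, None)
    if des_a is None or des_b is None or len(kp_a) < 2 or len(kp_b) < 2:
        return 0.0
    good = sum(1 for m in matcher.knnMatch(des_a, des_b, k=2)
               if len(m) == 2 and m[0].distance < 0.75 * m[1].distance)
    return good / len(kp_a)

# Frames would come from cv2.VideoCapture; a ratio dropping below a threshold
# (e.g. 0.2) between consecutive frames would flag a candidate shot boundary.
```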
Shot Boundary Detection In Videos Sequences Using Motion Activities (CSCJournals)
Video segmentation is fundamental to a number of applications related to video retrieval and analysis. To realize content-based video retrieval, the video information should be organized to elaborate the structure of the video, and segmenting the video into shots is an important step. This paper presents a new method of shot boundary detection based on motion activities in the video sequence. The proposed algorithm was tested on various video types, and the experimental results show that it is effective and reliably detects shot boundaries.
A Survey on Block Matching Algorithms for Video Coding (IJECEIAES)
The block matching algorithm (BMA) for motion estimation (ME) is at the heart of many motion-compensated video coding techniques and standards, such as ISO MPEG-1/2/4 and ITU-T H.261/262/263/264/265, where it reduces the temporal redundancy between frames. During the last three decades, hundreds of fast block matching algorithms have been proposed, and the shape and size of the search patterns used in motion estimation strongly influence search speed and prediction quality. This article provides an overview of the well-known block matching algorithms and compares their computational complexity and motion prediction quality.
Motion Estimation Technique for Real Time Compressed Video Transmission (IJERA Editor)
Motion estimation (ME) is one of the most critical modules in a typical digital video encoder, and many implementation trade-offs must be considered when designing such a module. ME can be defined as part of the inter-coding technique. Inter coding refers to a mechanism of finding the correlation between two frames (still images) that are not far from each other in order of occurrence, one called the reference frame and the other the current frame, and then encoding information that is a function of this correlation instead of the frame itself. This paper focuses on block matching algorithms, which fall under feature/region matching. Block motion estimation algorithms are widely adopted by video coding standards, mainly due to their simplicity and good distortion performance. In a full search (FS), every candidate point is evaluated, so more time is taken to predict suitable motion vectors; based on this drawback, the adaptive pattern described above is proposed.
Optimization is proposed at the algorithm/code level for both encoder and decoder to make it feasible to perform real-time H.264/AVC video encoding/decoding on mobile devices for mobile multimedia applications. For the encoder, an improved motion estimation algorithm based on hexagonal pattern search is proposed, exploiting the temporal redundancy of a video sequence. For the decoder, a memory-access minimization scheme is proposed at the code level and a fast interpolation scheme at the algorithm level.
Different Approach of VIDEO Compression Technique: A Study
S. S. Razali, Faculty of Electronic and Computer Engineering, Universiti Teknikal Malaysia Melaka (UTeM)
K. A. A. Aziz, Faculty of Engineering Technology, Universiti Teknikal Malaysia Melaka (UTeM)
N. M. Z. Hashim, Faculty of Electronic and Computer Engineering, Universiti Teknikal Malaysia Melaka (UTeM)
A. Salleh, Faculty of Electronic and Computer Engineering, Universiti Teknikal Malaysia Melaka (UTeM)
S. Z. Yahya, Faculty of Electronic and Computer Engineering, Universiti Teknikal Malaysia Melaka (UTeM)
N. R. Mohamad, Faculty of Electronic and Computer Engineering, Universiti Teknikal Malaysia Melaka (UTeM)
Abstract: The main objective of video compression is to achieve video compression with less possible losses to reduce the
transmission bandwidth and storage memory. This paper discusses different approach of video compression for better transmission of
video frames for multimedia application. Video compression methods such as frame difference approach, PCA based method,
accordion function, fuzzy concept, and EZW and FSBM were analyzed in this paper. Those methods were compared for performance,
speed and accuracy and which method produces better visual quality.
Keywords: Compression, Losses, Memory, Transmission Bandwidth, Video Compression.
1. INTRODUCTION
Moving video images, or simply video, are a main component of today's internet world. Video is integrated into applications, websites and almost everything that demands moving images. The need for fast, reliable and high quality video has taken video compression technology to the next level [1]-[5]. Video compression is a technique that reduces redundancy and thereby the number of bits needed to send or store the video. This paper summarizes the different techniques or methods used in video compression: the accordion function [6], EZW and FSBM [7], the frame difference approach [8], fuzzy concepts [9] and the PCA method [10].
2. METHODS OF VIDEO COMPRESSION
2.1 Method 1: Accordion Function
This method focuses on reducing the storage space required for the video signal [6]. Figure 1 shows the block diagram of the video compression process.
Figure 1: Block diagram of video compression
The compression technique involved in this process is the removal of spectral redundancy and of temporal redundancy. Reducing redundancy, and hence the number of bits needed to transmit or store the video data, is what is known as video compression.
2.1.1 Video compression process
2.1.1.1 Removal of spectral redundancy
The basic colors are RED, GREEN and BLUE; each is represented with 8 bits, so 24 bits in total are needed to represent any color by adding different proportions of the basic colors together. This means that many bits are needed to carry the color information of a given frame, which is known as spectral redundancy. The RED, GREEN and BLUE (RGB) values are therefore converted into the YCbCr format in order to reduce the number of bits. Y is the luminance component of the image, Cb the chroma for blue and Cr the chroma for red. Luminance (luma) represents the variation between white and black shades, while chroma represents the color information. Using equations (1) and (3) respectively, the color format of the different types of video frames can be converted into YCbCr format.
For PAL (Phase Alternating Line) video, Figures 2 to 5 show the result when a frame is converted into YCbCr format: Figure 3 presents the luminance values, Figure 4 the Cb values and Figure 5 the Cr values of the input frame. Starting from the input video, the different frames were extracted, assigned to a structure, and finally converted into YCbCr format.
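For concreteness, a minimal sketch of the RGB-to-YCbCr conversion is given below. The paper's equations (1)-(3) are not reproduced in the text, so the standard ITU-R BT.601 full-range coefficients are assumed; the function name and array layout are illustrative.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an HxWx3 uint8 RGB frame to YCbCr.

    The paper's exact equations are not given here, so the standard
    ITU-R BT.601 full-range coefficients are assumed.
    """
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =        0.299    * r + 0.587    * g + 0.114    * b  # luminance
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b  # blue chroma
    cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b  # red chroma
    return np.stack([y, cb, cr], axis=-1)
```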
Figure 2: Video Frame
Figure 3: Luminance component
Figure 4: Cb component
Figure 5: Cr component
The computation of the accordion function is demonstrated on four consecutive video frames, and these steps are repeated for all input video frames (Figure 10).
2.1.1.2 Removal of temporal redundancy
Temporal redundancy is frequently present across a set of frames, because only a small portion of each frame is involved in any motion.
Figure 6: Temporal redundancy
Referring to Figure 6, the yellow box contains motion while all other areas are constant; this constant part is the temporal redundancy. Accordion is a process that converts temporal redundancy into spatial redundancy. Since temporal redundancy is the non-changing portion of the data, placing successive frames adjacent to each other makes it possible to eliminate the constant data across successive frames. This is achieved by applying the accordion representation to groups of 4 successive frames.
Groups of four consecutive frames are taken and merged to create one image in Accordion. The input video is of size M × N × 500. Figure 7 shows its accordion representation.
The corresponding column pixels are placed adjacent to the columns of each consecutive frame. For the input video, the consecutive frames numbered 104 to 108 are shown in Figure 8 (a), Figure 8 (b), Figure 9 (a) and Figure 9 (b) respectively. Using shift and merge operations, the accordion function was computed on these frames, representing the temporal redundancy as a spatial representation (Figure 10).
Figure 7: Accordion
Figure 8: (a) to (b)
Figure 9: (a) to (b)
Figure 10
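To make the column-interleaving idea concrete, here is a minimal sketch of an accordion-style rearrangement, assuming grayscale frames and a simple left-to-right interleave; the paper's exact shift and merge rules are not specified, so this approximates the idea rather than the authors' implementation.

```python
import numpy as np

def accordion(frames):
    """Interleave the columns of a group of frames so that temporal
    redundancy appears as spatial redundancy.

    frames: ndarray of shape (T, H, W) -- T consecutive grayscale frames.
    Returns an (H, W*T) image whose column j*T + t is column j of frame t,
    so identical columns across frames end up side by side.
    """
    t, h, w = frames.shape
    out = np.empty((h, w * t), dtype=frames.dtype)
    for j in range(w):          # for every column position...
        for k in range(t):      # ...place that column from each frame
            out[:, j * t + k] = frames[k][:, j]
    return out
```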
2.2 Method 2: EZW and FSBM
The methods used in this paper are EZW and seven different types of block matching algorithm, used for motion estimation in video compression [7]. The seven algorithms are: Exhaustive Search (ES), Three Step Search (TSS), New Three Step Search (NTSS), Simple and Efficient Search (SES), Four Step Search (4SS), Diamond Search (DS) and Adaptive Rood Pattern Search (ARPS).
Figure 11: Block diagram for video compression process flow
EZW stands for Embedded Zerotree Wavelet algorithm. It is a simple and effective image compression technique that generates the bits in the bit stream in order of importance, yielding a fully embedded code. The embedded code represents a sequence of binary decisions that distinguish an image from the null image. Because of this, the encoder can stop encoding at any point, which allows a target bit rate to be met exactly. Furthermore, EZW consistently produces a fully embedded bit stream, and the technique requires no codebooks, training, or prior knowledge of the image being stored.
2.2.1 Block Matching Algorithms
In block matching, the current frame is divided into a number of macro blocks of fixed size, and one motion vector is produced for each macro block that has some motion activity. The motion vector encodes the location, in the previous frame, of the block that best matches the macro block of the current frame. Typically the macro block is a square of 16 × 16 pixels and the search area extends seven pixels on all four sides of the corresponding macro block in the previous frame. The matching of one macro block with another is decided by the output of a cost function.
Seven different types of block matching algorithm are described below (a minimal sketch of the exhaustive search follows the list):
i. Exhaustive Search (ES)
Also known as Full Search, this is the most computationally expensive algorithm of all. It evaluates the cost function at every location in the search window; as a result, it gives the highest Peak Signal-to-Noise Ratio (PSNR) and finds the best match that can be obtained. The disadvantage of ES is that the larger the search window, the more computations are needed.
ii. Three Step Search (TSS)
The earliest of the fast block matching algorithms, dating back to the mid 1980s. At each step, TSS evaluates a set of search points, picks the one that gives the lowest cost, and makes it the new search origin.
iii. New Three Step Search (NTSS)
To reduce the computational cost, TSS was improved into the New Three Step Search (NTSS). NTSS provides a halfway-stop condition and a center-biased searching scheme.
iv. Simple and Efficient Search (SES)
SES is another extension of TSS. It exploits the assumption that the error surface is unimodal, so there cannot be two minima in opposite directions; the fixed eight-point search pattern of TSS is therefore modified to save computation. The algorithm keeps the same three steps as TSS, but each step is split into two phases.
v. Four Step Search (4SS)
Not too different from NTSS, 4SS also has a center-biased search and a halfway-stop provision. In the first step, the pattern size of 4SS is fixed to S = 2 regardless of the value of the search parameter p.
vi. Diamond Search (DS)
DS is essentially the same as 4SS, but the square search point pattern is changed into a diamond pattern, and there is no limit on the number of steps, which lets the algorithm find the global minimum accurately. DS uses two different patterns: the Large Diamond Search Pattern (LDSP) and the Small Diamond Search Pattern (SDSP). The first steps use the LDSP and the last step uses the SDSP.
vii. Adaptive Rood Pattern Search (ARPS)
ARPS exploits the fact that the general motion in a frame is usually coherent. It uses the predicted motion vector to check the location it points to, and additionally checks points distributed in a rood pattern.
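As a concrete reference point for the list above, the following is a minimal sketch of the exhaustive search with a sum-of-absolute-differences (SAD) cost. The paper does not state its cost function, so SAD, the most common choice, is assumed; all names and parameters are illustrative.

```python
import numpy as np

def sad(block_a, block_b):
    """Sum of absolute differences -- a common block-matching cost."""
    return np.abs(block_a.astype(np.int64) - block_b.astype(np.int64)).sum()

def full_search(cur, ref, bx, by, n=16, p=7):
    """Exhaustive search (ES) for one n x n macro block.

    cur, ref: current and reference (previous) grayscale frames.
    (bx, by): top-left corner of the macro block in `cur`.
    p: search range (+/- p pixels on each side).
    Returns the motion vector (dx, dy) with the lowest SAD.
    """
    h, w = ref.shape
    block = cur[by:by + n, bx:bx + n]
    best, best_mv = None, (0, 0)
    for dy in range(-p, p + 1):
        for dx in range(-p, p + 1):
            x, y = bx + dx, by + dy
            if x < 0 or y < 0 or x + n > w or y + n > h:
                continue  # candidate block falls outside the frame
            cost = sad(block, ref[y:y + n, x:x + n])
            if best is None or cost < best:
                best, best_mv = cost, (dx, dy)
    return best_mv
```

The fast algorithms in the list (TSS, NTSS, SES, 4SS, DS, ARPS) all keep this cost function but evaluate it at far fewer candidate points.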
Figure 12: Original image
Figure 13: Compressed image using EZW method
Figure 14: Compressed image using artificial colors.
As a result, the original video size of 25 megabytes is reduced to a compressed size of 21.3 megabytes. The compression achieves a good compression ratio and compression factor.
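For reference, these figures correspond to a compression ratio of 25 / 21.3 ≈ 1.17, i.e. a space saving of 1 − 21.3/25 ≈ 14.8%.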
2.3 Method 3: Fuzzy concepts
This paper focuses on the fuzzy concept in the video compression sector, together with the H.264 and MPEG-4 algorithms [9]. H.264 is applied to medical video compression and the rate control of the H.264 algorithm is then improved for better quality. The implementation follows three steps:
i. H.264 is reviewed and adapted to the area of medical video compression.
ii. A new fuzzy based algorithm is introduced, applied to each frame of the video, and compression is performed based on H.264.
iii. Two comparisons are made: between H.264 and MPEG-4, and between JVT-H014 (an older version of the H.264 rate-control algorithm) and the proposed fuzzy concept. Motion Picture Experts Group (MPEG-4) and H.263 are standards for video compression.
a) New fuzzy scheme
The frames are fuzzified and a fuzzy based segmentation, created by a brand new algorithm, is applied to them. Each medical image has a large background and usually only a small region of interest (ROI), so the ROI is isolated; as a result, minimum compression is applied to the ROI and maximum compression to the background. The risk with medical video compression is that artifacts can be induced that lead doctors to a false conclusion; for a better result, the fuzzy method is used along with an optimum bit rate scheme.
b) Comparison between MPEG-4 and H.264
Table 1 presents the results obtained with H.264 and MPEG-4 without applying any rate control. For each sequence, four target bit rates, from high to low, are selected. The difference between H.264 and MPEG-4 is reported in the table as PSNR gain and bit rate saving.
Table 1: Results achieved using MPEG-4 and H.264
When applied to the test medical video sequence, Table 1 shows that H.264 performs much better than MPEG-4. This means that H.264 is more effective than MPEG-4 and is an effective alternative solution for medical video applications.
2.4 Method 4: Frame difference approach
This paper discusses a video compression technique that uses an algorithm based on the frames of the video [8]. The technique calculates the frame difference between consecutive frames according to a specific threshold, and concentrates on the frame difference approaches used in the compression technique. The system is implemented in several stages, shown in Figure 15:
Figure 15: The proposed key frame selection approach
The difference approach is the core of the system: it passes over similar frames and selects the differing frames depending on a specified threshold. The selected frames are then compressed by applying the 2-dimensional discrete wavelet transform.
2.4.1 Frame Difference Approaches for Video Compression
To remove the frames with the lowest difference, three different methods are used to eliminate identical frames, in which frame difference measures are applied between each pair of consecutive extracted frames. A summary of the structural diagram of the three approaches is shown in Figure 16.
Figure 16: Structural Diagram of frames difference approach
2.4.1.1 Zero Difference Approach
Frames are removed when the difference between any two consecutive frames is zero. This minimizes the number of frames to be re-extracted. Many of the frame differences between consecutive frames are zero, and such frames are removed.
Figure 17: Zero Difference approach
In Figure 17, the left column (top to bottom) shows the frame differences, including the zero differences, for 15 and 20 frames per second respectively. The middle column shows the frame differences excluding the zero-difference frames, while the removed frames, whose difference equals zero, are shown in the right column. When the extraction rate is increased through 10, 15, 20 and 25 frames per second, the number of zero frame differences does not necessarily increase: the frame differences generally decrease as the frame rate increases, but do not become exactly zero.
2.4.1.2 Mean Difference Approach
The mean value of the frame differences is calculated; mathematically, the average is obtained by dividing the sum of the observed frame difference values by the number of observations. Frames are then removed wherever the difference between consecutive frames is lower than the mean frame difference.
Figure 18: Mean difference approach
In Figure 18, the left column (top to bottom) shows the frame differences, including the mean difference, for 15 and 20 frames per second respectively. The middle column shows the frame differences together with the frames below the mean of the overall differences; the green line represents the mean of the frame differences. Frame differences above the green line are maintained and compressed, while those below the green line are removed. The right column shows the frames to be removed. As the extraction frame rate increases, the frame differences decrease and more frames are removed.
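A minimal sketch of the mean difference approach is given below, assuming the frame difference is measured as the summed absolute pixel difference (the paper does not define its difference metric); the kept frames would then be passed on to the 2-D DWT compression stage.

```python
import numpy as np

def select_key_frames(frames):
    """Mean difference approach (a sketch): keep a frame only when its
    difference from the previous frame exceeds the mean frame difference.

    frames: sequence of at least two grayscale frames of identical shape.
    Returns the indices of the frames to keep and compress.
    """
    diffs = [np.abs(frames[i + 1].astype(np.int64) -
                    frames[i].astype(np.int64)).sum()
             for i in range(len(frames) - 1)]
    threshold = sum(diffs) / len(diffs)   # mean of the frame differences
    keep = [0]                            # always keep the first frame
    for i, d in enumerate(diffs):
        if d > threshold:
            keep.append(i + 1)
    return keep
```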
2.4.1.3 Percentage Difference Approach
In this approach, different types of videos are examined according to frame details, frame size and the obtained frame differences. The approach is mainly concerned with removing the lowest frame differences according to a specific percentage. The percentage depends on many factors such as compression performance, frame details, frame size and the distance between frames.
Figure 19: Percentage difference approach
In Figure 19, the left column (top to bottom) shows the frame differences for 15 and 20 frames per second respectively. The middle column shows the frame differences after removing the 10% lowest frame differences. The right column shows the frames to be removed. Other percentages of the frame differences, such as 15% and 20%, were also removed.
2.5 Method 5: PCA method
This paper brings up the use of a statistical approach to video compression: a new Principal Component Analysis (PCA) based method [10]. Depending on the required accuracy, this method extracts the features of the video frames and processes them flexibly, which effectively improves the compression quality. The method also builds on the fact that a video is a composition of correlated, sequential frames (images), so PCA can be applied; an advantage of this method is that it does not reduce the bandwidth of the frequency response, so the edges of the frames do not fade.
2.5.1 PCA method
There are M grayscale images, each of size N × P; an N × P grayscale image is equivalent to an N × P matrix. All images are placed in a matrix X whose elements are the intensity values of the images, each image contributing one column of length N·P. Keeping only the first k of the M eigenvectors (k < M) yields a matrix X̂ that differs only slightly from X; X̂ is the compressed matrix obtained from X. The compression ratio, that is, the ratio of the memory required to save X̂ to the memory necessary to keep X, then follows from storing k eigenvectors of length N·P plus k coefficients for each of the M images:

Memory ratio = k(NP + M) / (NP × M)
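The following is a minimal sketch of PCA-based frame compression via the SVD, under the storage model used in the ratio above; it illustrates the idea rather than the authors' exact implementation.

```python
import numpy as np

def pca_compress(frames, k):
    """PCA-based frame compression (a sketch via the SVD).

    frames: ndarray of shape (M, N, P) -- M grayscale frames.
    k: number of principal components kept (k < M).
    Returns the reconstructed frames X_hat with the same shape.
    """
    m, n, p = frames.shape
    X = frames.reshape(m, n * p).astype(np.float64).T   # columns = frames
    mean = X.mean(axis=1, keepdims=True)
    Xc = X - mean
    # Left singular vectors of Xc are the eigenvectors of the covariance.
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    Uk = U[:, :k]                 # first k eigenvectors (NP x k)
    coeffs = Uk.T @ Xc            # k coefficients per frame (k x M)
    X_hat = Uk @ coeffs + mean    # rank-k reconstruction
    return X_hat.T.reshape(m, n, p)
```

Only Uk (NP × k) and coeffs (k × M) need to be stored, which is exactly the k(NP + M) count in the memory ratio above.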
2.5.2 Improved PCA
It is possible to improve the image quality precisely by identifying the most inaccurate pixels in the reconstructed frames or images. The Y_k matrix is changed to Ŷ_k, in which nonzero columns are appended at the bottom of the matrix to compensate for the effect of the error.
2.5.3 2DPCA Method
In conventional PCA, the image matrices must be converted to vectors, and the image vectors of all images construct a high-dimensional image vector space. The 2DPCA method instead uses 2D matrices rather than the 1D vectors of conventional PCA, and it has two fundamental benefits over conventional PCA (a sketch follows the list):
a) It is easier to estimate the covariance matrices accurately.
b) Because the covariance matrices are smaller, it takes less time to determine the corresponding eigenvectors.
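A minimal sketch of the 2DPCA projection basis, computed directly from the 2D frames as described above; the interface is illustrative.

```python
import numpy as np

def twodpca_basis(frames, k):
    """2DPCA (a sketch): build the image covariance from the 2D frames
    directly and return the first k eigenvectors as the projection basis.

    frames: ndarray of shape (M, N, P). Returns a P x k basis matrix.
    """
    mean = frames.mean(axis=0)
    G = np.zeros((frames.shape[2], frames.shape[2]))
    for a in frames:
        d = a - mean
        G += d.T @ d               # P x P image covariance contribution
    G /= len(frames)
    vals, vecs = np.linalg.eigh(G)  # eigenvalues in ascending order
    return vecs[:, ::-1][:, :k]     # top-k eigenvectors

# Each frame A is then represented by its k feature columns Y = A @ basis,
# so the covariance matrix stays P x P instead of NP x NP as in PCA.
```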
2.5.4 Improved PCA and 2DPCA for Video
This part presents the results of applying the three methods above (PCA, improved PCA and 2DPCA) to the compression of fourteen sample frames. The frames are taken from a video of an apple being thrown against a white background, organized from top to bottom and left to right; Figure 20 shows the original frames. Based on the background of PCA, 2DPCA and improved PCA in face database compression, the improved PCA shows better results than the two other methods.
Figure 20: The original frames
Figure 21: The PCA method adds fade effects
Figure 21 shows the first three frames after applying the three methods above, with the same bit rates, to the original frames. As can be seen, the PCA method adds fade effects to the reconstructed frames, and applying 2DPCA introduces vertical lines and black bands into the frames. So for this application (video compression), the best performance is provided by the improved PCA method.
3. DISCUSSIONS
Accordion function: In this technique, the compression removes the spectral redundancy and the temporal redundancy, using the Discrete Cosine Transform and conversion into spatial redundancy.
EZW and FSBM: EZW is used for intra compression and seven different block matching algorithms are used for motion estimation in video compression; the results are even better when the SPIHT algorithm is used.
Fuzzy concepts: With the new fuzzy based scheme, H.264 video compression is more effective than MPEG-4 for medical video.
Frame difference approach: Video compression uses an algorithm based on the frames of the video; the technique calculates the frame difference, or frame distance, between consecutive frames according to a specific threshold.
PCA based method: PCA is applied to highly correlated frames, because a video is a composition of sequential, correlated frames. The edges of the frames do not fade, because the PCA method does not reduce the bandwidth of the frequency response.
4. CONCLUSIONS
In this paper, five different methods for video compression were reviewed and discussed. Although each method has its pros and cons, there are still many possibilities for improvement towards lossless, small-size, high-quality video. In the end, the best method for video compression is the one that is faster and more reliable and produces a smaller size without much effect on the video quality. All of these requirements are necessary for multimedia applications, so that users can enjoy the video.
5. ACKNOWLEDGMENTS
We are grateful to the Centre for Telecommunication Research and Innovation (CeTRI) and Universiti Teknikal Malaysia Melaka (UTeM), through grant PJP/2013/FKEKK (29C)/S01215, for kindly supporting this study financially, supplying the electronic components, and providing their laboratory facilities.
6. REFERENCES
[1] N. M. Z. Hashim, A. F. Jaafar, Z. Zakaria, A. Salleh,
and R. A. Hamzah, “Smart Casing for Desktop Personal
Computer,” International Journal of Engineering and
Computer Science (IJECS), vol. 2, no. 8, pp. 2337–
2342, 2013.
[2] N. M. Z. Hashim, N. H. Mohamad, Z. Zakaria, H.
Bakri, and F. Sakaguchi, "Development of Tomato
Inspection and Grading System using Image
Processing," International Journal Of Engineering And
Computer Science (IJECS), vol. 2 no. 8, pp. 2319-2326,
2013.
[3] N. M. Z. Hashim, N. A. Ali, A. Salleh, A. S. Ja’afar,
and N. A. Z. Abidin, “Development of Optimal
Photosensors Based Heart Pulse Detector,” International
Journal Of Engineering and Technology (IJET), vol. 5,
no. 4, pp. 3601–3607, 2013.
[4] N. M. Z. Hashim, N. M. T. N. Ibrahim, Z. Zakaria, F.
Syahrial, and H. Bakri, “Development New Press
Machine using Programmable Logic Controller,”
International Journal of Engineering and Computer
Science (IJECS), vol. 2, no. 8, pp. 2310–2314, 2013.
[5] N. M. Z. Hashim, N. A. Ibrahim, N. M. Saad, F.
Sakaguchi, and Z. Zakaria, “Barcode Recognition
System,” International Journal of Emerging Trends &
Technology in Computer Science (IJETTCS), vol. 2, no.
4, pp. 278–283, 2013.
[6] M. Shah and H. Patel, "Design of a New Video Compression Algorithm Using Accordion Function," International Journal of Science and Modern Engineering (IJISME), ISSN 2319-6386, vol. 1, no. 6, May 2013.
[7] S. Mishra and S. Savarkar, "Video Compression Using EZW and FSBM," International Journal of Scientific and Research Publications, vol. 2, no. 10, ISSN 2250-3153, October 2012.
[8] M. S. Al-Ani and T. A. Hammouri, "Video Compression Algorithm Based on Frame Difference Approaches," International Journal on Soft Computing (IJSC), vol. 2, no. 4, November 2011.
[9] J. Mohanalin Rajarathnam, "A Novel Fuzzy Based Medical Video Compression Using H.264," Georgian Electronic Scientific Journal: Computer Science and Telecommunications, no. 3 (17), 2008.
[10] M. Mofarreh-Bonab and M. Mofarreh-Bonab, "Adaptive Video Compression Using PCA Method," International Journal of Computer Applications (0975-8887), vol. 44, no. 21, April 2012.