A new artifact removal method, a cascade of a Gaussian fuzzy edge decider and a fuzzy image corrector, is proposed. The design targets highly compressed, i.e. low bit rate, images. Each overlapped image block is fed to the Gaussian fuzzy decider to check whether the block's central pixel needs correction; if so, the central pixel is corrected using the fuzzy gradient of its neighbours. Experimental results show remarkable improvement of the presented gFAR algorithm over past methods, both subjectively (visual quality) and objectively (PSNR).
Deepak Gambhir, "Gaussian Fuzzy Blocking Artifacts Removal of High DCT Compressed Images", International Journal of Trend in Scientific Research and Development (IJTSRD), ISSN: 2456-6470, Volume 4, Issue 6, October 2020. URL: https://www.ijtsrd.com/papers/ijtsrd33361.pdf Paper URL: https://www.ijtsrd.com/computer-science/multimedia/33361/gaussian-fuzzy-blocking-artifacts-removal-of-high-dct-compressed-images/deepak-gambhir
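The paper specifies its own membership functions and fuzzy gradient; purely as an assumed illustration of the decider/corrector cascade (function names, window size and parameters here are hypothetical, not taken from gFAR):

```python
import numpy as np

def gaussian_membership(d, sigma=10.0):
    """Degree (0..1) to which a difference d looks small, i.e. artifact-like."""
    return np.exp(-(d ** 2) / (2 * sigma ** 2))

def gfar_like_deblock(img, sigma=10.0, decide_thresh=0.5):
    """Toy decider/corrector cascade over overlapped 3x3 blocks (hypothetical
    parameters; the published gFAR uses its own fuzzy rules and gradient)."""
    out = img.astype(float).copy()
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            block = img[y-1:y+2, x-1:x+2].astype(float)
            mu = gaussian_membership(block - block[1, 1], sigma)
            if mu.mean() > decide_thresh:                  # decider: flat region, likely blocky
                out[y, x] = (mu * block).sum() / mu.sum()  # fuzzy-weighted correction
    return out
```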
Effect of Block Sizes on the Attributes of Watermarking Digital Images - Dr. Michael Agbaje
This work examines the effect of block sizes on the attributes (robustness, capacity, time of watermarking, visibility and distortion) of watermarked digital images using the Discrete Cosine Transform (DCT). The DCT breaks the image up into various frequency bands and allows watermark data to be easily embedded; its advantage is the ability to pack the input image data into a few coefficients. The 8 x 8 block size is commonly used in watermarking, and this work investigates the effect of using block sizes below and above 8 x 8 on the watermark's attributes. Robustness and capacity increase with block size (62-70 dB, 31.5-35.9 bits/pixel), while watermarking time decreases. The watermark remains visible for block sizes below 8 x 8 but is invisible for those above it. Distortion decreases sharply from a high value at 2 x 2, reaches a minimum at 8 x 8, and gradually increases with larger block sizes. Overall, the watermarked image gradually loses quality due to fading above the 8 x 8 block size. For easy detection of an image against piracy, the 16 x 16 block size gives the best output, because it most closely resembles the original image in displayed visual quality despite containing a hidden watermark.
The document compares Gaussian filtering and bilateral filtering on RGB images. Gaussian filtering replaces each pixel with a weighted average of neighboring pixels, with weights determined by a Gaussian function of distance. This blurs the image while reducing high-frequency components. Bilateral filtering also considers pixel intensity differences, using a range Gaussian to give higher weight to similar pixels, thus preserving edges while smoothing. The effect of varying the spatial and range parameters is examined, showing bilateral filtering better retains edges through the additional range parameter.
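To make the two weightings concrete, here is a minimal plain-NumPy sketch of a grayscale bilateral filter (parameter values are illustrative). Letting sigma_r grow very large makes the range weight uniform, which recovers plain Gaussian filtering:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=25.0):
    """Bilateral filter: spatial Gaussian (distance) times range Gaussian (intensity)."""
    img = img.astype(float)
    h, w = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))   # distance weights
    pad = np.pad(img, radius, mode="edge")
    for y in range(h):
        for x in range(w):
            patch = pad[y:y + 2*radius + 1, x:x + 2*radius + 1]
            rng = np.exp(-((patch - img[y, x])**2) / (2 * sigma_r**2))  # similarity weights
            wgt = spatial * rng
            out[y, x] = (wgt * patch).sum() / wgt.sum()
    return out
```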
LOCAL DISTANCE AND DEMPSTER-SHAFER FOR MULTI-FOCUS IMAGE FUSION - sipij
This work proposes a new multi-focus image fusion method using Dempster-Shafer theory and local variability (DST-LV). The method takes into account the behaviour of each pixel relative to its neighbours: it computes the quadratic distance between the value of the pixel I(x, y) and the values of all neighbouring pixels. Local variability is then used to determine the mass function defined in Dempster-Shafer theory. The two Dempster-Shafer classes studied are the fuzzy part and the focused part. The results of the proposed method are significantly better than those of other methods.
NEIGHBOUR LOCAL VARIABILITY FOR MULTIFOCUS IMAGES FUSION - sipij
This document presents a new method for fusing multi-focus images called the Neighbor Local Variability (NLV) method. The NLV method calculates the local variability of each pixel based on the quadratic difference between the pixel value and those of neighboring pixels. This variability is used to weight pixel values when fusing the images. The optimal size of the neighborhood considered depends on the variance and size of the blur filter, and can be estimated using a provided model. The NLV method is compared to other fusion methods and is shown to produce better results based on the Root Mean Square Error evaluation metric.
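Both fusion entries above hinge on the same neighbourhood statistic. A minimal sketch of an NLV-style fusion of two grayscale source images, assuming a uniform window and variability-proportional weights (the papers fit the window size and use richer rules):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_variability(img, size=5):
    """Mean quadratic difference between the centre pixel and its neighbours."""
    f = img.astype(float)
    mean = uniform_filter(f, size)
    mean_sq = uniform_filter(f * f, size)
    return f * f - 2 * f * mean + mean_sq     # E[(I_c - I_n)^2] over the window

def nlv_fuse(a, b, size=5, eps=1e-9):
    """Weight each source by its local variability: in-focus regions vary more."""
    va, vb = local_variability(a, size), local_variability(b, size)
    wa = va / (va + vb + eps)
    return wa * a.astype(float) + (1 - wa) * b.astype(float)
```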
VARIATION-FREE WATERMARKING TECHNIQUE BASED ON SCALE RELATIONSHIP - csandit
Most watermarking methods use pixel values or coefficients as the judgment condition to embed or extract a watermark image. Variation of these values may lead to an inaccurate condition and hence an incorrect judgment. To avoid this problem, we design a stable judgment mechanism whose outcome is not seriously influenced by such variation. The judgment principle depends on the scale relationship of two pixels. From observation of common signal processing operations, the pixel values of a processed image usually remain stable unless the image has been manipulated by a cropping attack or halftone transformation. This greatly helps reduce the modification strength required against image processing operations. Experimental results show that the proposed method can resist various attacks while keeping the image quality acceptable.
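The abstract only states that the judgment depends on the scale (order) relationship of two pixels. Purely as an assumed illustration of such a rule, one watermark bit can be encoded in the ordering of a pixel pair and read back by comparing them; the stability claim is that mild processing rarely flips the order of two well-separated values:

```python
import numpy as np

def embed_bit(img, p1, p2, bit, margin=2):
    """Hypothetical rule: bit 1 -> img[p1] >= img[p2], bit 0 -> img[p1] < img[p2],
    with a margin so mild processing cannot flip the relation.
    (Clipping at 0/255 can break the relation in extreme cases; this is a toy.)"""
    a, b = int(img[p1]), int(img[p2])
    if bit == 1 and a < b + margin:
        a, b = b + margin, a          # swap and separate to satisfy the relation
    elif bit == 0 and a >= b - margin:
        a, b = b - margin, a
    img[p1], img[p2] = np.clip(a, 0, 255), np.clip(b, 0, 255)

def extract_bit(img, p1, p2):
    return 1 if img[p1] >= img[p2] else 0
```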
BIG DATA-DRIVEN FAST REDUCING THE VISUAL BLOCK ARTIFACTS OF DCT COMPRESSED IM... - IJDKP
1) The document proposes a new simple method to reduce visual block artifacts in images compressed using DCT (used in JPEG) for urban surveillance systems.
2) The method smooths only the connection edges between adjacent blocks while keeping other image areas unchanged (see the sketch after this list).
3) Simulation results show the proposed method achieves better image quality, as measured by PSNR, than median and Wiener filters, while using significantly fewer computational resources.
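A minimal version of the seam-smoothing step, assuming JPEG's 8x8 grid and a simple 3-tap average across each boundary (the paper's filter may differ):

```python
import numpy as np

def smooth_block_seams(img, block=8):
    """Average across every block boundary; block interiors are left untouched."""
    out = img.astype(float).copy()
    h, w = img.shape
    for x in range(block, w - 1, block):        # vertical seams
        out[:, x - 1] = (img[:, x - 2] + img[:, x - 1] + img[:, x]) / 3.0
        out[:, x] = (img[:, x - 1] + img[:, x] + img[:, x + 1]) / 3.0
    for y in range(block, h - 1, block):        # horizontal seams
        out[y - 1, :] = (out[y - 2, :] + out[y - 1, :] + out[y, :]) / 3.0
        out[y, :] = (out[y - 1, :] + out[y, :] + out[y + 1, :]) / 3.0
    return out
```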
Texture Unit based Monocular Real-world Scene Classification using SOM and KN... - IDES Editor
In this paper a method is proposed to discriminate real-world scenes into natural and manmade scenes of similar depth. The global roughness of a scene image varies as a function of image depth: increasing depth leads to increasing roughness in manmade scenes, whereas natural scenes exhibit smooth behaviour at higher image depth. This particular arrangement of pixels in the scene structure is well explained by the local texture information in a pixel and its neighbourhood. Our proposed method analyses the local texture information of a scene image using a texture unit matrix. For the final classification we use both supervised and unsupervised learning, with a K-Nearest Neighbour classifier (KNN) and a Self-Organizing Map (SOM) respectively. The technique is useful for online classification due to its very low computational complexity.
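For reference, the classic texture unit assigns each 3x3 neighbourhood a ternary code in 0..6560 (3^8 values); a compact rendering of the standard formulation, though the paper's exact variant may differ:

```python
import numpy as np

def texture_unit_number(block):
    """3x3 block -> texture unit: each of the 8 neighbours is coded
    0 (below centre), 1 (equal) or 2 (above), read as a base-3 number."""
    c = block[1, 1]
    neigh = np.delete(block.flatten(), 4)        # the 8 neighbours, row order
    codes = np.where(neigh < c, 0, np.where(neigh == c, 1, 2))
    return int((codes * 3 ** np.arange(8)).sum())

def texture_unit_matrix(img):
    h, w = img.shape
    return np.array([[texture_unit_number(img[y-1:y+2, x-1:x+2])
                      for x in range(1, w - 1)] for y in range(1, h - 1)])
```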
An Efficient Approach of Segmentation and Blind Deconvolution in Image Restor... - iosrjce
This paper introduces the concept of blind deconvolution for restoring a digital image, and small segments of a single image, degraded by noise. Image restoration is used in many areas, such as decision-making in robotics and biomedical research for the analysis of tissues, cells and cellular constituents. Segmentation divides an image into multiple meaningful regions; it enables restoration of only selected portions of the image, reducing system complexity by focusing on the parts that actually need restoration. Many techniques exist for restoring a degraded image, such as the Wiener filter, regularized filter and Lucy-Richardson algorithm, but all of them require prior knowledge of the blur kernel, whereas in blind deconvolution the blur kernel is initially unknown. This paper uses a Gaussian low-pass filter to convolve the image; the Gaussian low-pass filter minimizes ringing, which occurs when the transition between points is not clearly defined. After ringing is removed from the restored image, the result is visibly clearer. The aim of this paper is to provide a better algorithm for removing unwanted artifacts from an image, with quality measured in terms of PSNR (Peak Signal-to-Noise Ratio) and MSE (Mean Square Error). The proposed technique also works well with motion blur.
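Blind deconvolution alternates between estimating the blur kernel and the latent image. As a grounding step (this is the non-blind core, not the paper's full blind scheme), here is the Richardson-Lucy iteration together with the kind of Gaussian PSF the paper uses for blurring:

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size=9, sigma=2.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def richardson_lucy(blurred, psf, iters=30, eps=1e-12):
    """Classic RL update (all products with the PSF are convolutions)."""
    est = np.full_like(blurred, blurred.mean(), dtype=float)
    psf_m = psf[::-1, ::-1]                      # mirrored PSF
    for _ in range(iters):
        conv = fftconvolve(est, psf, mode="same")
        est *= fftconvolve(blurred / (conv + eps), psf_m, mode="same")
    return est
```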
An Improved Adaptive Steganographic Method Based on Least Significant Bit Sub... - IOSRJVSP
This paper presents a novel technique for improved data embedding in cover images based on least significant bit substitution and pixel-value differencing. The method exploits a property of the human visual system (HVS): the eye tolerates larger changes in edge areas than in smooth areas. It therefore hides a large amount of secret data in edge areas and less in smooth areas. The results of the proposed method are verified through extensive simulations.
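A toy rendering of the edge-adaptive idea (the activity measure, its threshold and the bit depths are assumptions; the published method uses proper pixel-value-differencing ranges). Activity is computed from bits the embedding cannot change, so the extractor can recompute it from the stego image:

```python
import numpy as np

def embed_adaptive_lsb(cover, bits, edge_thresh=2):
    """Embed a sequence of 0/1 ints: 3 LSBs per pixel in high-activity (edge)
    areas, 1 LSB in smooth areas. Toy sketch of the HVS rationale."""
    stego = cover.copy()
    it = iter(bits)
    h, w = cover.shape
    for y in range(1, h):
        for x in range(1, w):
            # activity from the top 5 bits of already-processed neighbours,
            # which embedding in up to 3 LSBs never alters
            activity = abs((int(stego[y, x-1]) >> 3) - (int(stego[y-1, x]) >> 3))
            k = 3 if activity > edge_thresh else 1
            payload = 0
            for i in range(k):
                bit = next(it, None)
                if bit is None:
                    return stego                  # message exhausted
                payload |= bit << i
            stego[y, x] = (int(cover[y, x]) & ~((1 << k) - 1)) | payload
    return stego
```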
A Novel PSNR-B Approach for Evaluating the Quality of De-blocked Images - IOSR Journals
This document discusses evaluating the quality of deblocked images using different quality assessment metrics. It proposes a new metric called PSNR-B that includes a blocking effect factor in PSNR calculations. The document compares PSNR-B to PSNR and SSIM metrics. It studies the effect of quantization step size on measured image quality and analyzes how deblocking algorithms like lowpass filtering can reduce blocking artifacts but also introduce new distortions. Simulation results show PSNR-B correlates better than PSNR with subjective quality judgments of deblocked images.
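A compact sketch of the idea behind PSNR-B, assuming 8x8 blocks and a simplified blocking-effect factor (the published metric defines the factor more carefully):

```python
import numpy as np

def psnr_b(ref, deb, block=8):
    """PSNR-B = 10*log10(255^2 / (MSE + BEF)), where BEF here is the mean
    squared jump across block boundaries of the de-blocked image."""
    ref, deb = ref.astype(float), deb.astype(float)
    mse = np.mean((ref - deb) ** 2)
    cols = np.arange(block - 1, deb.shape[1] - 1, block)
    rows = np.arange(block - 1, deb.shape[0] - 1, block)
    bef = (np.mean((deb[:, cols] - deb[:, cols + 1]) ** 2)
           + np.mean((deb[rows, :] - deb[rows + 1, :]) ** 2))
    return 10 * np.log10(255.0 ** 2 / (mse + bef))
```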
This document discusses image compression using discrete wavelet transform (DWT) and principal component analysis (PCA). It first reviews several related works that use transforms like curvelet, wavelet and discrete cosine transform for image compression. It then describes preprocessing the input image using DWT to decompose it into sub-bands, and applying PCA on the high-frequency sub-bands to reduce dimensions and compress the image while preserving important boundaries. The algorithm is implemented and evaluated based on metrics like peak signal-to-noise ratio, standard deviation and entropy. Results show 95% accuracy in image identification from a database, though processing time increases significantly with database size.
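An assumed outline of the described pipeline using PyWavelets and scikit-learn (the paper's exact sub-band handling and component counts may differ):

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA

def dwt_pca_compress(img, n_components=8):
    """One-level DWT, then PCA on each high-frequency sub-band."""
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), "haar")
    models, codes = [], []
    for band in (cH, cV, cD):
        pca = PCA(n_components=min(n_components, *band.shape))
        codes.append(pca.fit_transform(band))     # reduced representation
        models.append(pca)
    return cA, models, codes

def dwt_pca_decompress(cA, models, codes):
    cH, cV, cD = (m.inverse_transform(c) for m, c in zip(models, codes))
    return pywt.idwt2((cA, (cH, cV, cD)), "haar")
```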
Pixel Recursive Super Resolution.
Ryan Dahl, Mohammad Norouzi & Jonathon Shlens
Google Brain.
Abstract
We present a pixel recursive super resolution model that synthesizes realistic details into images while enhancing their resolution. A low resolution image may correspond to multiple plausible high resolution images, so modeling the super resolution process with a pixel-independent conditional model often averages different details, hence blurry edges. By contrast, our model can represent a multimodal conditional distribution by properly modeling the statistical dependencies among the high resolution image pixels, conditioned on a low resolution input. We employ a PixelCNN architecture to define a strong prior over natural images and jointly optimize this prior with a deep conditioning convolutional network. Human evaluations indicate that samples from our proposed model look more photo-realistic than a strong L2 regression baseline.
FORGERY (COPY-MOVE) DETECTION IN DIGITAL IMAGES USING BLOCK METHOD - editorijcres
AKHILESH KUMAR YADAV, DEENBANDHU SINGH, VIVEK KUMAR
Department of Computer Science and Engineering
Babu Banarasi Das University, Lucknow
akhi2232232@gmail.com, deenbandhusingh85@gmail.com, vivek.kumar0091@gmail.com
ABSTRACT - Digital images can be easily modified using powerful image editing software. Distinguishing innocent manipulations, such as sharpening, from malicious ones, such as removing or adding parts of an image, is the topic of this paper. We focus on detecting a special type of forgery, copy-move forgery, in which part of the original image is copied and pasted at a desired location within the same image. The proposed method compresses the image using the DWT (discrete wavelet transform), divides it into blocks, computes a feature vector for each block, sorts the vectors lexicographically, and identifies duplicated blocks after sorting (a sketch of this block-matching stage follows the index terms). The method is robust to manipulations and attacks such as scaling, rotation, Gaussian noise, smoothing and JPEG compression.
INDEX TERMS- Copy-Move forgery, Wavelet Transform, Lexicographical Sorting, Region Duplication Detection.
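A skeletal sketch of the block-matching stage from the abstract, with quadrant means standing in for the paper's DWT-domain feature vectors (feature choice and threshold are assumptions; a real detector also checks that matched pairs share a consistent spatial offset):

```python
import numpy as np

def find_duplicate_blocks(img, bsize=8, thresh=1.0):
    """Slide a bsize x bsize window, sort block features lexicographically,
    and report neighbouring entries that nearly match."""
    h, w = img.shape
    feats = []
    for y in range(h - bsize + 1):
        for x in range(w - bsize + 1):
            blk = img[y:y + bsize, x:x + bsize].astype(float)
            q = bsize // 2
            f = (blk[:q, :q].mean(), blk[:q, q:].mean(),
                 blk[q:, :q].mean(), blk[q:, q:].mean())
            feats.append((f, (y, x)))
    feats.sort(key=lambda t: t[0])                # lexicographic sort
    return [(p1, p2) for (f1, p1), (f2, p2) in zip(feats, feats[1:])
            if max(abs(a - b) for a, b in zip(f1, f2)) < thresh]
```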
3-D discrete cosine transform for image compression - Alexander Decker
1. The document discusses the 3D discrete cosine transform (DCT) for image compression. 3D-DCT takes a sequence of frames and divides them into 8x8x8 cubes.
2. Each cube is independently encoded using 3D-DCT, quantization, and entropy encoding, which concentrates information in the low frequencies (a minimal sketch follows this list).
3. The technique achieves a compression ratio of around 27 for a set of 8 frames, better than 2D JPEG. By exploiting correlations in both the spatial and temporal dimensions, 3D-DCT allows improved compression over 2D transforms.
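A minimal cube-transform sketch using SciPy's n-dimensional DCT; quantization and entropy coding are replaced, for illustration only, by keeping the largest-magnitude coefficients per cube:

```python
import numpy as np
from scipy.fft import dctn, idctn

def encode_cubes(frames, keep=64):
    """3D-DCT each 8x8x8 cube of a (8k x 8m x 8n) frame stack and zero all
    but the `keep` largest-magnitude coefficients."""
    f = frames.astype(float)
    coeffs = np.zeros_like(f)
    T, H, W = f.shape
    for t in range(0, T, 8):
        for y in range(0, H, 8):
            for x in range(0, W, 8):
                c = dctn(f[t:t+8, y:y+8, x:x+8], norm="ortho")
                cut = np.sort(np.abs(c).flatten())[-keep]
                c[np.abs(c) < cut] = 0.0
                coeffs[t:t+8, y:y+8, x:x+8] = c
    return coeffs

def decode_cubes(coeffs):
    out = np.zeros_like(coeffs)
    T, H, W = coeffs.shape
    for t in range(0, T, 8):
        for y in range(0, H, 8):
            for x in range(0, W, 8):
                out[t:t+8, y:y+8, x:x+8] = idctn(coeffs[t:t+8, y:y+8, x:x+8],
                                                 norm="ortho")
    return out
```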
Region duplication forgery detection in digital images - Rupesh Ambatwad
Region duplication, or copy-move forgery, is a common tampering scheme used to create a fake image. The field of blind image forensics is concerned with establishing the authenticity of digital images. Since in copy-move forgery the duplicated region belongs to the same image, detecting the tampering is complex, as it leaves no visual clue; the tampering does, however, give rise to glitches at the pixel level.
The document presents a method for detecting copy-move forgery in digital images using the center-symmetric local binary pattern (CS-LBP). The key steps are:
1. The input image is converted to grayscale and divided into overlapping blocks.
2. CS-LBP features are extracted from each block to represent textures while being invariant to illumination and rotation; this reduces the feature dimensions compared to the local binary pattern (LBP) (see the sketch after this list).
3. The distances between feature vectors are calculated and the vectors sorted lexicographically to group similar blocks together. Distances below a threshold indicate copied regions.
4. Post-processing with morphological operations fills holes and removes outliers to generate a mask of the forged regions.
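Step 2's CS-LBP code in its standard 3x3 form (the threshold value is an assumption):

```python
import numpy as np

def cs_lbp(img, thresh=0.0):
    """Center-symmetric LBP: compare the 4 opposite neighbour pairs of each
    pixel, giving a 4-bit code (16 values vs 256 for plain LBP)."""
    p = img.astype(float)
    pairs = [
        (p[:-2, :-2], p[2:, 2:]),      # NW vs SE
        (p[:-2, 1:-1], p[2:, 1:-1]),   # N  vs S
        (p[:-2, 2:], p[2:, :-2]),      # NE vs SW
        (p[1:-1, 2:], p[1:-1, :-2]),   # E  vs W
    ]
    code = np.zeros((p.shape[0] - 2, p.shape[1] - 2), dtype=np.uint8)
    for bit, (a, b) in enumerate(pairs):
        code |= (a - b > thresh).astype(np.uint8) << bit
    return code
```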
QCCE: Quality Constrained Co-saliency Estimation for Common Object Detection - Koteswar Rao Jerripothula
Despite recent advances in the joint processing of images, it is sometimes less effective than single-image processing for object discovery problems. In this paper, aiming at common object detection, we address this problem by proposing QCCE, a novel Quality Constrained Co-saliency Estimation method. The approach iteratively updates the saliency maps through co-saliency estimation guided by quality scores, which indicate the degree of separation of the foreground and background likelihoods (the easier the separation, the higher the quality of the saliency map). In this way, joint processing is automatically constrained by the quality of the saliency maps. Moreover, the proposed method applies to both unsupervised and supervised scenarios, unlike other methods designed for only one scenario. Experimental results demonstrate superior performance compared to state-of-the-art methods.
Object Shape Representation by Kernel Density Feature Points Estimator - cscpconf
This paper introduces an object shape representation using a Kernel Density Feature Points Estimator (KDFPE). The method obtains the density of feature points within defined rings around the centroid of the image, then applies the kernel density estimator to the resulting vector. KDFPE is invariant to translation, scale and rotation. This image representation shows an improved retrieval rate compared to the Density Histogram of Feature Points (DHFP) method; an analytic justification is given, and the comparison with DHFP demonstrates its robustness.
Review of Diverse Techniques Used for Effective Fractal Image Compression - IRJET Journal
This document reviews different techniques for fractal image compression to enhance compression ratio while maintaining image quality. It discusses algorithms like quadtree partitioning with Huffman coding (QPHC), discrete cosine transform based fractal image compression (DCT-FIC), discrete wavelet transform based fractal image compression (DWT-FIC), and Grover's quantum search algorithm based fractal image compression (QAFIC). The document also analyzes works applying these techniques and concludes that combining QAFIC with the tiny block size processing algorithm may further improve compression ratio with minimal quality loss.
1. The document presents an image segmentation algorithm that uses local thresholding in the YCbCr color space.
2. It computes a local threshold for each pixel from the mean and standard deviation of the neighbouring pixels in a 3x3 mask, and labels each pixel 1 or 0 accordingly (a minimal sketch follows this list).
3. The algorithm was tested on images with objects both distinct and indistinct from the background, and performed well in segmenting objects in both cases. There is potential to improve performance on blurred images.
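Item 2 in code form; the constant k and the use of the luma (Y) channel are assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_threshold(y_channel, k=0.5):
    """Label a pixel 1 if it exceeds mean + k*std of its 3x3 neighbourhood."""
    f = y_channel.astype(float)
    mean = uniform_filter(f, 3)
    var = np.maximum(uniform_filter(f * f, 3) - mean**2, 0.0)
    return (f > mean + k * np.sqrt(var)).astype(np.uint8)
```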
A systematic image compression in the combination of linear vector quantisati... - eSAT Publishing House
1) The document presents a method for image compression that combines linear vector quantization and discrete wavelet transform.
2) Linear vector quantization is used to generate codebooks and encode image blocks, achieving better PSNR and MSE than self-organizing maps.
3) The encoded blocks are then subjected to discrete wavelet transform. Low-low subbands are stored for reconstruction while other subbands are discarded.
4) Experimental results show the proposed method achieves higher PSNR and lower MSE than existing techniques, preserving both texture and edge information.
A Hybrid Technique for the Automated Segmentation of Corpus Callosum in Midsa... - IJERA Editor
The corpus callosum (CC) is the largest white-matter structure in the human brain. In this paper, we apply two techniques to segment the corpus callosum: the first combines a mean shift algorithm with morphological operations, the second uses k-means clustering. The procedure has three steps. First, the CC area is found using the adaptive mean shift algorithm or k-means clustering. Second, the boundary of the detected CC area is used as the initial contour for the Geometric Active Contour (GAC) model. Finally, residual noise is removed using morphological operations and the contour is evolved to obtain the final segmentation. The experimental results demonstrate that both the mean shift algorithm and k-means clustering provide reliable segmentation performance.
IMPROVED PARALLEL THINNING ALGORITHM TO OBTAIN UNIT-WIDTH SKELETON - ijma
To extract reliable features from a fingerprint image, many systems use a thinning algorithm, which plays a very important role in preprocessing. In this paper, we propose a robust parallel thinning algorithm that preserves the connectivity of the binarized fingerprint image while producing the thinnest possible skeleton, only 1 pixel wide and extremely close to the medial axis. The proposed method repeats three sub-iterations. The first sub-iteration removes only the outermost boundary pixels, using the inner points. To extract one-sided skeletons, the second sub-iteration seeks skeletons of 2-pixel width. The third sub-iteration prunes needless 2-pixel-wide runs remaining in the obtained skeletons. The proposed algorithm is robust against rotation and noise and produces a balanced medial axis. To evaluate its performance, we compare it with and analyse previous algorithms.
This document evaluates two popular segmentation algorithms, the mean shift-based segmentation algorithm and a graph-based segmentation scheme, and also considers a hybrid method combining the two.
Frequency Domain Blockiness and Blurriness Meter for Image Quality Assessment - CSCJournals
Image and video compression introduces distortions (artefacts) into the coded image, the most prominent being blockiness and blurriness. Many existing quality meters are distortion-specific. This paper proposes an objective quality meter that quantifies the combined blockiness and blurriness distortions in the frequency domain. The model first applies edge detection and cancellation, then spatial masking to mimic the characteristics of the human visual system. Blockiness is estimated by transforming the image into the frequency domain and finding the ratio of harmonics to the other AC components. Blurriness is determined by comparing the high-frequency coefficients of the reference and coded images, since blurring reduces the high-frequency coefficients. The two distortions are then combined into a single quality metric. The meter is tested on blocky and blurred images from the LIVE image database, achieving a correlation coefficient of 95-96%.
Performance Analysis of Compression Techniques Using SVD, BTC, DCT and GP - IOSR Journals
This document summarizes and compares four image compression techniques: Singular Value Decomposition (SVD), Block Truncation Coding (BTC), Discrete Cosine Transform (DCT), and Gaussian Pyramid (GP). Experiments compress and reconstruct biometric images from iris, fingerprint and palm print databases, and the performance of each technique is evaluated and compared using metrics such as Peak Signal-to-Noise Ratio (PSNR), Mean Squared Error (MSE) and compression ratio.
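Of the four techniques, SVD is the most compact to illustrate: truncating to the k largest singular values yields the rank-k compressed image, and PSNR measures the loss.

```python
import numpy as np

def svd_compress(img, k=30):
    """Keep the k largest singular values; storage drops from h*w numbers
    to k*(h + w + 1)."""
    U, s, Vt = np.linalg.svd(img.astype(float), full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

def psnr(ref, rec):
    mse = np.mean((ref.astype(float) - rec) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)
```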
IJCER (www.ijceronline.com) International Journal of computational Engineerin... - ijceronline
The document proposes a new algorithm to reduce blocking artifacts in compressed images using a combination of the SAWS technique, Fuzzy Impulse Artifact Detection and Reduction Method (FIDRM), and Noise Adaptive Fuzzy Switching Median Filter (NAFSM). FIDRM uses fuzzy rules to detect noisy pixels, while NAFSM uses a median filter to correct pixels based on local information. Experimental results on test images show the proposed approach achieves better PSNR than other deblocking methods.
This document discusses parallel processing and compound image compression techniques. It examines the computational complexity and quantitative optimization of various image compression algorithms like BTC, DCT, DWT, DTCWT, SPIHT and EZW. The performance is evaluated in terms of coding efficiency, memory usage, image quality and quantity. Block Truncation Coding and Discrete Cosine Transform compression methods are described in more detail.
A robust combination of DWT and chaotic function for image watermarking - ijctet
This document summarizes a research paper on a robust image watermarking technique that combines discrete wavelet transform (DWT) and a chaotic function. The proposed method embeds a watermark into selected blocks of the low-frequency DWT subband of an image. It calculates the Euclidean distance between blocks of the watermark and image to select the most similar block for embedding. Experimental results on standard test images show the proposed method achieves better performance than previous methods in terms of PSNR and structural similarity under compression attacks. The extraction accuracy remains high even with noise attacks, though it degrades more under filtering attacks.
PIPELINED ARCHITECTURE OF 2D-DCT, QUANTIZATION AND ZIGZAG PROCESS FOR JPEG IM... - VLSICS Design
This document presents a pipelined architecture for 2D discrete cosine transform (DCT), quantization, and zigzag processing for JPEG image compression implemented on an FPGA. The 2D-DCT is computed using separability into two 1D-DCTs separated by a transpose buffer. A quantizer divides each DCT coefficient by a value from a quantization table stored in ROM. A zigzag buffer reorders the coefficients in zigzag format. The design was synthesized to a Xilinx Spartan-3E FPGA and achieved a maximum frequency of 101.35 MHz, processing an 8x8 block in 6604 ns with a pipeline latency of 140 cycles.
Pipelined Architecture of 2D-DCT, Quantization and ZigZag Process for JPEG Im... - VLSICS Design
This paper presents the architecture and VHDL design of a two-dimensional Discrete Cosine Transform (2D-DCT) with quantization and zigzag arrangement, used as the core and datapath in JPEG image compression hardware. The 2D-DCT calculation exploits the separability property, dividing the whole architecture into two 1D-DCT calculations connected by a transpose buffer. Architectures for the quantization and zigzag processes are also described; quantization is performed using a division operation. The design targets a Xilinx Spartan-3E XC3S500E FPGA, uses 1891 slices, 51 I/O pins and 8 multipliers, and reaches an operating frequency of 101.35 MHz. One input block of 8 x 8 elements of 8 bits each is processed in 6604 ns, with a pipeline latency of 140 clock cycles.
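The separability property both summaries rely on: the 8x8 2D-DCT factors as C A C^T, i.e. a 1D-DCT over rows, a transpose, and another 1D-DCT, which is exactly what the transpose buffer implements in hardware. A NumPy rendering of that factorization:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal 1D DCT-II matrix C, so that y = C @ x."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

def dct2_separable(block):
    """2D-DCT via two 1D passes: C @ A @ C.T (hardware transposes between passes)."""
    C = dct_matrix(block.shape[0])
    return C @ block.astype(float) @ C.T
```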
The document presents a new efficient color image compression technique that aims to improve the quality of decompressed images while achieving higher compression ratios. It does this by compressing important edge parts of the image differently than non-edge background parts. Specifically, it applies low-quality lossy compression to non-edge parts and high-quality lossy compression to edge parts. The technique uses edge detection, adaptive thresholding based on local variance and mean, and discrete cosine transform followed by quantization and entropy encoding. Experimental results on various images show it achieves better compression ratios, lower bit rates, and higher peak signal to noise ratios compared to non-adaptive methods.
A NOVEL ALGORITHM FOR IMAGE DENOISING USING DT-CWT - sipij
This paper addresses an image enhancement system built around an image denoising technique based on the Dual Tree Complex Wavelet Transform (DT-CWT). The proposed algorithm first models the noisy remote sensing image (NRSI) statistically by suitably combining its structural features and textures. This statistical model is decomposed using the DT-CWT with Tap-10 (length-10) filter banks based on the Farras wavelet implementation, and the sub-band coefficients are modeled for denoising by a method that combines clustering techniques with soft thresholding (soft-clustering). The clustering techniques classify noisy and image pixels based on neighbourhood connected component analysis (CCA), connected pixel analysis and inter-pixel intensity variance (IPIV), and compute an appropriate threshold for noise removal; this threshold is then applied with soft thresholding to denoise the image. Experimental results show that the proposed technique outperforms conventional and state-of-the-art techniques, and that images denoised with the DT-CWT strike a better balance between smoothness and accuracy than with the DWT. We used PSNR (Peak Signal-to-Noise Ratio) along with RMSE to assess the quality of the denoised images.
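The soft-thresholding step at the core of the scheme, with the clustering-derived threshold T taken as given:

```python
import numpy as np

def soft_threshold(coeffs, T):
    """Shrink wavelet coefficients toward zero by T; noise-sized ones vanish."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - T, 0.0)
```

Applied sub-band by sub-band to the DT-CWT coefficients (to magnitudes, for the complex ones), followed by the inverse transform, this yields the denoised image.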
An improved image compression algorithm based on daubechies wavelets with ar... - Alexander Decker
This document summarizes an academic article that proposes a new image compression algorithm using Daubechies wavelets and arithmetic coding. It first discusses existing image compression techniques and their limitations. It then describes the proposed algorithm, which applies Daubechies wavelet transform followed by 2D Walsh wavelet transform on image blocks and arithmetic coding. Results show the proposed method achieves higher compression ratios and PSNR values than existing algorithms like EZW and SPIHT. Future work aims to improve results by exploring different wavelets and compression techniques.
Comparative analysis and implementation of structured edge active contour - IJECEIAES
This paper proposes a modified Chan-Vese model for image segmentation. The approach is built on the linear structure tensor (LST) as input to the variational model; a structure tensor is a matrix representation of partial-derivative information. In the proposed model, the original image serves as the information channel for computing the structure tensor. Difference of Gaussians (DoG) provides feature enhancement, yielding an image less blurred than the original. The LST is modified by adding intensity information to enhance the orientation information, and finally an Active Contour Model (ACM) is used to segment the images. The proposed algorithm is tested on various images, including some with intensity inhomogeneity, and the results are shown. The results are also compared with other algorithms such as Chan-Vese, Bhattacharyya, Gabor-based Chan-Vese and a novel structure tensor based model, and the proposed model is verified to be the most accurate. Its biggest advantage is clear edge enhancement.
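The linear structure tensor referred to above, computed in the usual way (the gradient operator and smoothing scale are assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def structure_tensor(img, sigma=2.0):
    """Smoothed outer product of the gradient: [[Ix*Ix, Ix*Iy], [Ix*Iy, Iy*Iy]].
    Its eigenstructure encodes local orientation and coherence."""
    f = img.astype(float)
    ix, iy = sobel(f, axis=1), sobel(f, axis=0)
    jxx = gaussian_filter(ix * ix, sigma)
    jxy = gaussian_filter(ix * iy, sigma)
    jyy = gaussian_filter(iy * iy, sigma)
    return jxx, jxy, jyy
```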
This document discusses techniques for image segmentation and edge detection. It proposes a generalized boundary detection method called Gb that combines low-level and mid-level image representations in a single eigenvalue problem to detect boundaries. Gb achieves state-of-the-art results at low computational cost. Soft segmentation is also introduced to improve boundary detection accuracy with minimal extra computation. Common methods for edge detection are described, including gradient-based, texture-based, and projection profile-based approaches. Improved Harris and corner detection algorithms are presented to more accurately detect edges and corners. The output of Gb using soft segmentations as input is shown to correlate well with occlusions and whole object boundaries while capturing general boundaries.
Performance of Efficient Closed-Form Solution to Comprehensive Frontier Exposure - iosrjce
This document discusses boundary detection techniques for images. It proposes a generalized boundary detection method (Gb) that combines low-level and mid-level image representations in a single eigenvalue problem to detect boundaries. Gb achieves state-of-the-art results at low computational cost. Soft segmentation and contour grouping methods are also introduced to further improve boundary detection accuracy with minimal extra computation. The document presents outputs of Gb on sample images and concludes that Gb effectively detects boundaries in a principled manner by jointly resolving constraints from multiple image interpretation layers in closed form.
Performance Evaluation of 2D Adaptive Bilateral Filter For Removal of Noise F...CSCJournals
In this paper, we present a performance analysis of the adaptive bilateral filter in terms of peak signal-to-noise ratio and mean square error. The filter was evaluated by changing its parameters: the half-width values and the standard deviations. In the adaptive bilateral filter, the edge slope is enhanced by transforming the histogram via a range filter with adaptive offset and width; the variance of the range filter can also be adaptive. The filter is applied to improve the sharpness of gray-level and color images by increasing the slope of the edges without producing overshoot or undershoot. The related graphs were plotted and the best filter parameters were obtained.
IMPROVEMENT OF BM3D ALGORITHM AND EMPLOYMENT TO SATELLITE AND CFA IMAGES DENO...ijistjournal
This paper proposes a new procedure to improve the performance of the block matching and 3-D filtering (BM3D) image denoising algorithm. It is demonstrated that a better performance than that of the BM3D algorithm can be achieved across a variety of noise levels. The method changes the BM3D parameter values according to the noise level and removes the prefiltering used at high noise levels; as a result, the Peak Signal-to-Noise Ratio (PSNR) and visual quality improve, and BM3D's complexity and processing time are reduced. The improved BM3D algorithm is then extended to denoise satellite and color filter array (CFA) images. Output results show improved performance compared with current methods of denoising satellite and CFA images. In this regard the algorithm is compared, in terms of PSNR and visual quality, with the Adaptive PCA algorithm, which had previously led in performance for denoising CFA images. The processing time has also decreased significantly.
Sinusoidal Function for Population Size in Quantum Evolutionary Algorithm and...sipij
Fractal image compression is a well-known problem in the class of NP-hard problems. The Quantum Evolutionary Algorithm (QEA) is a novel optimization algorithm that uses a probabilistic representation of solutions and is highly suitable for combinatorial problems like the knapsack problem. Genetic algorithms are widely used for fractal image compression, but QEA has not yet been applied to this kind of problem. This paper improves QEA with a varying population size and uses it for fractal image compression. Utilizing the self-similarity property of a natural image, the partitioned iterated function system (PIFS) encoding of an image is found through the QEA method. Experimental results show that our method performs better than GA and conventional fractal image compression algorithms.
Similar to Gaussian Fuzzy Blocking Artifacts Removal of High DCT Compressed Images
‘Six Sigma Technique’ A Journey Through its Implementationijtsrd
The manufacturing industries all over the world are facing tough challenges for growth, development and sustainability in today's competitive environment. They have to achieve an apex position by adapting to the global competitive environment, delivering goods and services at low cost, prime quality and better price to increase wealth and consumer satisfaction. Cost management ensures profit, growth and sustainability of the business with the implementation of a continuous improvement technique like Six Sigma, which optimizes business performance. The method drives customer satisfaction, low variation, and reduction in waste and cycle time, resulting in a competitive advantage over industries which did not implement it. The main objective of this paper, 'Six Sigma Technique: A Journey Through Its Implementation', is to conceptualize the effectiveness of the Six Sigma technique through the journey of its implementation. Aditi Sunilkumar Ghosalkar "'Six Sigma Technique': A Journey Through its Implementation" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd64546.pdf Paper Url: https://www.ijtsrd.com/other-scientific-research-area/other/64546/‘six-sigma-technique’-a-journey-through-its-implementation/aditi-sunilkumar-ghosalkar
Edge Computing in Space Enhancing Data Processing and Communication for Space...ijtsrd
Edge computing, a paradigm that involves processing data closer to its source, has gained significant attention for its potential to revolutionize data processing and communication in space missions. With the increasing complexity and data volume generated by modern space missions, traditional centralized computing approaches face challenges related to latency, bandwidth, and security. Edge computing in space, involving on board processing and analysis of data, offers promising solutions to these challenges. This paper explores the concept of edge computing in space, its benefits, applications, and future prospects in enhancing space missions. Manish Verma "Edge Computing in Space: Enhancing Data Processing and Communication for Space Missions" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd64541.pdf Paper Url: https://www.ijtsrd.com/computer-science/artificial-intelligence/64541/edge-computing-in-space-enhancing-data-processing-and-communication-for-space-missions/manish-verma
Dynamics of Communal Politics in 21st Century India Challenges and Prospectsijtsrd
Communal politics in India has evolved through centuries, weaving a complex tapestry shaped by historical legacies, colonial influences, and contemporary socio political transformations. This research comprehensively examines the dynamics of communal politics in 21st century India, emphasizing its historical roots, socio political dynamics, economic implications, challenges, and prospects for mitigation. The historical perspective unravels the intricate interplay of religious identities and power dynamics from ancient civilizations to the impact of colonial rule, providing insights into the evolution of communalism. The socio political dynamics section delves into the contemporary manifestations, exploring the roles of identity politics, socio economic disparities, and globalization. The economic implications section highlights how communal politics intersects with economic issues, perpetuating disparities and influencing resource allocation. Challenges posed by communal politics are scrutinized, revealing multifaceted issues ranging from social fragmentation to threats against democratic values. The prospects for mitigation present a multifaceted approach, incorporating policy interventions, community engagement, and educational initiatives. The paper conducts a comparative analysis with international examples, identifying common patterns such as identity politics and economic disparities. It also examines unique challenges, emphasizing Indias diverse religious landscape, historical legacy, and secular framework. Lessons for effective strategies are drawn from international experiences, offering insights into inclusive policies, interfaith dialogue, media regulation, and global cooperation. By scrutinizing historical epochs, contemporary dynamics, economic implications, and international comparisons, this research provides a comprehensive understanding of communal politics in India. The proposed strategies for mitigation underscore the importance of a holistic approach to foster social harmony, inclusivity, and democratic values. Rose Hossain "Dynamics of Communal Politics in 21st Century India: Challenges and Prospects" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd64528.pdf Paper Url: https://www.ijtsrd.com/humanities-and-the-arts/history/64528/dynamics-of-communal-politics-in-21st-century-india-challenges-and-prospects/rose-hossain
Assess Perspective and Knowledge of Healthcare Providers Towards Elehealth in...ijtsrd
Background and Objective Telehealth has become a well known tool for the delivery of health care in Saudi Arabia, and the perspective and knowledge of healthcare providers are influential in the implementation, adoption and advancement of the method. This systematic review was conducted to examine the current literature base regarding telehealth and the related healthcare professional perspective and knowledge in the Kingdom of Saudi Arabia. Materials and Methods This systematic review was conducted by searching 7 databases including, MEDLINE, CINHAL, Web of Science, Scopus, PubMed, PsycINFO, and ProQuest Central. Studies on healthcare practitioners telehealth knowledge and perspectives published in English in Saudi Arabia from 2000 to 2023 were included. Boland directed this comprehensive review. The researchers examined each connected study using the AXIS tool, which evaluates cross sectional systematic reviews. Narrative synthesis was used to summarise and convey the data. Results Out of 1840 search results, 10 studies were included. Positive outlook and limited knowledge among providers were seen across trials. Healthcare professionals like telehealth for its ability to improve quality, access, and delivery, save time and money, and be successful. Age, gender, occupation, and work experience also affect health workers knowledge. In Saudi Arabia, healthcare professionals face inadequate expert assistance, patient privacy, internet connection concerns, lack of training courses, lack of telehealth understanding, and high costs while performing telemedicine. Conclusions Healthcare practitioners telehealth perceptions and knowledge were examined in this systematic study. Its collection of concerned experts different personal attitudes and expertise would help enhance telehealths implementation in Saudi Arabia, develop its healthcare delivery alternative, and eliminate frequent problems. Badriah Mousa I Mulayhi | Dr. Jomin George | Judy Jenkins "Assess Perspective and Knowledge of Healthcare Providers Towards Elehealth in Saudi Arabia: A Systematic Review" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd64535.pdf Paper Url: https://www.ijtsrd.com/medicine/other/64535/assess-perspective-and-knowledge-of-healthcare-providers-towards-elehealth-in-saudi-arabia-a-systematic-review/badriah-mousa-i-mulayhi
The Impact of Digital Media on the Decentralization of Power and the Erosion ...ijtsrd
The impact of digital media on the distribution of power and the weakening of traditional gatekeepers has gained considerable attention in recent years. The adoption of digital technologies and the internet has resulted in declining influence and power for traditional gatekeepers such as publishing houses and news organizations. Simultaneously, digital media has facilitated the emergence of new voices and players in the media industry. Digital medias impact on power decentralization and gatekeeper erosion is visible in several ways. One significant aspect is the democratization of information, which enables anyone with an internet connection to publish and share content globally, leading to citizen journalism and bypassing traditional gatekeepers. Another aspect is the disruption of conventional media industry business models, as traditional organizations struggle to adjust to the decrease in advertising revenue and the rise of digital platforms. Alternative business models, such as subscription models and crowdfunding, have become more prevalent, leading to the emergence of new players. Overall, the impact of digital media on the distribution of power and the weakening of traditional gatekeepers has brought about significant changes in the media landscape and the way information is shared. Further research is required to fully comprehend the implications of these changes and their impact on society. Dr. Kusum Lata "The Impact of Digital Media on the Decentralization of Power and the Erosion of Traditional Gatekeepers" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd64544.pdf Paper Url: https://www.ijtsrd.com/humanities-and-the-arts/political-science/64544/the-impact-of-digital-media-on-the-decentralization-of-power-and-the-erosion-of-traditional-gatekeepers/dr-kusum-lata
Online Voices, Offline Impact Ambedkars Ideals and Socio Political Inclusion ...ijtsrd
This research investigates the nexus between online discussions of Dr. B.R. Ambedkar's ideals and their impact on social inclusion among college students in Gurugram, Haryana. Surveying 240 students from 12 government colleges, findings indicate that 65% actively engage in online discussions, with 80% demonstrating moderate to high awareness of Ambedkar's ideals. Statistically significant correlations reveal that higher online engagement correlates with increased awareness (p < 0.05) and perceived social inclusion. Variations across colleges and a notable effect of college type on perceived social inclusion highlight the influence of contextual factors. Furthermore, the intersectional analysis underscores nuanced differences based on gender, caste, and socio-economic status. Dr. Kusum Lata "Online Voices, Offline Impact: Ambedkar's Ideals and Socio-Political Inclusion - A Study of Gurugram District" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd64543.pdf Paper Url: https://www.ijtsrd.com/humanities-and-the-arts/political-science/64543/online-voices-offline-impact-ambedkars-ideals-and-sociopolitical-inclusion--a-study-of-gurugram-district/dr-kusum-lata
Problems and Challenges of Agro Entreprenurship A Studyijtsrd
Noting calls for contextualizing the problems and challenges of agro-entrepreneurs and for greater attention to the role of entrepreneurs in agro-entrepreneurship research, we conduct a systematic literature review of existing research in agricultural entrepreneurship to address the study's objective of understanding the complications agro-entrepreneurs face through various factors. Development of agricultural products is a key factor in the overall economic growth of agro-entrepreneurs, and agro-entrepreneurship directly produces large-scale employment and utilizes labor and natural resources. This research outlines the problems of weather and soil erosion, market price fluctuation, labor costs, price volatility, dependency on intermediaries, limited bargaining power, and storage and transportation costs. The paper is mainly devoted to highlighting the problems and challenges faced in sustaining agro-entrepreneurs in India. Vinay Prasad B "Problems and Challenges of Agro Entreprenurship - A Study" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd64540.pdf Paper Url: https://www.ijtsrd.com/other-scientific-research-area/other/64540/problems-and-challenges-of-agro-entreprenurship--a-study/vinay-prasad-b
Comparative Analysis of Total Corporate Disclosure of Selected IT Companies o...ijtsrd
Disclosure is a process through which a business enterprise communicates with external parties. A corporate disclosure is communication of financial and non financial information of the activities of a business enterprise to the interested entities. Corporate disclosure is done through publishing annual reports. So corporate disclosure through annual reports plays a vital role in the life of all the companies and provides valuable information to investors. The basic objectives of corporate disclosure is to give a true and fair view of companies to the parties related either directly or indirectly like owner, government, creditors, shareholders etc. in the companies act, provisions have been made about mandatory and voluntary disclosure. The IT sector in India is rapidly growing, the trend to invest in the IT sector is rising and employment opportunities in IT sectors are also increasing. Therefore the IT sector is expected to have fair, full and adequate disclosure of all information. Unfair and incomplete disclosure may adversely affect the entire economy. A research study on disclosure practices of IT companies could play an important role in this regard. Hence, the present research study has been done to study and review comparative analysis of total corporate disclosure of selected IT companies of India and to put forward overall findings and suggestions with a view to increase disclosure score of these companies. The researcher hopes that the present research study will be helpful to all selected Companies for improving level of corporate disclosure through annual reports as well as the government, creditors, investors, all business organizations and upcoming researcher for comparative analyses of level of corporate disclosure with special reference to selected IT companies. Dr. Vaibhavi D. Thaker "Comparative Analysis of Total Corporate Disclosure of Selected IT Companies of India" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd64539.pdf Paper Url: https://www.ijtsrd.com/other-scientific-research-area/other/64539/comparative-analysis-of-total-corporate-disclosure-of-selected-it-companies-of-india/dr-vaibhavi-d-thaker
The Impact of Educational Background and Professional Training on Human Right...ijtsrd
This study investigated the impact of educational background and professional training on human rights awareness among secondary school teachers in the Marathwada region of Maharashtra, India. The key findings reveal that higher levels of education, particularly a master's degree, and fields of study related to education, humanities, or social sciences are associated with greater human rights awareness among teachers. Additionally, both pre-service teacher training and in-service professional development programs focused on human rights education significantly enhance teachers' knowledge, skills, and competencies in promoting human rights principles in their classrooms. Baig Ameer Bee Mirza Abdul Aziz | Dr. Syed Azaz Ali Amjad Ali "The Impact of Educational Background and Professional Training on Human Rights Awareness among Secondary School Teachers" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd64529.pdf Paper Url: https://www.ijtsrd.com/humanities-and-the-arts/education/64529/the-impact-of-educational-background-and-professional-training-on-human-rights-awareness-among-secondary-school-teachers/baig-ameer-bee-mirza-abdul-aziz
A Study on the Effective Teaching Learning Process in English Curriculum at t...ijtsrd
"One language sets you in a corridor for life. Two languages open every door along the way." - Frank Smith. English as a foreign language or as a second language has been ruling in India since the period of Lord Macaulay. But the question is how much we teach or learn English properly in our culture. Is there any scope to use English as a language rather than a subject? How much do we learn or teach English without any interference of the mother language, especially in the classroom teaching-learning scenario in West Bengal? By considering all these issues the researcher has attempted in this article to focus on the effective teaching-learning process, compared to other traditional strategies, in the field of the English curriculum at the secondary level, to investigate whether they fulfill the present teaching-learning requirements or not by examining the validity of the present curriculum of English. The purpose of this study is to focus on the effectiveness of the systematic, scientific, sequential and logical transaction of the course between the teachers and the learners in the perspective of the 5Es programme, that is, engage, explore, explain, extend and evaluate. Sanchali Mondal | Santinath Sarkar "A Study on the Effective Teaching Learning Process in English Curriculum at the Secondary Level of West Bengal" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd62412.pdf Paper Url: https://www.ijtsrd.com/humanities-and-the-arts/education/62412/a-study-on-the-effective-teaching-learning-process-in-english-curriculum-at-the-secondary-level-of-west-bengal/sanchali-mondal
The Role of Mentoring and Its Influence on the Effectiveness of the Teaching ...ijtsrd
This paper reports on a study which was conducted to investigate the role of mentoring and its influence on the effectiveness of the teaching of Physics in secondary schools in the South West Region of Cameroon. The study adopted the convergent parallel mixed-methods design, focusing on respondents in secondary schools in the South West Region of Cameroon. Both quantitative and qualitative data were collected and analysed separately, and the results were compared to see if the findings confirm or disconfirm each other. The quantitative analysis found that the majority of the respondents (72% of Physics teachers) affirmed that they had more experienced colleagues as mentors to help build their confidence, improve their teaching, and help them improve their effectiveness and efficiency in guiding learners' achievements. Only 28% of the respondents disagreed with these statements. With the majority of respondents (72%) agreeing with the statements, it implies that in most secondary schools, experienced Physics teachers act as mentors to build teachers' confidence in teaching and improve students' learning. The qualitative interview data analysis summarized how secondary school principals use meetings with mentors and mentees to promote mentorship in the school milieu. This has helped strengthen teachers' classroom practices in secondary schools in the South West Region of Cameroon. With the results confirming each other, the study recommends that mentoring should focus on helping teachers employ social interactions and instructional practices (feedback and clarity in teaching) that have a direct measurable impact on students' learning achievements. Andrew Ngeim Sumba | Frederick Ebot Ashu | Peter Agborbechem Tambi "The Role of Mentoring and Its Influence on the Effectiveness of the Teaching of Physics in Secondary Schools in the South West Region of Cameroon" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd64524.pdf Paper Url: https://www.ijtsrd.com/management/management-development/64524/the-role-of-mentoring-and-its-influence-on-the-effectiveness-of-the-teaching-of-physics-in-secondary-schools-in-the-south-west-region-of-cameroon/andrew-ngeim-sumba
Design Simulation and Hardware Construction of an Arduino Microcontroller Bas...ijtsrd
This study primarily focuses on the design of a high side buck converter using an Arduino microcontroller. The converter is specifically intended for use in DC DC applications, particularly in standalone solar PV systems where the PV output voltage exceeds the load or battery voltage. To evaluate the performance of the converter, simulation experiments are conducted using Proteus Software. These simulations provide insights into the input and output voltages, currents, powers, and efficiency under different state of charge SoC conditions of a 12V,70Ah rechargeable lead acid battery. Additionally, the hardware design of the converter is implemented, and practical data is collected through operation, monitoring, and recording. By comparing the simulation results with the practical results, the efficiency and performance of the designed converter are assessed. The findings indicate that while the buck converter is suitable for practical use in standalone PV systems, its efficiency is compromised due to a lower output current. Chan Myae Aung | Dr. Ei Mon "Design Simulation and Hardware Construction of an Arduino-Microcontroller Based DC-DC High-Side Buck Converter for Standalone PV System" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd64518.pdf Paper Url: https://www.ijtsrd.com/engineering/mechanical-engineering/64518/design-simulation-and-hardware-construction-of-an-arduinomicrocontroller-based-dcdc-highside-buck-converter-for-standalone-pv-system/chan-myae-aung
Sustainable Energy by Paul A. Adekunte | Matthew N. O. Sadiku | Janet O. Sadikuijtsrd
Energy becomes sustainable if it meets the needs of the present without compromising the ability of future generations to meet their own needs. Some definitions of sustainable energy include considerations of environmental aspects such as greenhouse gas emissions, as well as social and economic aspects such as energy poverty. Renewable energy sources such as wind, hydroelectric power, solar, and geothermal are generally far more sustainable than fossil fuels. Worthy of note is that some renewable energy projects, like the clearing of forests to produce biofuels, can cause severe environmental damage. The sustainability of nuclear power, which is a low-carbon source, is highly debated because of concerns about radioactive waste, nuclear proliferation, and accidents. Switching from coal to natural gas has environmental benefits, including a lower climate impact, but could delay the switch to more sustainable options. "Carbon capture and storage" can be built into power plants to remove their carbon dioxide (CO2) emissions, but this technology is expensive and has rarely been implemented. The leading non-renewable energy sources around the world are fossil fuels: coal, petroleum, and natural gas. Nuclear energy is usually considered another non-renewable energy source; although nuclear energy itself is renewable, the material used in nuclear power plants is not. The paper addresses the issue of sustainable energy and its attendant benefits to future generations and humanity in general. Paul A. Adekunte | Matthew N. O. Sadiku | Janet O. Sadiku "Sustainable Energy" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd64534.pdf Paper Url: https://www.ijtsrd.com/engineering/electrical-engineering/64534/sustainable-energy/paul-a-adekunte
Concepts for Sudan Survey Act Implementations Executive Regulations and Stand...ijtsrd
This paper aims to outline the executive regulations, survey standards, and specifications required for the implementation of the Sudan Survey Act, and for regulating and organizing all surveying work activities in Sudan. The act has been discussed for more than 5 years. The Land Survey Act was initiated by the Sudan Survey Authority and all official legislations were headed by the Sudan Ministry of Justice till it was issued in 2022. The paper presents conceptual guidelines to be used for the Survey Act implementation and to regulate the survey work practice, standardizing the field surveys, processing, quality control, procedures, and the processes related to survey work carried out by the stakeholders and relevant authorities in Sudan. The conceptual guidelines are meant to improve the quality and harmonization of geospatial data and to aid decision making processes as well as geospatial information systems. The established comprehensive executive regulations will govern and regulate the implementation of the Sudan Survey Geomatics Act in all surveying and mapping practices undertaken by the Sudan Survey Authority SSA and state local survey departments for public or private sector organizations. The targeted standards and specifications include the reference frame, projection, coordinate systems, and the guidelines and specifications that must be followed in the field of survey work, processes, and mapping products. In the last few decades, there has been a growing awareness of the importance of geomatics activities and measurements on the Earths surface in space and time, together with observing and mapping the changes. In such cases, data must be captured promptly, standardized, and obtained with more accuracy and specified in much detail. The paper will also highlight the current situation in Sudan, the degree to which survey standards are used, the problems encountered, and the errors that arise from not using the standards and survey specifications. Kamal A. A. Sami "Concepts for Sudan Survey Act Implementations - Executive Regulations and Standards" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd63484.pdf Paper Url: https://www.ijtsrd.com/engineering/civil-engineering/63484/concepts-for-sudan-survey-act-implementations--executive-regulations-and-standards/kamal-a-a-sami
Towards the Implementation of the Sudan Interpolated Geoid Model Khartoum Sta...ijtsrd
The discussions between ellipsoid and geoid have invoked many researchers during the recent decades, especially during the GNSS technology era, which had witnessed a great deal of development but still geoid undulation requires more investigations. To figure out a solution for Sudans local geoid, this research has tried to intake the possibility of determining the geoid model by following two approaches, gravimetric and geometrical geoid model determination, by making use of GNSS leveling benchmarks at Khartoum state. The Benchmarks are well distributed in the study area, in which, the horizontal coordinates and the height above the ellipsoid have been observed by GNSS while orthometric heights were carried out using precise leveling. The Global Geopotential Model GGM represented in EGM2008 has been exploited to figure out the geoid undulation at the benchmarks in the study area. This is followed by a fitting process, that has been done to suit the geoid undulation data which has been computed using GNSS leveling data and geoid undulation inspired by the EGM2008. Two geoid surfaces were created after the fitting process to ensure that they are identical and both of them could be counted for getting the same geoid undulation with an acceptable accuracy. In this respect, statistical operation played an important role in ensuring the consistency and integrity of the model by applying cross validation techniques splitting the data into training and testing datasets for building the geoid model and testing its eligibility. The geometrical solution for geoid undulation computation has been utilized by applying straightforward equations that facilitate the calculation of the geoid undulation directly through applying statistical techniques for the GNSS leveling data of the study area to get the common equation parameters values that could be utilized to calculate geoid undulation of any position in the study area within the claimed accuracy. Both systems were checked and proved eligible to be used within the study area with acceptable accuracy which may contribute to solving the geoid undulation problem in the Khartoum area, and be further generalized to determine the geoid model over the entire country, and this could be considered in the future, for regional and continental geoid model. Ahmed M. A. Mohammed. | Kamal A. A. Sami "Towards the Implementation of the Sudan Interpolated Geoid Model (Khartoum State Case Study)" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd63483.pdf Paper Url: https://www.ijtsrd.com/engineering/civil-engineering/63483/towards-the-implementation-of-the-sudan-interpolated-geoid-model-khartoum-state-case-study/ahmed-m-a-mohammed
Activating Geospatial Information for Sudans Sustainable Investment Mapijtsrd
Sudan is witnessing an acceleration in the processes of development and transformation in the performance of government institutions to raise the productivity and investment efficiency of the government sector. The development plans and investment opportunities have focused on achieving national goals in various sectors. This paper aims to illuminate the path to the future and provide geospatial data and information to develop the investment climate and environment for all sized businesses, and to bridge the development gap between the Sudan states. The Sudan Survey Authority SSA is the main advisor to the Sudan Government in conducting surveying, mappings, designing, and developing systems related to geospatial data and information. In recent years, SSA made a strategic partnership with the Ministry of Investment to activate Geospatial Information for Sudans Sustainable Investment and in particular, for the preparation and implementation of the Sudan investment map, based on the directives and objectives of the Ministry of Investment MI in Sudan. This paper comes within the framework of activating the efforts of the Ministry of Investment to develop technical investment services by applying techniques adopted by the Ministry and its strategic partners for advancing investment processes in the country. Kamal A. A. Sami "Activating Geospatial Information for Sudan's Sustainable Investment Map" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd63482.pdf Paper Url: https://www.ijtsrd.com/engineering/information-technology/63482/activating-geospatial-information-for-sudans-sustainable-investment-map/kamal-a-a-sami
Educational Unity Embracing Diversity for a Stronger Societyijtsrd
In a rapidly changing global landscape, the importance of education as a unifying force cannot be overstated. This paper explores the crucial role of educational unity in fostering a stronger and more inclusive society through the embrace of diversity. By examining the benefits of diverse learning environments, the paper aims to highlight the positive impact on societal strength. The discussion encompasses various dimensions, from curriculum design to classroom dynamics, and emphasizes the need for educational institutions to become catalysts for unity in diversity. It highlights the need for a paradigm shift in educational policies, curricula, and pedagogical approaches to ensure that they are reflective of the diverse fabric of society. This paper also addresses the challenges associated with implementing inclusive educational practices and offers practical strategies for overcoming barriers. It advocates for collaborative efforts between educational institutions, policymakers, and communities to create a supportive ecosystem that promotes diversity and unity. Mr. Amit Adhikari | Madhumita Teli | Gopal Adhikari "Educational Unity: Embracing Diversity for a Stronger Society" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd64525.pdf Paper Url: https://www.ijtsrd.com/humanities-and-the-arts/education/64525/educational-unity-embracing-diversity-for-a-stronger-society/mr-amit-adhikari
Integration of Indian Indigenous Knowledge System in Management Prospects and...ijtsrd
The diversity of indigenous knowledge systems in India is vast and can vary significantly between different communities and regions. Preserving and respecting these knowledge systems is crucial for maintaining cultural heritage, promoting sustainable practices, and fostering cross-cultural understanding. In this paper, an overview of the prospects and challenges associated with incorporating Indian indigenous knowledge into management is explored. It is found that IIKS helps management in many areas like sustainable development, tourism, food security, natural resource management, cultural preservation and innovation, etc. However, IIKS integration with management faces some challenges in the form of a lack of documentation, cultural sensitivity, language barriers, legal framework, etc. Savita Lathwal "Integration of Indian Indigenous Knowledge System in Management: Prospects and Challenges" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd63500.pdf Paper Url: https://www.ijtsrd.com/management/accounting-and-finance/63500/integration-of-indian-indigenous-knowledge-system-in-management-prospects-and-challenges/savita-lathwal
DeepMask Transforming Face Mask Identification for Better Pandemic Control in...ijtsrd
The COVID-19 pandemic has highlighted the crucial need for preventive measures, with widespread use of face masks being a key method for slowing the virus's spread. This research investigates face mask identification using deep learning as a technological solution for reducing the risk of coronavirus transmission. The proposed method uses state-of-the-art convolutional neural networks (CNNs) and transfer learning to automatically recognize persons who are not wearing masks in a variety of circumstances. We discuss how this strategy improves public health and safety by providing an efficient means of enforcing mask-wearing standards. The report also discusses the obstacles, ethical concerns, and prospective applications of face mask detection systems in the ongoing fight against the pandemic. Dilip Kumar Sharma | Aaditya Yadav "DeepMask: Transforming Face Mask Identification for Better Pandemic Control in the COVID-19 Era" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd64522.pdf Paper Url: https://www.ijtsrd.com/engineering/electronics-and-communication-engineering/64522/deepmask-transforming-face-mask-identification-for-better-pandemic-control-in-the-covid19-era/dilip-kumar-sharma
Streamlining Data Collection eCRF Design and Machine Learningijtsrd
Efficient and accurate data collection is paramount in clinical trials, and the design of Electronic Case Report Forms eCRFs plays a pivotal role in streamlining this process. This paper explores the integration of machine learning techniques in the design and implementation of eCRFs to enhance data collection efficiency. We delve into the synergies between eCRF design principles and machine learning algorithms, aiming to optimize data quality, reduce errors, and expedite the overall data collection process. The application of machine learning in eCRF design brings forth innovative approaches to data validation, anomaly detection, and real time adaptability. This paper discusses the benefits, challenges, and future prospects of leveraging machine learning in eCRF design for streamlined and advanced data collection in clinical trials. Dhanalakshmi D | Vijaya Lakshmi Kannareddy "Streamlining Data Collection: eCRF Design and Machine Learning" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-8 | Issue-1 , February 2024, URL: https://www.ijtsrd.com/papers/ijtsrd63515.pdf Paper Url: https://www.ijtsrd.com/biological-science/biotechnology/63515/streamlining-data-collection-ecrf-design-and-machine-learning/dhanalakshmi-d
THE SACRIFICE HOW PRO-PALESTINE PROTESTS STUDENTS ARE SACRIFICING TO CHANGE T...indexPub
The recent surge in pro-Palestine student activism has prompted significant responses from universities, ranging from negotiations and divestment commitments to increased transparency about investments in companies supporting the war on Gaza. This activism has led to the cessation of student encampments but also highlighted the substantial sacrifices made by students, including academic disruptions and personal risks. The primary drivers of these protests are poor university administration, lack of transparency, and inadequate communication between officials and students. This study examines the profound emotional, psychological, and professional impacts on students engaged in pro-Palestine protests, focusing on Generation Z's (Gen-Z) activism dynamics. This paper explores the significant sacrifices made by these students and even the professors supporting the pro-Palestine movement, with a focus on recent global movements. Through an in-depth analysis of printed and electronic media, the study examines the impacts of these sacrifices on the academic and personal lives of those involved. The paper highlights examples from various universities, demonstrating student activism's long-term and short-term effects, including disciplinary actions, social backlash, and career implications. The researchers also explore the broader implications of student sacrifices. The findings reveal that these sacrifices are driven by a profound commitment to justice and human rights, and are influenced by the increasing availability of information, peer interactions, and personal convictions. The study also discusses the broader implications of this activism, comparing it to historical precedents and assessing its potential to influence policy and public opinion. The emotional and psychological toll on student activists is significant, but their sense of purpose and community support mitigates some of these challenges. However, the researchers call for acknowledging the broader Impact of these sacrifices on the future global movement of FreePalestine.
Andreas Schleicher presents PISA 2022 Volume III - Creative Thinking - 18 Jun...EduSkills OECD
Andreas Schleicher, Director of Education and Skills at the OECD presents at the launch of PISA 2022 Volume III - Creative Minds, Creative Schools on 18 June 2024.
Philippine Edukasyong Pantahanan at Pangkabuhayan (EPP) CurriculumMJDuyan
(TLE 100) (Lesson 1) - Prelims
Discuss the EPP Curriculum in the Philippines:
- Understand the goals and objectives of the Edukasyong Pantahanan at Pangkabuhayan (EPP) curriculum, recognizing its importance in fostering practical life skills and values among students. Students will also be able to identify the key components and subjects covered, such as agriculture, home economics, industrial arts, and information and communication technology.
Explain the Nature and Scope of an Entrepreneur:
-Define entrepreneurship, distinguishing it from general business activities by emphasizing its focus on innovation, risk-taking, and value creation. Students will describe the characteristics and traits of successful entrepreneurs, including their roles and responsibilities, and discuss the broader economic and social impacts of entrepreneurial activities on both local and global scales.
I put in your hands one of the strongest study guides I have designed:
Skeletal System Anatomy guide (Theory 3)
This guide has several advantages:
1- Translated in a way that suits all levels
2- Contains 78 illustrations, one for every term in the guide (every single term!)
#فهم_ماكو_درخ
3- Very high quality writing and images
4- Some information that students usually consider obscure has been explained in great detail
5- The guide explains itself; it practically invites you to come and read it
6- The first slide of the guide contains a map of all the branches of skeletal-system information covered in it
Finally, this guide is freely yours; I only ask that you wish me well, health, and wellness.
Best of luck, colleagues. Your colleague, Mohammed Al-Dhahabi
This presentation was provided by Rebecca Benner, Ph.D., of the American Society of Anesthesiologists, for the second session of NISO's 2024 Training Series "DEIA in the Scholarly Landscape." Session Two: 'Expanding Pathways to Publishing Careers,' was held June 13, 2024.
Elevate Your Nonprofit's Online Presence: A Guide to Effective SEO Strategies...TechSoup
Whether you're new to SEO or looking to refine your existing strategies, this webinar will provide you with actionable insights and practical tips to elevate your nonprofit's online presence.
How Barcodes Can Be Leveraged Within Odoo 17Celine George
In this presentation, we will explore how barcodes can be leveraged within Odoo 17 to streamline our manufacturing processes. We will cover the configuration steps, how to utilize barcodes in different manufacturing scenarios, and the overall benefits of implementing this technology.
This document provides an overview of wound healing, its functions, stages, mechanisms, factors affecting it, and complications.
A wound is a break in the integrity of the skin or tissues, which may be associated with disruption of the structure and function.
Healing is the body’s response to injury in an attempt to restore normal structure and functions.
Healing can occur in two ways: regeneration and repair.
There are 4 phases of wound healing: hemostasis, inflammation, proliferation, and remodeling. This document also describes the mechanism of wound healing. Factors that affect healing include infection, uncontrolled diabetes, poor nutrition, age, anemia, the presence of foreign bodies, etc.
Complications of wound healing include infection, hyperpigmentation of the scar, contractures, and keloid formation.
of some shifted image blocks. This DCT-domain method provides a transform-domain solution that reduces the effects of blocking artifacts; however, the loss of edge information caused by its zero-masking scheme is noticeable. Luo and Ward [8] and Singh et al. [9] proposed a new approach that preserves the edge information. These methods are based on reducing the blocking artifacts in the smooth regions of the image: the correlation between the intensity values of the boundary pixels of two neighboring blocks in the DCT domain is used to distinguish between smooth and non-smooth regions.
A recent artifact removal scheme was proposed by Kim and Sim [11], based on adaptive adjustment of weights according to the directional correlation and activity of local areas. Their approach to spatial smoothing applies the Signal Adaptive Weighted Sum (SAWS) technique to the current pixel and the boundary pixels. Moreover, a Strength Parameter (SP) is applied to the weights to avoid excessive blurring in detailed areas. The deblocking process is weak in detailed areas but strong in smooth areas.
The rest of the paper proceeds as follows: Section II describes the general model of blocking artifacts; Section III discusses the basics of fuzzy logic; Section IV presents the Gaussian membership based fuzzy classification of blocks; Section V presents the proposed gFAR method of fuzzy blocking artifact removal; Section VI reports simulation results and discussion; and Section VII concludes.
II. GENERAL MODEL OF BLOCKING ARTIFACTS AND ITS IMPLEMENTATION [10]
Low bit-rate coding quantizes most of the DCT coefficients to zero, and the resulting blocking effects impact the image mainly in the horizontal and vertical directions. Blocking artifacts are therefore formulated as follows. Suppose i_1 and i_2 are the values of two pixels adjacent to each other in the same row or column but lying in different blocks. It is assumed that the blockiness of the compressed image is related to the fact that before compression the values of i_1 and i_2 were usually similar, whereas quantization makes them more different. Hence the edge variance E is defined as the sum of the squared differences over all such pixel pairs:

E = \sum (i_1 - i_2)^2    (1)

The block edge variance E is a measure of image blockiness.
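To make equation (1) concrete, here is a minimal sketch (ours, not code from the paper) that accumulates E over all pixel pairs straddling 8x8 block boundaries; the function name and the NumPy formulation are assumptions:

import numpy as np

def block_edge_variance(img, block=8):
    """Sum of squared differences across vertical and horizontal
    block boundaries, i.e. E of equation (1)."""
    img = img.astype(np.float64)
    e = 0.0
    # vertical block boundaries: pixel pairs (i1, i2) in the same row
    for c in range(block, img.shape[1], block):
        d = img[:, c] - img[:, c - 1]
        e += np.sum(d ** 2)
    # horizontal block boundaries: pixel pairs in the same column
    for r in range(block, img.shape[0], block):
        d = img[r, :] - img[r - 1, :]
        e += np.sum(d ** 2)
    return e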
The desired value of the edge variance is estimated by computing the same measure for the pixel pairs just inside the block edge on either side and averaging the two. If this estimate is less than the edge variance, an attempt is made to reduce the edge variance; adjusting it in this way alters only the edge pixels. The reduction, performed in the direction of the gradient of the edge variance, may not be achievable completely if the minimum reduction in this direction exceeds the next-to-edge variance. The problem is thus reduced at the block boundary, but a new problem is created inside the blocks; this is then mitigated by monitoring a measure of image roughness within the blocks.
For natural images, each block is modeled as a constant block distorted by i.i.d. (independent, identically distributed) noise, i.e., white noise with zero mean and a small variance [7]. Consider two adjacent (either vertically or horizontally) 8x8 blocks x_1, x_2 with average values a and b, respectively. These two blocks can be modeled as

x_1 = a + \varepsilon_{i,j},  x_2 = b + \delta_{i,j}    (2)

where \varepsilon_{i,j} and \delta_{i,j} are modeled as i.i.d. zero-mean white noise. Hence an overlap block can be composed of half of x_1 and half of x_2 to constitute a new 8x8 block x, as shown in Fig. 1.

Fig. 1: Structure of a new block x, consisting of x_1 and x_2 (either left and right, or top and bottom)
Then a 2-D step function st is defined in the block x as:

st = +1/2 on the left (or bottom) half of block x    (3)
st = -1/2 on the right (or top) half of block x    (4)

The new block may then be modeled as:

x = S \cdot st + B + \mu    (5)

where |S| represents the amplitude of the 2-D step function st, B is the average value of the block x, representing the local background brightness, and \mu is zero-mean white noise, representing the local activity around the block edge. The larger the value of |S|, the more serious the blocking artifacts, provided the background brightness and local activity remain unchanged. Once the value of |S| is estimated, the blocking artifact between the two blocks is measured.
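As an illustration of the model in equations (2)-(5), a small sketch (our reading, not code from the paper) that composes the overlap block from two horizontally adjacent 8x8 blocks and estimates |S| as the difference of the half means, which is what the +/- 1/2 step implies; the function name is an assumption:

import numpy as np

def step_amplitude(x1, x2):
    """Estimate |S| of equation (5) for two horizontally adjacent
    8x8 blocks x1 (left) and x2 (right): the overlap block x takes the
    right half of x1 and the left half of x2, so with st = +1/2 on its
    left half and -1/2 on its right half, S is the difference of the
    two half means."""
    x = np.hstack([x1[:, 4:], x2[:, :4]]).astype(np.float64)
    s = x[:, :4].mean() - x[:, 4:].mean()
    return abs(s)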
III. UNDERSTANDING BASICS OF FUZZY LOGIC
The concept of fuzzy logic deals with partial truth or partial falsehood, whereas binary logic deals only with true or false. A fuzzy set is a class of objects characterized by a membership function which assigns each object a grade of membership between 0 and 1 [13]. Instead of elements having no or complete membership in a set, a fuzzy set allows elements to have partial membership. Consider an example: a set of bright pixels in a gray-scale image is to be classified with respect to dull pixels. When this set is defined in binary logic, values greater than 150 are classified as bright pixels and values less than 150 as dull pixels. However, it is not reasonable to consider a value of 150 a bright pixel and a value of 149 a dull pixel. In fuzzy logic, by contrast, values less than 50 are dull, values greater than 150 are bright, and values between 50 and 150 have partial membership in the bright set and partial membership in the dull set.
Fig. 2: Membership functions for set Bright Pixels
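The bright/dull example translates into a simple piecewise membership function; a sketch using the 50 and 150 breakpoints from the text, with the linear ramp in between being an assumption about the shape in Fig. 2:

def bright_membership(v):
    """Fuzzy membership of a gray value v in the set 'bright':
    0 below 50, 1 above 150, linear in between."""
    if v <= 50:
        return 0.0
    if v >= 150:
        return 1.0
    return (v - 50) / 100.0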
IV. GAUSSIAN MEMBERSHIP-BASED FUZZY CLASSIFICATION OF BLOCKS [10]
A gray-tone image X of dimension R x C is considered. A histogram-based fuzzy membership function is defined to map the pixels of each spatial-domain image sub-matrix, called a block, to the fuzzy-domain image block:

G(x_{ij}) = \exp( -(B_{max} - x_{ij})^2 / (2 \sigma_f^2) )    (6)

where G(x_{ij}) is a Gaussian function, B_{max} and x_{ij} are respectively the maximum and the (i, j)-th gray values in a block, and \sigma_f is the fuzzifier, which defines the width of the Gaussian function. The fuzzifier \sigma_f can be defined as:

\sigma_f = [ \sum_{k=0}^{L-1} (B_{max} - k)^4 p(k) / ( 2 \sum_{k=0}^{L-1} (B_{max} - k)^2 p(k) ) ]^{1/2}    (7)

where k represents a gray level in the range [0, L-1].
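A sketch of equations (6) and (7) for one block, computing the fuzzifier from the block's normalized histogram and then the Gaussian membership of every pixel. Since (7) is reconstructed above from a garbled original, this code should be read as our best interpretation rather than a definitive implementation:

import numpy as np

def gaussian_memberships(block, L=256):
    """Gaussian fuzzy memberships G(x_ij) of eq. (6), with the
    fuzzifier sigma_f of eq. (7) computed from the block histogram."""
    block = block.astype(np.float64)
    b_max = block.max()
    k = np.arange(L)
    p, _ = np.histogram(block, bins=L, range=(0, L))
    p = p / p.sum()                       # normalized histogram, eq. (9)
    num = np.sum((b_max - k) ** 4 * p)
    den = 2.0 * np.sum((b_max - k) ** 2 * p)
    sigma_f = np.sqrt(num / den) if den > 0 else 1.0
    return np.exp(-((b_max - block) ** 2) / (2.0 * sigma_f ** 2))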
Here a fuzzy histogram is obtained as the frequency of occurrence of the membership values of the gray levels in the fuzzy image. Thus,

X = \{ \mu(x_{i,j}), p(x_{i,j}) \};  i = 1, 2, 3, ..., R;  j = 1, 2, 3, ..., C    (8)

where \mu(x) is the membership of a pixel with intensity value x, and p(x) is the number of occurrences of the intensity value x in the image X. The distribution p(x) is normalized such that

\sum_{x=0}^{L-1} p(x) = 1    (9)

The term p(x) represents the frequency of occurrence of x in the histogram of X.
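Equations (8) and (9) amount to pairing each gray level's membership with its normalized frequency; a minimal sketch, where the membership argument is any function of a gray level, such as the Gaussian of eq. (6), and the function name is ours:

import numpy as np

def fuzzy_histogram(img, membership, L=256):
    """Return the pairs (mu(x), p(x)) of eq. (8), with p normalized
    so that sum_x p(x) = 1 as in eq. (9)."""
    p, _ = np.histogram(img, bins=L, range=(0, L))
    p = p / p.sum()
    mu = [membership(x) for x in range(L)]
    return list(zip(mu, p))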
In this fuzzy-plane view, a contrast-enhanced image block is said to have low perception (dark) for \mu in [0, 0.5] or high perception (bright) for \mu in [0.5, 1]. Pixels near \mu = 0.5 therefore have the highest ambiguity and belong to neither perception class, so they can be treated as pixels describing the fuzzy boundary. Thus a block whose \mu values lie near 0.5 is considered to contain edges; in this work, the range 0.45-0.55 is taken to require correction.
V. PROPOSED ALGORITHM
The general model of blockiness and the Gaussian membership values (described previously) are used to judge the degree of edginess present in a block. The membership values are then used for restoration on a pixel-by-pixel basis. Fuzzy filtering approaches use local information of the image; these fuzzy filters have a good edge-preserving property and apply filtering to the corrupted blocks and their neighbors [14].
The proposed Gaussian Fuzzy Artifacts Removal (gFAR) algorithm is:
1. Take a highly compressed DCT image containing artifacts and obtain overlapped N x N blocks from it.
2. Feed each N x N block to a fuzzy detector to decide the edge intensity of the block based on the Gaussian membership function.
3. The membership degree of each pixel in the block is computed as:

G(x_{ij}) = \exp( -(B_{max} - x_{ij})^2 / (2 \sigma_f^2) )    (10)

This Gaussian membership function uses the maximum value of the block (B_{max}) as its mean, and the variance \sigma_f varies with the block intensity values.
Rule: In general, if most of the pixel memberships take values around 0.5, the block is considered an edge block and needs to be corrected.
4. The restoration of the central pixel of these selected edge blocks is performed on the basis of its neighbors' intensities. First, the maximum absolute intensity difference between the central pixel and its neighbors is computed as:

D_{max} = \max |I(i+k, j+l) - I(i, j)|,  such that (i+k, j+l) lies in the block    (11)

This maximum absolute intensity difference of the central pixel about its neighbors, D_{max}, is used to compute the fuzzy membership \mu of the central pixel as:

\mu = 0,  for D_{max} < THR_A
\mu = (D_{max} - THR_A) / (THR_B - THR_A),  for THR_A <= D_{max} <= THR_B    (12)
\mu = 1,  for D_{max} > THR_B

where THR_A and THR_B are two predefined thresholds which have values around the mean of the complete image and need to be preset.
5. Finally the restoration of central pixel of the block is
given as:
\tilde{I}(i,j) = (1 - \mu) \cdot I(i,j) + \mu \cdot D_{max}   (13)
This process is repeated for each overlapped block, and the central pixel is corrected as needed.
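Putting steps 1-5 together, a hedged end-to-end sketch (reusing the helpers above) could look like the following. The block size n = 3 and the threshold defaults around the image mean are illustrative assumptions, and Eq. (13) is applied in the reconstructed form given above.

```python
def dmax_and_mu(block, thr_a, thr_b):
    # Eq. (11): largest absolute difference between the central pixel and
    # its neighbors; Eq. (12): ramp membership between THR_A and THR_B.
    c = float(block[block.shape[0] // 2, block.shape[1] // 2])
    d_max = np.abs(block.astype(float) - c).max()
    if d_max < thr_a:
        mu = 0.0
    elif d_max > thr_b:
        mu = 1.0
    else:
        mu = (d_max - thr_a) / (thr_b - thr_a)
    return d_max, mu

def gfar(image, n=3, thr_a=None, thr_b=None):
    # Steps 1-5 of the proposed algorithm, under the stated assumptions.
    img = image.astype(float)
    out = img.copy()
    mean = img.mean()
    thr_a = 0.5 * mean if thr_a is None else thr_a   # assumed default
    thr_b = 1.5 * mean if thr_b is None else thr_b   # assumed default
    h = n // 2
    for i in range(h, img.shape[0] - h):
        for j in range(h, img.shape[1] - h):
            block = img[i - h:i + h + 1, j - h:j + h + 1]
            sigma_f = block_fuzzifier(block)
            mu_ij = gaussian_membership(block, sigma_f)
            if not is_edge_block(mu_ij):
                continue                              # pixel left unchanged
            d_max, mu = dmax_and_mu(block, thr_a, thr_b)
            out[i, j] = (1.0 - mu) * img[i, j] + mu * d_max   # Eq. (13)
    return out.clip(0, 255).astype(np.uint8)
```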
The block diagram of the proposed gFAR algorithm is shown in Fig. 3.
Fig 3: Proposed gFAR Algorithm.
VI. SIMULATION RESULTS AND DISCUSSION
The test images are first compressed at low bit rates using the baseline JPEG standard so that blocking artifacts are generated. To demonstrate the performance of the proposed algorithm, experimental results are presented subjectively and objectively, as well as in terms of computational load. Experiments were conducted on images such as Lena, Peppers and Goldhill of size 256 x 256 with 256 gray levels.
The objective metrics used for comparing the original image I and the restored image Ĩ are the PSNR (Peak Signal-to-Noise Ratio) and the MSE (Mean Square Error). In general, the larger the PSNR (dB) value, the better the reconstructed image quality. The formulae for the computation of MSE and PSNR are:
MSE = \frac{1}{MN} \sum_{m=1}^{M} \sum_{n=1}^{N} \left[ pix_{org}(m,n) - pix_{decomp}(m,n) \right]^2   (14)

PSNR = 10 \log_{10} \left( \frac{255^2}{MSE} \right)   (15)
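For completeness, Eqs. (14)-(15) translate directly to code for 8-bit images:

```python
def mse_psnr(orig, decomp):
    # Eq. (14): mean squared error; Eq. (15): PSNR for a peak value of 255.
    err = orig.astype(float) - decomp.astype(float)
    mse = np.mean(err ** 2)
    psnr = 10.0 * np.log10(255.0 ** 2 / mse)
    return mse, psnr
```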
[Fig. 3 flow: N x N overlapped image blocks → for each block, obtain the Gaussian fuzzy memberships μ_{i,j} of each pixel to decide whether the central pixel of the block needs correction. If NO, keep the central pixel value of the block. If YES, obtain the maximum intensity difference D_max of the central pixel around its neighbors, use D_max to obtain the fuzzy membership of the central pixel (ramping from 0 at THR_A to 1 at THR_B), and obtain the restored pixel value Ĩ(i, j).]
(a) Original image (b) JPEG compressed at Q=1 (c) JPEG compressed at Q=3
(d) gFAR at Q=3 (e) SAWS at Q=2.5 (f) gFAR at Q=2.5
(g) SAWS at Q=3 (h) JPEG compressed at Q=2 (i) gFAR at Q=2
TABLE I. MSE (MEAN SQUARE ERROR) AND PSNR (PEAK SIGNAL-TO-NOISE RATIO) COMPARISON FOR Q = 2

256 x 256  |       JPEG        | SAWS [11] with SP |  gFAR (proposed)
           | PSNR (dB)   MSE   | PSNR (dB)   MSE   | PSNR (dB)   MSE
Lena       |  25.11    200.35  |  31.24     48.87  |  42.96     31.80
Goldhill   |  24.26    243.67  |  28.65     87.64  |  28.77     86.36
Peppers    |  25.22    195.47  |  31.36     47.55  |  32.22     39.04
TABLE II. MSE (MEAN SQUARE ERROR) AND PSNR (PEAK SIGNAL-TO-NOISE RATIO) COMPARISON FOR Q = 2.5

256 x 256  |       JPEG        | SAWS [11] with SP |  gFAR (proposed)
           | PSNR (dB)   MSE   | PSNR (dB)   MSE   | PSNR (dB)   MSE
Lena       |  26.11    159.24  |  30.08     56.07  |  31.05     51.07
Goldhill   |  25.45    185.60  |  28.17     99.06  |  28.19     98.61
Peppers    |  24.39    236.09  |  30.68     55.62  |  31.36     47.49
TABLE III. MSE (MEAN SQUARE ERROR) AND PSNR (PEAK SIGNAL-TO-NOISE RATIO) COMPARISON FOR Q = 3

256 x 256  |       JPEG        | SAWS [11] with SP |  gFAR (proposed)
           | PSNR (dB)   MSE   | PSNR (dB)   MSE   | PSNR (dB)   MSE
Lena       |  28.50     91.85  |  30.11     63.32  |  30.43     58.78
Goldhill   |  27.93    104.73  |  27.74    109.48  |  27.73    109.71
Peppers    |  28.92     83.58  |  30.14     62.95  |  30.66     55.85
From the results it is clear that the recent SAWS-based adaptive weight adjustment method introduces blurring in the resulting image, whereas gFAR provides good results over the existing methods.
VII. CONCLUSION
The gFAR algorithm for artifact removal from DCT-compressed images has been introduced in this paper. A Gaussian membership value based decision is used to determine whether the central pixel of an overlapped block needs correction; if so, the pixel is corrected using fuzzified values of the maximum gradient and the original value. The results show that this fuzzy-logic-based artifact removal method, gFAR, achieves effective performance in terms of PSNR and MSE as compared to several existing techniques to date.
REFERENCES
[1] M. Miyahara, K. Kotani, “Block Distortion in
Orthogonal Transform Coding: Analysis,
Minimization, and Distortion measure,” IEEE Trans.
Communication, COM-33 (1), 1985, pp 90-96.
[2] F. X. Coudoux, M. Gazalet, M. Gharbi, J. C. Kastelik and P. Corlay, "Image Quality Evaluation based on the Analysis of the Block-Shaped Distortion," Proc. IEEE 8th Workshop on IMDSP, Cannes, 8-10 September 1993, pp. 50-51.
[3] Y.-F. Hsu and Y.-C. Chen, "A New Adaptive Separable Median Filter for Removing Blocking Effects," IEEE Transactions on Consumer Electronics, 1993, pp. 510-513.
[4] A. Zakhor, “Iterative Procedures for Reduction of
Blocking Effects in Transform Image Coding,” IEEE
Trans. Circuit Syst. Video Technol. 2 1992, pp. 91–95.
[5] Y. Yang, N. P. Galatsanos, “Removal of Compression
Artifacts using Projections onto Convex Sets and Line
Process Modeling,” IEEE Trans. Image Process. 6,
1997, pp. 1345–1357.
[6] M. Hanmandlu, J. See, S. Vasikarla, "Fuzzy Edge Detector Using Entropy Optimization," Proceedings of the International Conference on Information Technology: Coding and Computing, 2004.
[7] B. Zeng, “Reduction of blocking effect in DCT-coded
images using zero-masking techniques,” Signal
Processing 79, 1999, pp. 205–211.
[8] Y. Luo, R. K. Ward, “Removing the Blocking Artifacts
of Block-Based DCT Compressed Images,” IEEE
Trans. Image Processing, 12, 2003, pp. 838–842.
[9] S. Singh, V. Kumar, H. K. Verma, “Reduction of
blocking artifacts in JPEG Compressed Images,”
Digital Signal Processing, 17, 2007, pp.225–243.
[10] A. C. Bovik and S. Liu, "DCT-Domain Blind Measurement of Blocking Artifacts in DCT-Coded Images," Laboratory for Image and Video Engineering, Dept. of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX, USA.
[11] J. Kim and Chun-Bo Sim, “Compression Artifacts
Removal by Signal Adaptive Weighted Sum
Technique,” IEEE Transactions on Consumer
Electronics, Vol. 57, No. 4, November 2011, pp 1944-
1951.
[12] H. Paek, R. C. Kim and S. U. Lee, "On the POCS-based Post Processing Technique to Reduce the Blocking Artifacts in Transform Coded Images," IEEE Trans. Circuits Syst. Video Technol., Vol. 8, 1998, pp. 358-367.
[13] L. A. Zadeh, "The Role of Fuzzy Logic in Modeling, Identification and Control," Modeling, Identification and Control, Vol. 15, No. 3, 1994, pp. 191-203.
[14] H.-S. Kong, "Coding Artifact Reduction Using Edge Map Guided Adaptive and Fuzzy Filter," Mitsubishi Electric Research Laboratories, 2004.
[15] C. Weerasinghe, A. W.-C. Liew and H. Yan, "Artifact Reduction in Compressed Images Based on Region Homogeneity Constraints Using the Projection onto Convex Sets Algorithm," IEEE Transactions on Circuits and Systems for Video Technology, Vol. 12, No. 10, 2002, pp. 891-897.