A new approach based on a deep neural network and an ensemble of particular classifiers is proposed. The approach relies on a novel fuzzy-generalization block that combines classes of objects into semantic groups, each of which corresponds to one or more particular classifiers. As a result of processing, a sequence of frames is converted into an annotation of the event occurring in the video over a certain time interval.
A new block cipher for image encryption based on multi-chaotic systems - TELKOMNIKA JOURNAL
In this paper, a new algorithm for image encryption is proposed based on three chaotic systems: the Chen system, the logistic map and the two-dimensional (2D) Arnold cat map. First, a permutation scheme is applied to the image, and the shuffled image is then partitioned into blocks of pixels. For each block, the Chen system is employed for confusion, and the logistic map is employed to generate a substitution box (S-box) that substitutes the image blocks. The S-box is dynamic: it is shuffled for each image block using a permutation operation. Then, the 2D Arnold cat map is used to provide diffusion, after which the result is XORed with a Chen-system sequence to obtain the encrypted image. The security of the proposed algorithm is evaluated using histogram, unified average changing intensity (UACI), number of pixels change rate (NPCR), entropy, correlation and key-space analyses.
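The permutation stage described above can be illustrated with a generic 2D Arnold cat map; this is a minimal sketch (the function name and parameters are ours, not the paper's exact scheme):

```python
import numpy as np

def arnold_cat_map(img, iterations=1):
    """Permute a square image with the 2D Arnold cat map:
    (x, y) -> (x + y mod N, x + 2y mod N). The map is a bijection
    (the Arnold matrix [[1, 1], [1, 2]] has determinant 1)."""
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "cat map needs a square image"
    out = img
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = nxt
    return out
```

Because the map is a bijection, pixel values are only rearranged, never lost, so the permutation is invertible by the receiver.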
Image Segmentation Using Two Weighted Variable Fuzzy K-Means - Editor IJCATR
Image segmentation is the first step in image analysis and pattern recognition. It is the process of dividing an image into regions such that each region is homogeneous. Accurate and effective image-segmentation algorithms are useful in many fields, especially medical imaging. This paper presents a new approach to image segmentation that applies the k-means algorithm with two-level variable weighting. Clustering algorithms are popular in image segmentation because they are intuitive and easy to implement. The k-means and fuzzy k-means clustering algorithms are among the most widely used in the literature, and many authors compare their new proposals against results achieved by k-means and fuzzy k-means. This paper proposes a new clustering algorithm called TW-fuzzy k-means, an automated two-level variable-weighting clustering algorithm for segmenting objects. In this algorithm, a variable weight is assigned to each variable on the current partition of the data. The method can be applied to general images and to specific images (e.g., medical and microscopic images). Based on the results obtained, the proposed TW-fuzzy k-means algorithm gives better segmentation performance and visual quality for various types of images compared with several other clustering methods.
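The core idea of weighting variables inside fuzzy k-means can be sketched as follows. This is a simplified stand-in that uses a fixed per-variable weight vector rather than the paper's adaptive two-level weighting; all names and defaults are ours:

```python
import numpy as np

def weighted_fuzzy_kmeans(X, k, w, m=2.0, iters=50, seed=0):
    """Fuzzy k-means with a per-variable weight vector w.
    X: (n, d) data; w: (d,) non-negative feature weights;
    m: fuzzifier (> 1). Returns memberships U and centroids C."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    U = rng.random((n, k))
    U /= U.sum(axis=1, keepdims=True)            # fuzzy memberships
    for _ in range(iters):
        Um = U ** m
        C = (Um.T @ X) / Um.sum(axis=0)[:, None]  # membership-weighted centroids
        # weighted squared distance of every point to every centroid
        D = np.array([((X - c) ** 2 * w).sum(axis=1) for c in C]).T + 1e-12
        U = 1.0 / (D ** (1.0 / (m - 1)))          # standard FCM update
        U /= U.sum(axis=1, keepdims=True)
    return U, C
```

Raising a variable's weight makes distances along that variable dominate the clustering, which is the mechanism the two-level weighting scheme exploits.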
Fixed-Point Code Synthesis for Neural Networks - gerogepatton
Over the last few years, neural networks have started penetrating safety-critical systems to take decisions in robots, rockets, autonomous cars, etc. A problem is that these critical systems often have limited computing resources. Often, they use fixed-point arithmetic for its many advantages (speed, compatibility with small-memory devices). In this article, a new technique is introduced to tune the formats (precisions) of already-trained neural networks using fixed-point arithmetic, which can be implemented using integer operations only. The new optimized neural network computes the output with fixed-point numbers without degrading the accuracy beyond a threshold fixed by the user. A fixed-point code is synthesized for the new optimized neural network, ensuring that the threshold is respected for any input vector belonging to the range [xmin, xmax] determined during the analysis. From a technical point of view, we perform a preliminary analysis of the floating-point neural network to determine the worst cases, then generate a system of linear constraints among integer variables that can be solved by linear programming. The solution of this system gives the new fixed-point format of each neuron. The experimental results show the efficiency of our method, which ensures that the new fixed-point neural network behaves like the initial floating-point neural network.
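The integer-only arithmetic behind such fixed-point networks can be sketched with a few helpers. This is a generic illustration of Q-format arithmetic, not the paper's synthesized code; the function names and the choice of 8 fractional bits are ours:

```python
def to_fixed(x, frac_bits):
    """Quantize a float to a signed fixed-point integer with
    `frac_bits` fractional bits (round to nearest)."""
    return round(x * (1 << frac_bits))

def from_fixed(q, frac_bits):
    """Convert a fixed-point integer back to a float."""
    return q / (1 << frac_bits)

def fixed_mul(a, b, frac_bits):
    """Multiply two fixed-point numbers; the raw product carries
    2*frac_bits fractional bits, so shift back down."""
    return (a * b) >> frac_bits

def fixed_dot(ws, xs, frac_bits):
    """Dot product of a neuron's quantized weights with a quantized
    input vector, using integer operations only."""
    acc = 0
    for w, x in zip(ws, xs):
        acc += w * x              # accumulate at 2*frac_bits precision
    return acc >> frac_bits       # rescale once, after the sum
```

Accumulating at double precision and rescaling once at the end, as above, limits the rounding error per neuron, which is what makes the per-neuron format analysis tractable.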
Performance Evaluation of Object Tracking Technique Based on Position Vectors - CSCJournals
In this paper, a novel algorithm for moving-object tracking based on position vectors is proposed. The position vector of an object in the first frame of a video is extracted by selecting a region of interest. Based on the position vector in the first frame, the object's motion is modeled in nine different directions, and nine position vectors are extracted, one per direction. With these position vectors, the next frame is cropped into nine blocks. We perform block matching of the first frame against the nine blocks of the next frame in a simple feature space given by the discrete wavelet transform and the dual-tree complex wavelet transform. The matched block is considered the tracked object, and its position vector serves as the reference location for the next frame. We describe the algorithm and its performance evaluation in detail and run simulation experiments of object tracking with different feature vectors, which verify the efficiency of the tracking algorithm.
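The matching step can be sketched as follows. For simplicity this compares raw pixel blocks under the sum of squared differences, whereas the paper matches in a DWT / dual-tree complex wavelet feature space; the function name is ours:

```python
import numpy as np

def best_matching_block(template, candidates):
    """Return the index of the candidate block closest to the
    template under the sum of squared differences (SSD)."""
    ssd = [np.sum((template.astype(float) - c.astype(float)) ** 2)
           for c in candidates]
    return int(np.argmin(ssd))
```

In the tracking loop, `candidates` would be the nine directional crops of the next frame, and the winner's position vector becomes the reference for the following frame.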
IMAGE ENCRYPTION BASED ON DIFFUSION AND MULTIPLE CHAOTIC MAPS - IJNSA Journal
In today's world, security is a prime concern, and encryption is one of the best ways to ensure it. Many image encryption schemes have been proposed, each with its own strengths and weaknesses. This paper presents a new algorithm for image encryption/decryption and provides a secure image encryption technique using multiple chaotic circular mappings. First, a pair of subkeys is generated using chaotic logistic maps. Second, the image is encrypted using a logistic-map subkey, and its transformation leads to the diffusion process. Third, subkeys are generated by four different chaotic maps. Based on the initial conditions, each map may produce various random numbers from various orbits of the maps.
Among those random numbers, a particular number from a particular orbit is selected as a key for the encryption algorithm. Based on the key, a binary sequence is generated to control the encryption algorithm. The 2-D input image is transformed into a 1-D array using two different scanning patterns (raster and zigzag) and then divided into various sub-blocks. Then position permutation and value permutation are applied to each binary matrix based on multiple chaotic maps. Finally, the receiver uses the same subkeys to decrypt the encrypted images. The salient features of the proposed image encryption method are losslessness, good peak signal-to-noise ratio (PSNR), symmetric-key encryption, low cross-correlation, a very large number of secret keys, and key-dependent pixel-value replacement.
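The logistic-map key generation and the symmetric (XOR-based) value substitution can be sketched as follows. This shows only one map and one diffusion step, not the paper's full multi-map scheme; the parameter choices are ours:

```python
def logistic_keystream(x0, r, n, burn_in=100):
    """Generate n pseudo-random bytes from the logistic map
    x_{k+1} = r * x_k * (1 - x_k); r near 4 gives chaotic orbits.
    x0 and r together act as the secret key."""
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = r * x * (1 - x)
    out = []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)
    return out

def xor_encrypt(data, key):
    """XOR each byte with the keystream; applying it twice with the
    same key decrypts, which is why the scheme is symmetric."""
    return bytes(b ^ k for b, k in zip(data, key))
```

Because the orbit depends sensitively on `x0` and `r`, tiny key changes produce a completely different keystream, which is the source of the scheme's large key space.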
Phoenix: A Weight-based Network Coordinate System Using Matrix Factorization - yeung2000
Yang Chen, Xiao Wang, Cong Shi, Eng Keong Lua, Xiaoming Fu, Beixing Deng, Xing Li. Phoenix: A Weight-based Network Coordinate System Using Matrix Factorization. IEEE Transactions on Network and Service Management, 2011, 8(4):334-347.
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of engineering and technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of engineering and technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of engineering and technology.
Deep learning lecture - part 1 (basics, CNN) - SungminYou
This presentation is a lecture based on the Deep Learning book (Goodfellow, Ian, Yoshua Bengio, and Aaron Courville. MIT Press, 2016). It covers the basics of deep learning and the theory behind convolutional neural networks.
Shot Boundary Detection in Video Sequences Using Motion Activities - CSCJournals
Video segmentation is fundamental to a number of applications related to video retrieval and analysis. To realize content-based video retrieval, the video information should be organized to elaborate the structure of the video, and segmenting the video into shots is an important first step. This paper presents a new method of shot-boundary detection based on motion activities in a video sequence. The proposed algorithm is tested on various video types, and the experimental results show that it is effective and reliably detects shot boundaries.
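A minimal shot-boundary detector can be sketched with a frame-differencing measure. This is a simple stand-in for the paper's motion-activity measure; the function name and threshold are ours:

```python
import numpy as np

def shot_boundaries(frames, threshold=30.0):
    """Flag a cut between consecutive frames when the mean absolute
    pixel difference exceeds `threshold`. Returns indices of the
    frames that start a new shot."""
    cuts = []
    for i in range(1, len(frames)):
        diff = np.abs(frames[i].astype(float)
                      - frames[i - 1].astype(float)).mean()
        if diff > threshold:
            cuts.append(i)
    return cuts
```

A motion-activity measure would replace the raw pixel difference with a statistic of estimated motion vectors, making the detector less sensitive to lighting changes.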
Neural Networks: Self-Organizing Map, by Engr. Edgar Carrillo II - Edgar Carrillo
This presentation covers neural networks and self-organizing maps. In it, Engr. Edgar Caburatan Carrillo II also discusses their applications.
Offline Character Recognition Using Monte Carlo Method and Neural Network - ijaia
Human-machine interfaces are constantly improving because of the increasing development of computer tools. Handwritten character recognition has various significant applications, such as form scanning, verification, validation, and cheque reading. Because of the importance of these applications, intensive research in the field of off-line handwritten character recognition is ongoing. The challenge in recognizing handwriting lies in human nature: every writer has a unique style in terms of font, contours, etc. This paper presents a novel approach to identifying offline characters, which we call the character-divider approach, applied after the pre-processing stage. We also devise an innovative approach for feature extraction known as the vector contour, and we discuss the pros and cons, including the limitations, of our approach.
A Novel Background Subtraction Algorithm for Dynamic Texture Scenes - IJMER
International Journal of Modern Engineering Research (IJMER) is a peer-reviewed online journal. It serves as an international archival forum of scholarly research related to engineering and science education.
Comparison Between Levenberg-Marquardt And Scaled Conjugate Gradient Training... - CSCJournals
The Internet paved the way for information sharing all over the world decades ago, and its popularity for data distribution has spread like wildfire ever since. Data in the form of images, sounds, animations and videos is gaining users' preference over plain text across the globe. Despite unprecedented progress in data storage, computing speed and data-transmission speed, the demand for data and its size (due to increases in both quality and quantity) continues to overpower the supply of resources. One contributing factor is how uncompressed data is compressed before being sent across the network. This paper compares the two most widely used training algorithms for multilayer perceptron (MLP) image compression: the Levenberg-Marquardt algorithm and the Scaled Conjugate Gradient algorithm. We test the performance of the two training algorithms by compressing the standard test image (Lena or Lenna) in terms of accuracy and speed. Based on our results, we conclude that both algorithms are comparable in speed and accuracy. However, the Levenberg-Marquardt algorithm shows slightly better accuracy (as measured by average training accuracy and mean squared error), whereas the Scaled Conjugate Gradient algorithm fares better in speed (as measured by the average training iteration) on a simple MLP structure (2 hidden layers).
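The idea of MLP-based image compression - training a narrow-bottleneck network to reconstruct image blocks - can be sketched as follows. This toy version is a linear autoencoder trained with plain gradient descent, merely a stand-in for the Levenberg-Marquardt and Scaled Conjugate Gradient training the paper compares; all names and hyperparameters are ours:

```python
import numpy as np

def train_mlp_compressor(blocks, hidden=4, lr=0.1, epochs=200, seed=0):
    """Tiny linear autoencoder: blocks (n, d) of flattened image
    patches scaled to [0, 1] are encoded into `hidden` < d values
    (the compressed code) and decoded back. Returns weights and
    the final reconstruction MSE."""
    rng = np.random.default_rng(seed)
    n, d = blocks.shape
    W1 = rng.normal(0, 0.1, (d, hidden))   # encoder
    W2 = rng.normal(0, 0.1, (hidden, d))   # decoder
    for _ in range(epochs):
        code = blocks @ W1                 # compress
        recon = code @ W2                  # reconstruct
        err = recon - blocks
        gW2 = code.T @ err / n             # gradients of 0.5*MSE
        gW1 = blocks.T @ (err @ W2.T) / n
        W1 -= lr * gW1
        W2 -= lr * gW2
    mse = ((blocks @ W1 @ W2 - blocks) ** 2).mean()
    return W1, W2, mse
```

Second-order methods such as Levenberg-Marquardt replace the plain gradient step with a damped Gauss-Newton step, which is what buys the accuracy difference the paper measures.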
Encryption is used to securely transmit data in open networks. Each type of data has its own features. With the rapid growth of the internet, the security of digital images has become more and more important, so different techniques should be used to protect confidential image data from unauthorized access. In this paper, an encryption technique based on elliptic curves for securing images transmitted over public channels is proposed. The encryption and decryption processes are described in detail with an example, and a comparative study of the proposed scheme and an existing scheme is made. Our proposed algorithm aims at better encryption of all types of images, even ones with uniform backgrounds, and makes the image encryption scheme more secure. The output encrypted images show that the proposed method is robust.
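The group arithmetic underlying any elliptic-curve scheme can be sketched as follows; this shows generic point addition and scalar multiplication over a small prime field, not the paper's specific image-encryption construction:

```python
def ec_add(P, Q, a, p):
    """Add two points on the curve y^2 = x^3 + a*x + b over F_p
    (affine coordinates; None is the point at infinity O)."""
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                   # P + (-P) = O
    if P == Q:                                        # tangent slope
        s = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:                                             # chord slope
        s = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (s * s - x1 - x2) % p
    return (x3, (s * (x1 - x3) - y1) % p)

def ec_mul(k, P, a, p):
    """Scalar multiplication k*P by double-and-add."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P, a, p)
        P = ec_add(P, P, a, p)
        k >>= 1
    return R
```

Pixel values or blocks are typically mapped to curve points (or masked by a shared point derived via `ec_mul`), so the hardness of recovering k from k*P is what protects the image.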
INTRA BLOCK AND INTER BLOCK NEIGHBORING JOINT DENSITY BASED APPROACH FOR JPEG... - ijsc
Steganalysis is the method used to detect the presence of a hidden message in a cover medium. A novel approach to the steganalysis of JPEG images is proposed, based on feature mining in the discrete cosine transform (DCT) domain combined with machine learning. The neighboring joint densities on both intra-block and inter-block pairs are extracted from the DCT coefficient array. After the feature space has been constructed, a support vector machine (SVM) binary classifier is used for training and classification. The performance of the proposed method is analyzed on different steganographic systems, including F5, pixel value differencing, model-based steganography with and without deblocking, JPHS, and Steghide. The classification accuracy is checked for each feature individually and for the combined features, and we conclude which provides better classification.
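The intra-block feature can be sketched as a joint histogram of horizontally adjacent DCT coefficients. This is a minimal illustration of the idea (clipping range, pair direction and the function name are our choices, not necessarily the paper's exact definition):

```python
import numpy as np

def neighboring_joint_density(coeffs, t=2):
    """Normalized joint histogram of horizontally adjacent integer
    DCT coefficients, with values clipped to [-t, t]. Embedding a
    message perturbs this density, which is the steganalysis signal."""
    c = np.clip(coeffs, -t, t)
    pairs = np.stack([c[:, :-1].ravel(), c[:, 1:].ravel()], axis=1)
    hist = np.zeros((2 * t + 1, 2 * t + 1))
    for a, b in pairs:
        hist[a + t, b + t] += 1
    return hist / hist.sum()
```

The flattened histogram becomes one feature vector per image; the inter-block variant pairs coefficients in the same position of adjacent 8x8 blocks instead.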
International Journal of Engineering Research and Applications (IJERA) is an open-access, online, peer-reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nanotechnology & Science, Power Electronics, Electronics & Communication Engineering, Computational Mathematics, Image Processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low-Power VLSI Design, etc.
Keyframe Selection of Frame Similarity to Generate Scene Segmentation Based o... - IJECEIAES
Video segmentation is performed by grouping similar frames according to a threshold. Two-frame similarity calculations can be based on several classes of frame operations: point operations, spatial operations, geometric operations and arithmetic operations. In this research, similarity calculations are applied using three point operations: frame difference, gamma correction and peak signal-to-noise ratio (PSNR). The three point operations are performed according to the intensity and pixel values of the frames. Frame difference operates at the pixel-value level, gamma correction analyzes pixel values and lighting values, and PSNR measures the difference (noise) between the original frame and the next frame. The smaller the difference between two frames, the more similar they are; the higher the gamma-correction factor shared by two frames, the more similar their correction effect; and the greater the PSNR value, the more similar the two compared frames. The combination of the three point-operation methods can determine several similar frames to be incorporated into the same segment.
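The PSNR similarity measure used above follows directly from the mean squared error between two frames; a minimal sketch:

```python
import numpy as np

def psnr(frame_a, frame_b, peak=255.0):
    """Peak signal-to-noise ratio in dB between two frames:
    10 * log10(peak^2 / MSE). Higher PSNR means more similar frames."""
    mse = np.mean((frame_a.astype(float) - frame_b.astype(float)) ** 2)
    if mse == 0:
        return float('inf')       # identical frames
    return 10 * np.log10(peak ** 2 / mse)
```

A pair of consecutive frames whose PSNR exceeds the chosen threshold would be grouped into the same segment.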
Similar to Semantic Video Segmentation with Using Ensemble of Particular Classifiers and a Deep Neural Network for Systems of Detecting Abnormal Situations
The study evaluates three background subtraction techniques, ranging from very basic algorithms to state-of-the-art published techniques, categorized by speed, memory requirements and accuracy. Such a review can effectively guide a designer to select the most suitable method for a given application in a principled way. The algorithms in the study span varying levels of accuracy and computational complexity. A few of them can also deal with real-time challenges such as rain, snow, hail, swaying branches, overlapping objects, varying light intensity, and slow-moving objects.
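The most basic end of the spectrum evaluated above is a running-average background model, which can be sketched as follows (function name, learning rate and threshold are our illustrative choices):

```python
import numpy as np

def subtract_background(frames, alpha=0.05, threshold=25):
    """Running-average background model: each pixel's background is
    an exponential moving average of past frames; pixels deviating
    by more than `threshold` are flagged as foreground. Returns one
    boolean mask per frame after the first."""
    bg = frames[0].astype(float)
    masks = []
    for f in frames[1:]:
        f = f.astype(float)
        masks.append(np.abs(f - bg) > threshold)
        bg = (1 - alpha) * bg + alpha * f   # slowly adapt to changes
    return masks
```

The `alpha` learning rate trades adaptation speed against ghosting: larger values absorb slow lighting changes but also absorb slow-moving objects into the background, which is exactly the failure mode the more advanced techniques address.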
Enhancement of genetic image watermarking robust against cropping attackijfcstjournal
An enhancement of an image watermarking algorithm, made robust against a particular attack by using a genetic algorithm, is presented here. There is a trade-off between imperceptibility and robustness in image watermarking. To preserve both of these characteristics at reasonable levels in digital image watermarking, a genetic algorithm is used. Several factors are introduced to provide robustness of the image watermarking against cropping attacks: the Centre of Interest Proximity Factor (CIPF), the Complexity Factor (CF) and the Priority Coefficient (PC).
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
Machine learning based augmented reality for improved learning application th...IJECEIAES
Detection of objects and their locations in an image is an important element of current research in computer vision. In May 2020, Meta released its state-of-the-art object-detection model based on a transformer architecture, called detection transformer (DETR). There are several object-detection models, such as region-based convolutional neural networks (R-CNN), you only look once (YOLO) and single shot detectors (SSD), but none had used a transformer to accomplish this task. The models mentioned earlier use all sorts of hyperparameters and layers, whereas using a transformer makes the architecture simple and easy to implement. In this paper, we determine the name of a chemical experiment in two steps: first, by building a DETR model trained on a customized dataset, and then by integrating it into an augmented reality mobile application. By detecting the objects used during the realization of an experiment, we can predict the name of the experiment using a multi-class classification approach. The combination of various computer vision techniques with augmented reality is indeed promising and offers a better user experience.
Background Estimation Using Principal Component Analysis Based on Limited Mem...IJECEIAES
Given a video of M frames of size h × w, the background components are the matrix elements that remain relatively constant over the M frames. In the principal component analysis (PCA) method these elements are referred to as "principal components". In video processing, background subtraction means the removal of the background component from the video, and the PCA method is used to obtain that component. The method transforms the 3-dimensional video (h × w × M) into a 2-dimensional one (N × M), where N = h × w is the length of each vectorized frame. The principal components are the dominant eigenvectors, which form the basis of an eigenspace. Limited-memory block Krylov subspace optimization is then proposed to improve the computational performance. The background estimate is obtained as the projection of each input image (the first frame of each image sequence) onto the space spanned by the principal components. The procedure was run on a standard dataset, the SBI (Scene Background Initialization) dataset, consisting of 8 videos with resolutions in the range [146 × 150, 352 × 240] and frame counts in [258, 500]. Performance is reported with 8 metrics, in particular (averaged over the 8 videos): percentage of error pixels (0.24%), percentage of clustered error pixels (0.21%), multiscale structural similarity index (0.88 out of a maximum of 1), and running time (61.68 seconds).
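The reshape-and-project pipeline described above can be sketched with a plain SVD; this is only an illustration of the PCA projection step, and stands in for the paper's limited-memory block Krylov optimization, which computes the same dominant eigenvectors more efficiently:

```python
import numpy as np

def pca_background(frames, k=1):
    """Estimate a video's background as a projection onto the top-k
    principal components.

    frames: array of shape (M, h, w). Each frame is vectorized to
    length N = h * w, giving the N x M data matrix described above.
    """
    M, h, w = frames.shape
    D = frames.reshape(M, h * w).T             # N x M data matrix
    mean = D.mean(axis=1, keepdims=True)
    U, _, _ = np.linalg.svd(D - mean, full_matrices=False)
    Uk = U[:, :k]                              # dominant eigenvectors
    first = D[:, :1] - mean                    # first frame, centred
    background = Uk @ (Uk.T @ first) + mean    # projection onto eigenspace
    return background.reshape(h, w)

# A perfectly static video should be recovered exactly.
static = np.full((6, 4, 5), 7.0)
bg = pca_background(static)
```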
An optimized discrete wavelet transform compression technique for image trans...IJECEIAES
Transferring images in a wireless multimedia sensor network (WMSN) is developing rapidly in both research and fields of application. Nevertheless, this area of research faces many problems, such as the low quality of received images after decompression, the limited number of reconstructed images at the base station, and the high energy consumption of the compression and decompression process. In order to address these problems, we propose a compression method based on the classic discrete wavelet transform (DWT). Our method applies the wavelet compression technique multiple times to the same image. As a result, we found that the number of received images is higher than with the classic DWT. In addition, the quality of the received images is much higher compared to the standard DWT. Finally, energy consumption is lower when we use our technique. Therefore, our proposed compression technique is better adapted to the WMSN environment.
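The idea of applying wavelet compression repeatedly can be illustrated with the simplest wavelet, the Haar transform: each pass keeps only the low-frequency (approximation) subband, quartering the data to transmit. This is a hedged sketch of the general principle, not the paper's actual codec:

```python
import numpy as np

def haar_ll(img):
    """One Haar analysis level: keep only the LL (approximation) subband,
    i.e. the average of each 2x2 block. The image must have even sides."""
    return (img[0::2, 0::2] + img[0::2, 1::2] +
            img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

def compress(img, levels):
    """Apply the wavelet approximation repeatedly, one level per pass."""
    for _ in range(levels):
        img = haar_ll(img)
    return img

def reconstruct(ll, levels):
    """Crude synthesis: replicate each coefficient back to a 2x2 block."""
    for _ in range(levels):
        ll = np.kron(ll, np.ones((2, 2)))
    return ll

img = np.full((8, 8), 3.0)     # a flat test image
ll = compress(img, 2)          # 2 levels -> 2x2 coefficients, 16x less data
out = reconstruct(ll, 2)
```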
SLIC Superpixel Based Self Organizing Maps Algorithm for Segmentation of Micr...IJAAS Team
Microarray technology allows the simultaneous monitoring of thousands of genes in parallel. Based on these measurements, microarray technology has proven powerful in gene expression profiling for discovering new types of diseases and for predicting the type of a disease. Gridding, intensity extraction, enhancement and segmentation are important steps in microarray image analysis. This paper presents a simple linear iterative clustering (SLIC) based self-organizing maps (SOM) algorithm for segmentation of microarray images. Clusters of pixels that share similar features are called superpixels; they can be used as mid-level units to decrease the computational cost in many vision applications. The proposed algorithm uses superpixels as clustering objects instead of pixels. Qualitative and quantitative analysis shows that the proposed method produces better segmentation quality than k-means, fuzzy c-means and self-organizing maps clustering methods.
Complex Background Subtraction Using Kalman FilterIJERA Editor
For background subtraction from a dynamic background, this system extracts, at each location of the scene, a sequence of regular video bricks, i.e., video volumes spanning both the spatial and temporal domains. Background modeling is thus posed as pursuing subspaces within the video bricks while adapting to scene variations. For each sequence of video bricks, the system pursues the subspace by employing an autoregressive moving average model that jointly characterizes the appearance consistency and temporal coherence of the observations. During online processing, it uses a Kalman filter tracking algorithm for background/foreground classification and incrementally updates the subspaces to cope with disturbances from foreground objects and scene changes.
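A per-pixel sketch shows how a Kalman filter can maintain a background estimate and flag foreground; this simplified scalar filter is a stand-in for the paper's subspace model, and the noise values and threshold are illustrative assumptions:

```python
class PixelKalman:
    """Scalar Kalman filter tracking one pixel's background intensity."""

    def __init__(self, x0=0.0, p0=1.0, q=0.01, r=1.0):
        self.x, self.p = x0, p0   # state estimate and its variance
        self.q, self.r = q, r     # process and measurement noise variances

    def update(self, z):
        self.p += self.q                    # predict: uncertainty grows
        k = self.p / (self.p + self.r)      # Kalman gain
        self.x += k * (z - self.x)          # correct towards measurement
        self.p *= (1.0 - k)
        return self.x

    def is_foreground(self, z, thresh=20.0):
        # A large innovation means the pixel departs from the background.
        return abs(z - self.x) > thresh

kf = PixelKalman()
for _ in range(50):
    kf.update(10.0)   # a static background pixel; estimate converges to 10
```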
U-Net is a convolutional neural network (CNN) architecture designed for semantic segmentation tasks, especially in the field of medical image analysis. It was introduced by Olaf Ronneberger, Philipp Fischer, and Thomas Brox in 2015. The name "U-Net" comes from its U-shaped architecture.
Key features of the U-Net architecture:
U-Shaped Design: U-Net consists of a contracting path (downsampling) and an expansive path (upsampling). The architecture resembles the letter "U" when visualized.
Contracting Path (Encoder):
The contracting path involves a series of convolutional and pooling layers.
Each convolutional layer is followed by a rectified linear unit (ReLU) activation function and possibly other normalization or activation functions.
Pooling layers (usually max pooling) reduce spatial dimensions, capturing high-level features.
Expansive Path (Decoder):
The expansive path involves a series of upsampling and convolutional layers.
Upsampling is achieved using transposed convolution (also known as deconvolution or convolutional transpose).
Skip connections are established between corresponding layers in the contracting and expansive paths. These connections help retain fine-grained spatial information during the upsampling process.
Skip Connections:
Skip connections concatenate feature maps from the contracting path to the corresponding layers in the expansive path.
These connections facilitate the fusion of low-level and high-level features, aiding in precise localization.
Final Layer:
The final layer typically uses a convolutional layer with a softmax activation function for multi-class segmentation tasks, providing probability scores for each class.
U-Net's architecture and skip connections help address the challenge of segmenting objects with varying sizes and shapes, which is often encountered in medical image analysis. Its success in this domain has led to its application in other areas of computer vision as well.
The U-Net architecture has also been extended and modified in various ways, leading to improvements like the U-Net++ architecture and variations with attention mechanisms, which further enhance the segmentation performance.
U-Net's intuitive design and effectiveness in semantic segmentation tasks have made it a cornerstone in the field of medical image analysis and an influential architecture for researchers working on segmentation challenges.
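The encoder/decoder symmetry and the shape constraint imposed by skip connections can be traced without any deep-learning framework. The sketch below assumes the standard configuration described above: padded 3x3 convolutions (size preserved), 2x2 max pooling (size halved), and transposed convolutions (size doubled):

```python
def unet_spatial_trace(size, depth=4):
    """Trace the spatial size of feature maps through a U-Net.

    Padded convolutions keep the size, each 2x2 max pool halves it, and
    each transposed convolution doubles it, so every decoder level lines
    up with the encoder map its skip connection concatenates.
    """
    skips = []
    s = size
    for _ in range(depth):          # contracting path (encoder)
        skips.append(s)             # feature map saved for the skip connection
        s //= 2                     # 2x2 max pooling
    bottleneck = s
    for skip in reversed(skips):    # expansive path (decoder)
        s *= 2                      # transposed convolution (upsampling)
        assert s == skip            # concatenation requires matching sizes
    return skips, bottleneck, s

skips, bottleneck, out = unet_spatial_trace(256)
```

Running the trace on a 256-pixel input shows why the architecture draws as a "U": sizes shrink 256 → 128 → 64 → 32 → 16 and then expand back to 256, with each decoder level fused to its same-sized encoder counterpart.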
Double layer security using visual cryptography and transform based steganogr...eSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
A Novel Approach for Moving Object Detection from Dynamic BackgroundIJERA Editor
In computer vision applications, moving object detection is a key technology for intelligent video monitoring systems. The performance of an automated visual surveillance system depends considerably on its ability to detect moving objects in a dynamic environment. Subsequent actions, such as tracking, motion analysis or object identification, require an accurate extraction of the foreground objects, making moving object detection a crucial part of the system. The aim of this paper is to detect real moving objects against non-stationary background regions (such as the branches and leaves of a tree, or a flag waving in the wind), limiting false negatives (object pixels that are not detected) as much as possible. In addition, it is assumed that the models of the target objects and their motion are unknown, so as to achieve maximum application independence (i.e., the algorithm works without prior training).
A NOVEL IMAGE STEGANOGRAPHY APPROACH USING MULTI-LAYERS DCT FEATURES BASED ON...ijma
Steganography is the science of hiding data in a cover image without perceptibly altering the cover image. Recent steganography research focuses on hiding large amounts of information within image and/or audio files. This paper proposes a novel approach for hiding the data of a secret image using Discrete Cosine Transform (DCT) features and a linear Support Vector Machine (SVM) classifier. The DCT features are used to decrease the image's redundant information. Moreover, the DCT is used to embed the secret message into the least significant bits of the RGB channels. Each bit in the cover image is changed only to an extent that is not visible to the human eye. The SVM is used as a classifier to speed up the hiding process via the DCT features. The proposed method is implemented, and the results show significant improvements. In addition, performance analysis is reported in terms of MSE, PSNR, NC, processing time, capacity, and robustness.
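The least-significant-bit substitution step mentioned in the abstract can be sketched in isolation, without the DCT/SVM machinery; the pixel list encoding here is an illustrative assumption:

```python
def embed_lsb(pixels, bits):
    """Hide one message bit in the least significant bit of each pixel."""
    stego = list(pixels)
    for i, b in enumerate(bits):
        stego[i] = (stego[i] & ~1) | b   # clear the LSB, then set it to b
    return stego

def extract_lsb(pixels, n_bits):
    """Read the hidden bits back out of the first n_bits pixels."""
    return [p & 1 for p in pixels[:n_bits]]

cover = [120, 121, 200, 33, 64, 255]
message = [1, 0, 1, 1]
stego = embed_lsb(cover, message)
recovered = extract_lsb(stego, len(message))
```

Because each pixel changes by at most 1 intensity level, the modification stays below the threshold of human perception, which is the property the abstract relies on.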
Similar to Semantic Video Segmentation with Using Ensemble of Particular Classifiers and a Deep Neural Network for Systems of Detecting Abnormal Situations (20)
Securing Cloud Computing Through IT GovernanceITIIIndustries
Lack of alignment between information technology (IT) and the business is a problem facing many organizations. Most organizations, today, fundamentally depend on IT. When IT and the business are aligned in an organization, IT delivers what the business needs and the business is able to deliver what the market needs. IT has become a strategic function for most organizations, and it is imperative that IT and business are aligned. IT governance is one of the most powerful ways to achieve IT to business alignment. Furthermore, as the use of cloud computing for delivering IT functions becomes pervasive, organizations using cloud computing must effectively apply IT governance to it. While cloud computing presents tremendous
opportunities, it comes with risks as well. Information security
is one of the top risks in cloud computing. Thus, IT governance must be applied to cloud computing information security to help manage the risks associated with cloud computing information security. This study advances knowledge by extending IT governance to cloud computing and information security governance.
Information Technology in Industry(ITII) - November Issue 2018ITIIIndustries
IT Industry publishes original research articles, review articles, and extended versions of conference papers. Articles resulting from research of both theoretical and/or practical natures performed by academics and/or industry practitioners are welcome. IT in Industry aims to become a leading IT journal with a high impact factor.
Design of an IT Capstone Subject - Cloud RoboticsITIIIndustries
This paper describes the curriculum of the three year IT undergraduate program at La Trobe University, and the faculty requirements in designing a capstone subject, followed by the ACM’s recommended IT curriculum covering the five pillars of the IT discipline. Cloud robotics, a broad multidisciplinary research area, requiring expertise in all five pillars with mechatronics, is an ideal candidate to offer capstone experiences to IT students. Therefore, in this paper, we propose a long term
master project in developing a cloud robotics testbed, with many capstone sub-projects spanning across the five IT pillars, to meet the objectives of capstone experience. This paper also describes the design and implementation of the testbed, and proposes potential capstone projects for students with different interests.
Dimensionality Reduction and Feature Selection Methods for Script Identificat...ITIIIndustries
The goal of this research is to explore the effects of dimensionality reduction and feature selection on the problem of script identification from images of printed documents. The k-adjacent segment feature is well suited to this task due to its ability to capture visual patterns. We used principal component analysis to reduce our feature matrix to a more manageable size that can be trained easily, and experimented by including varying combinations of dimensions of the super feature set. A modular approach in a neural network was used to classify 7 languages: Arabic, Chinese, English, Japanese, Tamil, Thai and Korean.
Image Matting via LLE/iLLE Manifold LearningITIIIndustries
Accurately extracting foreground objects, i.e., isolating the foreground in images and video, is called image matting and has wide applications in digital photography. The problem is severely ill-posed in the sense that, at each pixel, one must estimate the foreground and background colours and the so-called alpha value from pixel information alone. The most recent work in natural image matting relies on local smoothness assumptions about foreground and background colours, on which a cost function is established. In this paper, we propose an extension to the class of affinity-based matting techniques by incorporating local manifold structural information to produce a smoother matte, based on the so-called improved Locally Linear Embedding. We illustrate the new algorithm on standard benchmark images, obtaining very comparable results.
Annotating Retina Fundus Images for Teaching and Learning Diabetic Retinopath...ITIIIndustries
With the improvement of the IT industry, more and more computer software applications are being introduced into teaching and learning. In this paper, we discuss the development process of such software. Diabetic retinopathy is a common complication for diabetic patients and may cause sight loss if not treated early. There are several stages of this disease, and fundus imagery is required to identify the stage and severity. Due to the lack of a proper dataset of fundus images with proper annotation, it is very difficult to perform research on this topic. Moreover, medical students often face difficulty identifying the disease later in their practice, as they may not have seen samples of all of the stages of diabetic retinopathy. To mitigate the problem, we have collected fundus images from different geographic areas of Bangladesh and designed annotation software to store information about the patient, the infection level and its locations in the images. Since it is sometimes difficult to select all appropriate pixels of an infected region, we have introduced a k-nearest-neighbor (KNN) based technique to accurately select the region of interest (ROI). Once an expert (ophthalmologist) has annotated the images, the software can be used by students for learning.
A Framework for Traffic Planning and Forecasting using Micro-Simulation Calib...ITIIIndustries
This paper presents the application of microsimulation for traffic planning and forecasting, and proposes a new framework to model complex traffic conditions by calibrating and adjusting traffic parameters of a microsimulation model. By using an open source micro-simulator package, TRANSIMS, in this study, animated and numerical results were produced and analysed. The framework of traffic model calibration was evaluated for its usefulness and practicality. Finally, we discuss future applications such as providing end users with real time traffic information through Intelligent Transport System (ITS) integration.
Investigating Tertiary Students’ Perceptions on Internet SecurityITIIIndustries
Internet security threats have grown from simple viruses to various forms of computer hacking, scams, impersonation, cyber bullying, and spyware. The Internet has great influence on most people, and one can spend endless hours on internet activities. In particular, youth engage in more online activities than any other age group. Excessive internet usage is an emerging threat that negatively impacts these youth, hence it is vital to investigate their online behavior. This work studies tertiary students' risk awareness and provides findings that allow us to understand their knowledge of risks and their behavior in online activities. It reveals several important online issues amongst tertiary students: firstly, a lack of online security awareness; secondly, a lack of awareness and information about the dangers of rootkits, internet cookies and spyware; thirdly, female students are more unflinching than male students when commenting on social networking sites; fourthly, students are cautious only when obvious security warnings are present; and finally, their usage of internet hotspots is common without a full understanding of the associated dangers. These findings enable us to recommend internet security habits and safety practices that students should adopt when engaging in online activities. A holistic approach was considered, aiming to minimize future risks and dangers of students' online activities.
Blind Image Watermarking Based on Chaotic MapsITIIIndustries
Security of a watermark refers to its resistance to unauthorized detecting and decoding, while watermark robustness refers to the watermark’s resistance against common processing. Many watermarking schemes emphasize robustness more than security. However, a robust watermark is not enough to accomplish protection because the range of hostile attacks is not limited to common processing and distortions. In this paper, we give consideration to watermark security. To achieve this, we employ chaotic maps due to their extreme sensitivity to the initial values. If one fails to provide these values, the watermark will be wrongly extracted. While the chaotic maps provide perfect watermarking security, the proposed scheme is also intended to achieve robustness.
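The security property relied on here, the extreme sensitivity of a chaotic map to its initial value, is easy to demonstrate with the logistic map; the specific map and parameter below are illustrative and not necessarily those used in the paper:

```python
def logistic_orbit(x0, r=3.99, n=100):
    """Iterate the logistic map x -> r*x*(1-x) n times from x0."""
    xs = []
    x = x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

# Two keys differing by one part in a billion produce wildly different
# sequences, so a watermark extracted with the wrong key is unreadable.
a = logistic_orbit(0.400000000)
b = logistic_orbit(0.400000001)
max_gap = max(abs(x - y) for x, y in zip(a, b))
```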
Programmatic detection of spatial behaviour in an agent-based modelITIIIndustries
The automated detection of aspects of spatial behaviour in an agent-based model is necessary for model testing and analysis. In this paper we compare four predictors of herding behaviour in a model of a grazing herbivore. We find that (a) the mean number of neighbours, adjusted to account for population variation, and (b) the mean Hamming distance between rows of the two-dimensional environment can be used to detect herding. Visual inspection of the model behaviour revealed that herding occurs when the herbivore mobility reaches a threshold level. Using this threshold we identify limits for these predictors to use in the program code. These results apply only to one set of parameters and one environment size; future research will involve a wider parameter space.
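The second predictor, the mean Hamming distance between rows of the environment, can be sketched on a binary occupancy grid; the grid encoding and example layouts are assumptions for illustration, not the paper's model:

```python
from itertools import combinations

def mean_row_hamming(grid):
    """Mean Hamming distance over all pairs of rows of a 0/1 occupancy grid."""
    total, pairs = 0, 0
    for r1, r2 in combinations(grid, 2):
        total += sum(a != b for a, b in zip(r1, r2))
        pairs += 1
    return total / pairs

# Herding: agents packed into a compact block, so occupied columns repeat
# across adjacent rows and many row pairs look alike.
herded    = [[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
# Dispersed: one agent per row in a different column, so every pair differs.
dispersed = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
```

On these examples the herded grid scores lower than the dispersed one, which is the direction of the signal the paper thresholds.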
A Smart Fuzzing Approach for Integer Overflow DetectionITIIIndustries
Fuzzing is one of the most commonly used methods to detect software vulnerabilities, a major cause of information security incidents. Although it has advantages of simple design and low error report, its efficiency is usually poor. In this paper we present a smart fuzzing approach for integer overflow detection and a tool, SwordFuzzer, which implements this approach. Unlike standard fuzzing techniques, which randomly change parts of the input file with no information about the underlying syntactic structure of the file, SwordFuzzer uses online dynamic taint analysis to identify which bytes in the input file are used in security sensitive operations and then focuses on mutating such bytes. Thus, the generated inputs are more likely to trigger potential vulnerabilities. We evaluated SwordFuzzer with an example program and a number of real-world applications. The experimental results show that SwordFuzzer can accurately locate the key bytes of the input file and dramatically improve the effectiveness of fuzzing in detecting real-world vulnerabilities
The banking experience for many people today is fundamentally an application of technology to be able to carry out their financial tasks. While the need to visit a bank branch remains essential for a number of activities, increasingly the need to support mobile usage is becoming the central focus of many bank strategies. The core banking systems that process financial transactions must remain highly available and able to support large volumes of activity. These systems represent a long term investment for banks and when the need arises to modernize these large systems, the transformation initiative is often very expensive and of high risk. We present in this paper our experiences in bank modernization and transformation, and outline the strategies for rolling out these large programs. As banking institutions embark upon transformation programs to upgrade their banking channels and core banking systems, it is hoped that the insights presented here are useful as a framework to support these initiatives.
Detecting Fraud Using Transaction Frequency DataITIIIndustries
Despite all attempts to prevent fraud, it continues to be a major threat to industry and government. In this paper, we present a fraud detection method which detects irregular frequency of transaction usage in an Enterprise Resource Planning (ERP) system. We discuss the design, development and empirical evaluation of outlier detection and distance measuring techniques to detect frequency-based anomalies within an individual user’s profile, relative to other similar users. Primarily, we propose three automated techniques: a univariate method, called Boxplot which is based on the sample’s median; and two multivariate methods which use Euclidean distance, for detecting transaction frequency anomalies within each transaction profile. The two multivariate approaches detect potentially fraudulent activities by identifying: (1) users where the Euclidean distance between their transaction-type set is above a certain threshold and (2) users/data points that lie far apart from other users/clusters or represent a small cluster size, using k-means clustering. The proposed methodology allows an auditor to investigate the transaction frequency anomalies and adjust the different parameters, such as the outlier threshold and the Euclidean distance threshold values to tune the number of alerts. The novelty of the proposed technique lies in its ability to automatically trigger alerts from transaction profiles, based on transaction usage performed over a period of time. Experiments were conducted using a real dataset obtained from the production client of a large organization using SAP R/3 (presently the most predominant ERP system), to run its business. The results of this empirical research demonstrate the effectiveness of the proposed approach.
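The univariate boxplot rule described above can be sketched with the conventional 1.5×IQR fences; the whisker multiplier and the example frequencies are illustrative assumptions, not figures from the paper:

```python
from statistics import quantiles

def boxplot_outliers(values, whisker=1.5):
    """Flag values outside [Q1 - whisker*IQR, Q3 + whisker*IQR]."""
    q1, _, q3 = quantiles(values, n=4)   # sample quartiles
    iqr = q3 - q1
    lo, hi = q1 - whisker * iqr, q3 + whisker * iqr
    return [v for v in values if v < lo or v > hi]

# One user posting a transaction type 100 times while peers post 1-10 times
# would trigger an alert for the auditor to investigate.
freqs = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 100]
alerts = boxplot_outliers(freqs)
```

Raising or lowering the `whisker` parameter tunes the number of alerts, mirroring the adjustable thresholds the abstract mentions.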
Mapping the Cybernetic Principles of Viable System Model to Enterprise Servic...ITIIIndustries
This paper describes the results of a theoretical mapping of the cybernetic principles of the Viable System Model (VSM) to an Enterprise Service Bus (ESB) model, with the aim to identify the management principles for the integration of services at all levels in the enterprise. This enrichment directly contributes to the viability of service-oriented systems and the justification of Business/IT alignment within enterprise. The model was identified to be suitable for further adaption in the industrial setting planned within Australian governmental departments.
Using Watershed Transform for Vision-based Two-Hand Occlusion in an Interacti...ITIIIndustries
To achieve natural interaction in an augmented reality environment, we have suggested using markerless vision-based two-handed gestures for the interaction, with an outstretched hand and a pointing hand used as the virtual object registration plane and the pointing device respectively. However, two-handed interaction always causes mutual occlusion, which jeopardizes hand gesture recognition. In this paper, we present a solution for two-hand occlusion using the watershed transform. The main idea is to start from a two-hand occlusion image in binary format, then form a grey-scale image based on the distance of each non-object pixel to the nearest object pixel. The watershed algorithm is applied to the negation of the grey-scale image to form watershed lines that separate the two hands. Fingertips are then identified and each hand is recognized based on its number of fingertips: the outstretched hand is assumed to contain 5 fingertips and the pointing hand fewer than 5. An example of applying our result to hand and virtual object interaction is shown at the end of the paper.
Speech Feature Extraction and Data VisualisationITIIIndustries
This paper presents a signal processing approach to analyse and identify accent-discriminative features of four groups of English as a second language (ESL) speakers: Chinese, Indian, Japanese, and Korean. The features used for speech recognition include pitch, stress, formant frequencies, the Mel frequency coefficients, log frequency coefficients, and the intensity and duration of spoken vowels. This paper presents our study using the Matlab Speech Analysis Toolbox, and highlights how data processing can be automated and results visualised. The proposed algorithm achieved an average success rate of 57.3% in identifying vowels spoken by the four non-native English speaker groups.
Bayesian-Network-Based Algorithm Selection with High Level Representation Fee...ITIIIndustries
A real-world intelligent system consists of three basic modules: environment recognition, prediction (or estimation), and behavior planning. To obtain high quality results in these modules, high speed processing and real time adaptability on a case by case basis are required. In the environment recognition module many different algorithms and algorithm networks exist with varying performance. Thus, a mechanism that selects the best possible algorithm is required. To solve this problem we are using an algorithm selection approach to the problem of natural image understanding. This selection mechanism is based on machine learning; a bottom-up algorithm selection from real-world image features and a top-down algorithm selection using information obtained from a high level symbolic world description and algorithm suitability. The algorithm selection method iterates for each input image until the high-level description cannot be improved anymore. In this paper we present a method of iterative composition of the high level description. This step by step approach allows us to select the best result for each region of the image by evaluating all the intermediary representations and finally keep only the best one.
Instance Selection and Optimization of Neural NetworksITIIIndustries
Credit scoring is an important tool in financial institutions, which can be used in credit granting decision. Credit applications are marked by credit scoring models and those with high marks will be treated as “good”, while those with low marks will be regarded as “bad”. As data mining technique develops, automatic credit scoring systems are warmly welcomed for their high efficiency and objective judgments. Many machine learning algorithms have been applied in training credit scoring models, and ANN is one of them with good performance. This paper presents a higher accuracy credit scoring model based on MLP neural networks trained with back propagation algorithm. Our work focuses on enhancing credit scoring models in three aspects: optimize data distribution in datasets using a new method called Average Random Choosing; compare effects of training-validation-test instances numbers; and find the most suitable number of hidden units. Another contribution of this paper is summarizing the tendency of scoring accuracy of models when the number of hidden units increases. The experiment results show that our methods can achieve high credit scoring accuracy with imbalanced datasets. Thus, credit granting decision can be made by data mining methods using MLP neural networks.
Signature Forgery and the Forger – An Assessment of Influence on Handwritten ... - ITIIIndustries
Signatures are widely used as a form of personal authentication. Despite their ubiquity, individual signatures are relatively easy to forge, especially when only the static ‘pictorial’ outcome of the signature is considered at verification time. In this study, we explore opinions on signature usage for verification purposes, and how individuals rate a particular third-party signature in terms of ease of forgeability and their own ability to forge it. We examine responses with respect to an individual’s experience of the forgeability and complexity of their own signature. Our study shows that past experience does not generally affect perceived signature complexity, nor an individual’s perceived ability to forge a signature themselves. In assessing forgeability, most subjects cite overall signature complexity and distinguishing features in reaching their decision. Furthermore, our research indicates that individuals typically vary their signature according to the scenario but generally put little effort into producing it.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... - BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
GraphRAG is All You Need? LLM & Knowledge Graph - Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
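The core idea behind graph-augmented retrieval can be sketched in a few lines. This is only a toy illustration of the shape of the technique: look up entities mentioned in a query and hand their one-hop facts to the prompt. Real GraphRAG pipelines (and FalkorDB-backed systems) work over full knowledge graphs with LLM-extracted entities and community summaries.

```python
# Tiny knowledge graph: subject -> list of (predicate, object) facts.
KG = {
    "FalkorDB": [("is_a", "graph database"), ("used_for", "GraphRAG")],
    "GraphRAG": [("combines", "knowledge graphs"), ("combines", "LLMs")],
}

def graph_context(query):
    """Collect one-hop facts for every KG entity mentioned in the query,
    to be prepended to an LLM prompt as grounding context."""
    facts = []
    for entity, edges in KG.items():
        if entity.lower() in query.lower():
            facts += [f"{entity} {pred} {obj}" for pred, obj in edges]
    return facts

prompt_context = graph_context("What is GraphRAG and how does FalkorDB fit in?")
```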
JMeter webinar - integration with InfluxDB and Grafana - RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring of JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
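Under the hood, the JMeter-to-InfluxDB integration boils down to writing sample metrics in InfluxDB's line protocol, which Grafana then queries. The sketch below shows the protocol's shape; the tag and field names are illustrative, not the exact schema JMeter's backend listener emits.

```python
# Build an InfluxDB line-protocol record: measurement, sorted tag set,
# field set, and a nanosecond timestamp, as a backend listener or custom
# exporter would write for each JMeter sample.

def to_line_protocol(measurement, tags, fields, ts_ns):
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

# Hypothetical sample: a "login" transaction that took 231 ms.
line = to_line_protocol(
    "jmeter",
    {"transaction": "login", "status": "ok"},
    {"responseTime": 231, "count": 1},
    1717000000000000000,
)
```

A Grafana dashboard then plots these series, e.g. response time grouped by the `transaction` tag.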
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... - Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... - DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses.
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
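To give a feel for the kind of computation a power-flow tool performs, here is a hand-rolled DC power-flow sketch on a three-bus network. This is a toy, not PowSyBl: pypowsybl provides full AC and DC solvers on real network models, while this only shows the underlying idea (solve B'θ = P for bus angles, then derive line flows).

```python
def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting (toy linear solver)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def dc_power_flow(lines, injections, slack=0):
    """lines: (bus_i, bus_j, reactance); injections: p.u. power per bus."""
    n = len(injections)
    pos = {u: a for a, u in enumerate(b for b in range(n) if b != slack)}
    B = [[0.0] * len(pos) for _ in pos]  # reduced susceptance matrix
    for i, j, x in lines:
        b = 1.0 / x
        if i in pos: B[pos[i]][pos[i]] += b
        if j in pos: B[pos[j]][pos[j]] += b
        if i in pos and j in pos:
            B[pos[i]][pos[j]] -= b
            B[pos[j]][pos[i]] -= b
    theta_red = solve(B, [injections[u] for u in pos])
    theta = [0.0] * n  # slack bus angle fixed at 0
    for u, a in pos.items():
        theta[u] = theta_red[a]
    flows = {(i, j): (theta[i] - theta[j]) / x for i, j, x in lines}
    return theta, flows

# 3-bus example: bus 0 is the slack, bus 1 injects 1.0 p.u., bus 2 draws 1.0 p.u.
network = [(0, 1, 0.1), (1, 2, 0.2), (0, 2, 0.25)]
theta, flows = dc_power_flow(network, [0.0, 1.0, -1.0])
```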
Elevating Tactical DDD Patterns Through Object Calisthenics - Dorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to market, combined with traditionally slow and manual security checks, has created gaps in continuous security, an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface of their application supply chains and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
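A deployment bill of materials can be captured at its simplest as a record of each deployed artifact with a content digest. The sketch below is a hedged illustration with made-up field and artifact names; real DBOM/SBOM formats such as SPDX and CycloneDX define much richer schemas.

```python
import hashlib
import json

def dbom_entry(name, version, content):
    """Record one deployed artifact: name, version, and SHA-256 digest
    of its bytes, so the deployment can later be audited against it."""
    return {
        "name": name,
        "version": version,
        "sha256": hashlib.sha256(content).hexdigest(),
    }

# Hypothetical deployment with a single artifact (illustrative names).
dbom = {
    "deployment": "payments-service",
    "artifacts": [
        dbom_entry("payments-api", "1.4.2", b"fake-binary-bytes"),
    ],
}
doc = json.dumps(dbom, indent=2)
```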
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and on application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 - Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... - UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality - Inflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.