With the surge of modern research focus toward Pervasive Computing, many techniques and challenges need to be addressed in order to effectively create smart spaces and achieve miniaturization. In the process of scaling down to compact devices, the real issues to consider are the information retrieval challenges. In this work, we discuss the aspects of multimedia that make information access challenging. An example pattern recognition scenario is presented, along with the mathematical techniques that can be used to model uncertainty, toward developing a system that can sense, compute, and communicate in a way that makes human life easier, with smart objects assisting from the user's surroundings.
Applying Soft Computing Techniques in Information Retrieval (IJAEMS Journal)
There is a plethora of information available over the internet on a daily basis, and retrieving meaningful, effective information using conventional IR methods is becoming a cumbersome task. Hence, this paper summarizes the different soft computing techniques that can be applied to information retrieval systems to improve their efficiency in acquiring knowledge related to a user's query.
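As a minimal, illustrative sketch of one such soft computing technique (the corpus, term degrees, and function names below are hypothetical, not from the paper), a fuzzy-set retrieval model can grade term membership instead of using binary matching, with the fuzzy union taken as a maximum:

```python
# Minimal fuzzy-set retrieval sketch: each document assigns every term a
# membership degree in [0, 1] rather than a binary present/absent flag.
# The query is matched with fuzzy OR (max) across its terms, so documents
# that strongly cover any query term rank highly.

def fuzzy_or_score(doc_memberships, query_terms):
    """Score a document as the fuzzy union (max) of its query-term degrees."""
    return max(doc_memberships.get(t, 0.0) for t in query_terms)

def rank(docs, query_terms):
    """Return document ids sorted by descending fuzzy score."""
    scored = {d: fuzzy_or_score(m, query_terms) for d, m in docs.items()}
    return sorted(scored, key=scored.get, reverse=True)

# Hypothetical toy corpus: term membership degrees per document.
docs = {
    "d1": {"neural": 0.9, "network": 0.8, "fuzzy": 0.1},
    "d2": {"fuzzy": 0.7, "logic": 0.6},
    "d3": {"neural": 0.3, "fuzzy": 0.2},
}

print(rank(docs, ["neural", "fuzzy"]))  # d1 ranks first via "neural" (0.9)
```

Fuzzy intersection (min) would instead reward documents covering every query term; the choice of operator is one of the design decisions such systems tune.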
Computation of Neural Networks using C# with Respect to Bioinformatics (Sarvesh Kumar)
Neural networks are an emerging field in the era of globalization, based on soft computing techniques and bioinformatics. In the competitive market of new development processes, bioinformatics plays a vital role as a multidisciplinary subject, integrating biological science, medical science, computer science, engineering, chemical science, physical science, and mathematical science, and capturing aspects of human behaviour in software. Nowadays, neural networks and their multidimensional approaches offer ways to solve bioinformatics problems involving imprecision and uncertainty in large, complex search spaces. This paper emphasizes multidimensional neural network approaches within the soft computing paradigm, using C# in bioinformatics with an integrative research methodology. The overall process of these multidimensional approaches is also illustrated with flow charts and diagrams.
International Journal of Engineering Research and Applications (IJERA) is an open-access, online, peer-reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nanotechnology & Science, Power Electronics, Electronics & Communication Engineering, Computational Mathematics, Image Processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design, etc.
Towards a More Secure Web-Based Teleradiology System: A Steganographic Approach (CSCJournals)
While it is possible to make a patient's medical images available to a practicing radiologist online, e.g. through open network systems interconnectivity and email attachments, these methods do not guarantee the security, confidentiality, and tamper-free reliability required of a medical information system infrastructure. The focus of this study was the possibility of securely and covertly transmitting such medical images remotely, for clinical interpretation and diagnosis, through a secure steganographic technique.
We propose a method that uses an Enhanced Least Significant Bit (ELSB) steganographic insertion method to embed a patient's Medical Image (MI) in the spatial domain of a cover digital image, and his/her health records in the frequency domain of the same cover image as a watermark, to ensure tamper detection and non-repudiation. The ELSB method uses the Mersenne Twister (MT) Pseudo-Random Number Generator (PRNG) to randomly embed and conceal the patient's data in the cover image. This technique significantly increases the imperceptibility of the hidden information to steganalysis, thereby enhancing the security of the embedded patient's data.
To measure the effectiveness of the proposed method, the study adopted the Design Science Research (DSR) methodology, a problem-solving paradigm in computing and Information Systems (IS) that involves the design and implementation of novel artefacts and methods, and the analytical testing of their performance, in pursuit of understanding and enhancing an existing method, artefact, or practice.
The fidelity measures of the stego images from the proposed method were compared with those from the traditional Least Significant Bit (LSB) method in order to establish the imperceptibility of the embedded information. The results demonstrated improvements of 1 to 2.6 decibels (dB) in Peak Signal-to-Noise Ratio (PSNR), and Mean Squared Error (MSE) ratio improvements of up to 0.4, for the proposed method.
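For readers unfamiliar with the underlying mechanics, here is a simplified sketch of PRNG-randomized LSB embedding and the PSNR fidelity measure; it is not the authors' ELSB implementation, and the payload, key, and image are made up. Python's random module happens to use the Mersenne Twister internally, so a shared seed plays the role of the stego key:

```python
import random
import numpy as np

def embed(cover, bits, key):
    """Hide a bit string in randomly chosen pixel LSBs of a greyscale image."""
    stego = cover.copy()
    rng = random.Random(key)              # Mersenne Twister seeded with the key
    positions = rng.sample(range(cover.size), len(bits))
    flat = stego.ravel()
    for pos, bit in zip(positions, bits):
        flat[pos] = (flat[pos] & 0xFE) | int(bit)   # overwrite only the LSB
    return stego

def extract(stego, n_bits, key):
    """Recover the hidden bits; the same key reproduces the pixel order."""
    rng = random.Random(key)
    positions = rng.sample(range(stego.size), n_bits)
    flat = stego.ravel()
    return "".join(str(flat[pos] & 1) for pos in positions)

def psnr(cover, stego):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = np.mean((cover.astype(float) - stego.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255 ** 2 / mse)

cover = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
secret = "0110100001101001"          # hypothetical 16-bit payload
stego = embed(cover, secret, key=42)
assert extract(stego, len(secret), key=42) == secret
print(f"PSNR: {psnr(cover, stego):.1f} dB")   # high PSNR -> near-imperceptible
```

Because each embedded bit changes a pixel value by at most 1, the MSE stays tiny and the PSNR stays high, which is the imperceptibility property the abstract measures.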
DATA AUGMENTATION TECHNIQUES AND TRANSFER LEARNING APPROACHES APPLIED TO FACI... (IJAIA)
The facial expression is the first thing we pay attention to when we want to understand a person's state of mind. Thus, the ability to recognize facial expressions automatically is a very interesting research field. In this paper, because of the small size of available training datasets, we propose a novel data augmentation technique that improves performance on the recognition task. We apply geometrical transformations and build GAN models from scratch that can generate new synthetic images for each emotion type. We then fine-tune pretrained convolutional neural networks with different architectures on the augmented datasets. To measure the generalization ability of the models, we apply an extra-database protocol: we train models on the augmented versions of the training dataset and test them on two different databases. The combination of these techniques allows us to reach average accuracy values on the order of 85% for the InceptionResNetV2 model.
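The geometric-transformation half of such an augmentation pipeline (the GAN half is omitted here) can be sketched in a few lines; the 48x48 input size, 44x44 crop size, and flip probability are illustrative, not the paper's exact configuration:

```python
import numpy as np

def random_crop(img, out_h, out_w, rng):
    """Crop a random (out_h, out_w) window from an H x W image."""
    h, w = img.shape[:2]
    top = rng.integers(0, h - out_h + 1)
    left = rng.integers(0, w - out_w + 1)
    return img[top:top + out_h, left:left + out_w]

def augment(img, rng):
    """One augmented sample: random horizontal flip plus random crop."""
    if rng.random() < 0.5:
        img = img[:, ::-1]      # horizontal flip preserves the expression label
    return random_crop(img, 44, 44, rng)

rng = np.random.default_rng(0)
face = rng.integers(0, 256, (48, 48), dtype=np.uint8)  # stand-in 48x48 image
batch = np.stack([augment(face, rng) for _ in range(8)])
print(batch.shape)  # eight distinct views of one training image
```

The key constraint is that every transformation must be label-preserving: a flipped or slightly shifted face still shows the same emotion, so the dataset grows without new annotation effort.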
REVIEWING PROCESS MINING APPLICATIONS AND TECHNIQUES IN EDUCATION (IJAIA)
Process Mining (PM) emerged from business process management but has recently been applied to educational data, where it has been found to facilitate understanding of the educational process. Educational Process Mining (EPM) bridges the gap between process analysis and data analysis, based on the techniques of model discovery, conformance checking, and extension of existing process models. We present a systematic review of the current status of research in the EPM domain, focusing on application domains, techniques, tools, and models, to highlight the use of EPM in comprehending and improving educational processes.
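Model discovery, the first of the techniques named above, can be illustrated with a directly-follows graph mined from an event log; the student activities and traces below are hypothetical:

```python
from collections import Counter

def directly_follows(log):
    """Count how often activity a is immediately followed by b across traces."""
    dfg = Counter()
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            dfg[(a, b)] += 1
    return dfg

# Hypothetical student event log: one trace (ordered activity list) per session.
log = [
    ["login", "view_lecture", "quiz", "logout"],
    ["login", "quiz", "view_lecture", "quiz", "logout"],
    ["login", "view_lecture", "logout"],
]

dfg = directly_follows(log)
for (a, b), n in sorted(dfg.items(), key=lambda kv: -kv[1]):
    print(f"{a} -> {b}: {n}")
```

Discovery algorithms build a process model from counts like these; conformance checking then replays new traces against the model to flag deviations from the expected educational process.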
Tutorial delivered at ECML-PKDD 2021.
TL;DR: This tutorial reviews recent developments in drug discovery using machine learning methods.
Powered by neural networks, modern machine learning has enjoyed great successes in data-intensive domains such as computer vision and language, where humans naturally perform well. Machine learning equipped with reasoning is now accelerating fields that traditionally require deep expertise, such as physics, chemistry, and biomedicine. This tutorial provides an overview of how machine learning and reasoning are speeding up and lowering the cost of drug discovery, including how machine learning can help in a wide range of areas such as novel molecule identification, protein representation, drug-target binding, drug repurposing, generative drug design, chemical reactions, retrosynthesis planning, drug-drug interactions, and safety assessment. We also discuss relevant machine learning models for graph classification, molecular graph transformation, drug generation using deep generative models and reinforcement learning, and chemical reasoning.
MITIGATION TECHNIQUES TO OVERCOME DATA HARM IN MODEL BUILDING FOR ML (IJAIA)
Given the impact of Machine Learning (ML) on individuals and society, understanding how harm might occur throughout the ML life cycle is more critical than ever. By offering a framework to identify distinct potential sources of downstream harm in the ML pipeline, the paper demonstrates the importance of choices throughout the distinct phases of data collection, development, and deployment that extend far beyond model training. Relevant mitigation techniques are also suggested, rather than merely relying on generic notions of what counts as fairness.
An Extensive Review on Generative Adversarial Networks (GANs) (IJTSRD)
This paper provides a high-level understanding of Generative Adversarial Networks (GANs). It covers the workings of GANs by explaining the background idea of the framework, the types of GANs used in industry, their advantages and disadvantages, the history of how GANs have been developed and enhanced over time, and some applications where GANs excel. Atharva Chitnavis | Yogeshchandra Puranik, "An Extensive Review on Generative Adversarial Networks (GAN's)", published in International Journal of Trend in Scientific Research and Development (IJTSRD), ISSN: 2456-6470, Volume 5, Issue 4, June 2021. URL: https://www.ijtsrd.com/papers/ijtsrd42357.pdf Paper URL: https://www.ijtsrd.com/computer-science/artificial-intelligence/42357/an-extensive-review-on-generative-adversarial-networks-gan’s/atharva-chitnavis
Survey on evolutionary computation techniques and their applications in dif... (IJIT Journal)
In computer science, 'evolutionary computation' is an algorithmic tool based on evolution. It implements random variation, reproduction, and selection by altering and moving data within a computer, and it supports building, applying, and studying algorithms based on the Darwinian principles of natural selection. In this paper, different evolutionary computation techniques used in certain applications, specifically image processing, cloud computing, and grid computing, are briefly reviewed. This work is an effort to help researchers from different fields learn about the evolutionary computation techniques applicable in the above-mentioned areas.
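The variation-reproduction-selection loop described above can be sketched with a toy genetic algorithm; the problem (the classic OneMax task of maximizing the number of 1-bits in a binary string) and all parameters are illustrative:

```python
import random

def evolve(n_bits=20, pop_size=30, generations=60, seed=1):
    """Toy GA: tournament selection, one-point crossover, bit-flip mutation."""
    rng = random.Random(seed)
    fitness = sum                                     # count of 1-bits
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            p1 = max(rng.sample(pop, 3), key=fitness)   # tournament selection
            p2 = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, n_bits)              # one-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(n_bits):                     # bit-flip mutation
                if rng.random() < 1 / n_bits:
                    child[i] ^= 1
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

best = evolve()
print(sum(best), "of 20 bits set")   # converges toward all ones
```

The same skeleton carries over to the surveyed applications: only the encoding and the fitness function change, e.g. a fitness measuring image segmentation quality or cloud task-scheduling cost.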
A novel ensemble modeling for intrusion detection system (IJECE)
The vast increase in data through internet services has made computer systems more vulnerable and difficult to protect from malicious attacks, so intrusion detection systems (IDSs) must become more potent in monitoring intrusions. To this end, we build an effective intrusion detection system architecture that employs a simple classification model and achieves low false alarm rates and high accuracy. Notably, IDSs endure enormous amounts of traffic data containing redundant and irrelevant features, which negatively affect IDS performance; good feature selection approaches reduce such features and attain better classification accuracy. This paper proposes a novel ensemble model for IDSs based on two algorithms: Fuzzy Ensemble Feature Selection (FEFS) and Fusion of Multiple Classifiers (FMC). FEFS unifies five feature scores, obtained using feature-class distance functions and aggregated with the fuzzy union operation. FMC, in turn, fuses three classifiers and operates via an ensemble decision function. Experiments on the KDD Cup 99 data set show that the proposed system outperforms well-known methods such as Support Vector Machines (SVMs), K-Nearest Neighbors (KNN), and Artificial Neural Networks (ANNs). Our experiments clearly confirm the value of an ensemble methodology for modeling IDSs; the resulting system is robust and efficient.
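The two aggregation ideas in this abstract, fuzzy union over feature scores and fusion of multiple classifiers, can be sketched generically; the scores, labels, and the simple majority vote below are stand-ins for the paper's exact distance functions and decision function:

```python
def fuzzy_union(score_lists):
    """Fuzzy union of several per-feature score lists: elementwise max."""
    return [max(scores) for scores in zip(*score_lists)]

def majority_vote(predictions):
    """Fuse classifier decisions per sample by majority vote."""
    return [max(set(votes), key=votes.count) for votes in zip(*predictions)]

# Hypothetical normalized scores for 4 features from 3 distance functions.
scores = [
    [0.2, 0.9, 0.4, 0.1],
    [0.5, 0.6, 0.3, 0.2],
    [0.1, 0.8, 0.7, 0.0],
]
print(fuzzy_union(scores))            # [0.5, 0.9, 0.7, 0.2]

# Hypothetical labels from 3 classifiers for 4 traffic records.
preds = [
    ["attack", "normal", "attack", "normal"],
    ["attack", "attack", "normal", "normal"],
    ["normal", "normal", "attack", "attack"],
]
print(majority_vote(preds))           # ['attack', 'normal', 'attack', 'normal']
```

Features whose fuzzy-union score falls below a threshold would be dropped before training, which is how ensemble feature selection trims the redundant traffic features the abstract describes.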
Observer/Controller and Ontology/Rule-based Architecture:
A Design Approach for Context-aware Pervasive Computing Systems.
By:
Amina HAMEURLAINE
January 21st, 2016
University of Constantine 2 - Abdelhamid Mehri
Faculty of New Technologies of Information and Communication
Department of Computer Sciences and Applications
MISC Laboratory
Applications of Artificial Neural Networks in Civil Engineering (Pramey Zode)
An artificial brain-like network based on mathematical algorithms, developed using a numerical computing environment, is called an 'Artificial Neural Network' (ANN). Many civil engineering problems that require an understanding of physical processes are time-consuming and inaccurate to evaluate using conventional approaches. In this regard, ANNs have been seen as a reliable and practical alternative for solving such problems. A literature review reveals that ANNs have already been used to solve numerous civil engineering problems. This study describes some cases where ANNs have been used and also discusses their future scope.
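At its core, the "mathematical algorithm" of an ANN is a stack of weighted sums passed through nonlinearities. A forward pass for a toy one-hidden-layer network is sketched below; the weights are random and the feature interpretation is hypothetical, not trained on any civil engineering data:

```python
import numpy as np

def sigmoid(x):
    """Smooth squashing nonlinearity mapping any real value into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, w1, b1, w2, b2):
    """One hidden layer: inputs -> sigmoid hidden units -> linear output."""
    hidden = sigmoid(x @ w1 + b1)
    return hidden @ w2 + b2

rng = np.random.default_rng(0)
# Hypothetical inputs: 5 samples of 3 features (e.g. mix ratio, age, slump).
x = rng.random((5, 3))
w1, b1 = rng.normal(size=(3, 4)), np.zeros(4)   # 4 hidden neurons
w2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # 1 output (e.g. strength)
y = forward(x, w1, b1, w2, b2)
print(y.shape)   # one prediction per sample
```

Training would adjust w1, b1, w2, b2 by backpropagation against measured data, which is how such a network learns a physical relationship, e.g. concrete mix properties to compressive strength, without an explicit physical model.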
Proposing a new method of image classification based on the AdaBoost deep bel... (TELKOMNIKA Journal)
Image classification has many applications, and various algorithms have been presented for it, each with its own weaknesses and strengths. Reducing the error rate is an issue that much research has addressed, and this research aims to tackle it with hybrid methods and deep learning. Hybrid methods were developed to improve the results of single-component methods. A deep belief network (DBN), in turn, is a generative probabilistic model with multiple layers of latent variables that can be applied to unlabeled data; it is an unsupervised method in which all layers are one-way directed layers except for the last. The goal of this research project was to use a combination of the AdaBoost method and the deep belief network to classify images, and to obtain better results than previously reported. A combination of the deep belief network and the AdaBoost method was used to boost learning, and the network's potential was enhanced by making the entire network recursive. The method was tested on the MNIST dataset, and the results indicate a decrease in the error rate for the proposed method compared to the AdaBoost and deep belief network methods alone.
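The AdaBoost half of such a combination (without the DBN component) can be sketched with decision stumps on a one-dimensional toy problem; this is the generic boosting algorithm, not the paper's recursive network:

```python
import numpy as np

def fit_stump(x, y, w):
    """Best threshold/polarity decision stump for sample weights w (y in {-1,+1})."""
    best = None
    for thresh in np.unique(x):
        for polarity in (1, -1):
            pred = np.where(polarity * (x - thresh) >= 0, 1, -1)
            err = np.sum(w[pred != y])
            if best is None or err < best[0]:
                best = (err, thresh, polarity)
    return best

def adaboost(x, y, rounds=10):
    """Return (alpha, thresh, polarity) triples for a weighted stump ensemble."""
    w = np.full(len(x), 1 / len(x))
    ensemble = []
    for _ in range(rounds):
        err, thresh, polarity = fit_stump(x, y, w)
        err = max(err, 1e-10)                        # avoid division by zero
        alpha = 0.5 * np.log((1 - err) / err)        # stump weight
        pred = np.where(polarity * (x - thresh) >= 0, 1, -1)
        w *= np.exp(-alpha * y * pred)               # upweight the mistakes
        w /= w.sum()
        ensemble.append((alpha, thresh, polarity))
    return ensemble

def predict(ensemble, x):
    """Sign of the alpha-weighted vote of all stumps."""
    score = sum(a * np.where(p * (x - t) >= 0, 1, -1) for a, t, p in ensemble)
    return np.where(score >= 0, 1, -1)

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([-1, -1, -1, 1, 1, 1])        # separable toy labels
model = adaboost(x, y)
print(predict(model, x))                    # recovers the training labels
```

In the paper's hybrid setting, the weak learners would be DBN-derived classifiers rather than stumps; the reweighting loop that focuses later learners on earlier mistakes stays the same.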
DEEP-LEARNING-BASED HUMAN INTENTION PREDICTION WITH DATA AUGMENTATION (IJAIA)
Data augmentation has been broadly applied in training deep-learning models to increase the diversity of data. This study investigates the effectiveness of different data augmentation methods for deep-learning-based human intention prediction when only limited training data is available. In our experiment, a human participant pitches a ball to nine potential targets, and we aim to predict which target the participant pitches the ball to. First, the effectiveness of 10 data augmentation groups is evaluated on a single-participant data set using RGB images. Second, the best data augmentation method on the single-participant data set (random cropping) is further evaluated on a multi-participant data set to assess its generalization ability. Finally, the effectiveness of random cropping on fused RGB image and optical flow data is evaluated on both the single- and multi-participant data sets. Experimental results show that: 1) data augmentation methods that crop or deform images can improve prediction performance; 2) random cropping generalizes to the multi-participant data set (prediction accuracy improves from 50% to 57.4%); and 3) random cropping with fused RGB image and optical flow data further improves prediction accuracy from 57.4% to 63.9% on the multi-participant data set.
Efficiency of LSB steganography on medical information (IJECE)
The development of the medical field has led to the transformation of communication from paper-based information into digital form. Medical information security has become a great concern as the medical field moves toward the digital world: patient information, disease diagnoses, and so on are all stored in digital images. Therefore, to improve medical information security, it is essential to secure patient information and the growing volume of communication transferred between patients, clients, medical practitioners, and sponsors. The core aim of this research is to provide a comprehensive overview of research trends in the LSB steganography technique as applied to securing medical information such as text, images, audio, video, and graphics, and to discuss the efficiency of the LSB technique. The survey findings show that the LSB steganography technique is efficient in securing medical information from intruders.
10 Insightful Quotes On Designing A Better Customer ExperienceYuan Wang
In an ever-changing landscape of one digital disruption after another, companies and organisations are looking for new ways to understand their target markets and engage them better. Increasingly they invest in user experience (UX) and customer experience design (CX) capabilities by working with a specialist UX agency or developing their own UX lab. Some UX practitioners are touting leaner and faster ways of developing customer-centric products and services, via methodologies such as guerilla research, rapid prototyping and Agile UX. Others seek innovation and fulfilment by spending more time in research, being more inclusive, and designing for social goods.
Experience is more than just an interface. It is a relationship, as well as a series of touch points between your brand and your customer. Here are our top 10 highlights and takeaways from the recent UX Australia conference to help you transform your customer experience design.
For full article, continue reading at https://yump.com.au/10-ways-supercharge-customer-experience-design/
How to Build a Dynamic Social Media PlanPost Planner
Stop guessing and wasting your time on networks and strategies that don’t work!
Join Rebekah Radice and Katie Lance to learn how to optimize your social networks, the best kept secrets for hot content, top time management tools, and much more!
Watch the replay here: bit.ly/socialmedia-plan
http://inarocket.com
Learn BEM fundamentals as fast as possible. What is BEM (Block, element, modifier), BEM syntax, how it works with a real example, etc.
20 Ideas for your Website Homepage ContentBarry Feldman
Perplexed about what to put on your website home? Every company deals with this tough challenge. The 20 ideas in this presentation should give you a strong starting point.
Content personalisation is becoming more prevalent. A site, it's content and/or it's products, change dynamically according to the specific needs of the user. SEO needs to ensure we do not fall behind of this trend.
Abstract: Detection of fake news based on deep learning techniques is a major issue used to mislead people. For
the experiments, several types of datasets, models, and methodologies have been used to detect fake news. Also,
most of the datasets contain text id, tweets id, and user-based id and user-based features. To get the proper results
and accuracy various models like CNN (Convolution neural network), DEEP CNN, and LSTM (Long short-term
memory) are used
Pattern recognition using context dependent memory model (cdmm) in multimodal...ijfcstjournal
Pattern recognition is one of the prime concepts in current technologies in both private and public sectors.
The analysis and recognition of two or more patterns is a complex task due to several factors. The
consideration of two or more patterns requires huge space for keeping the storage media as well as
computational aspect. Vector logic gives very good strategy for recognition of patterns. This paper
proposes pattern recognition in multimodal authentication system with the use of vector logic and makes
the computation model hard and less error rate. Using PCA two to three biometric patterns will be fusion
and then various key sizes will be extracted using LU factorization approach. The selected keys will be
combined using vector logic, which introduces a memory model often called Context Dependent Memory
Model (CDMM) as computational model in multimodal authentication system that gives very accurate and
very effective outcome for authentication as well as verification. In the verification step, Mean Square
Error (MSE) and Normalized Correlation (NC) as metrics to minimize the error rate for the proposed
model and the performance analysis will be presented.
A HUMAN-CENTRIC APPROACH TO GROUP-BASED CONTEXT-AWARENESSIJNSA Journal
The emerging need for qualitative approaches in context-aware information processing calls for proper modelling of context information and efficient handling of its inherent uncertainty resulted from human interpretation and usage. Many of the current approaches to context-awareness either lack a solid theoretical basis for modelling or ignore important requirements such as modularity, high-order uncertainty management and group-based context-awareness. Therefore, their real-world application and extendibility remains limited. In this paper, we present f-Context as a service-based contextawareness framework, based on language-action perspective (LAP) theory for modelling. Then we identify some of the complex, informational parts of context which contain high-order uncertainties due to differences between members of the group in defining them. An agent-based perceptual computer architecture is proposed for implementing f-Context that uses computing with words (CWW) for handling uncertainty. The feasibility of f-Context is analyzed using a realistic scenario involving a group of mobile users. We believe that the proposed approach can open the door to future research on context-awareness by offering a theoretical foundation based on human communication, and a service-based layered architecture which exploits CWW for context-aware, group-based and platform-independent access to information systems.
Proactive Intelligent Home System Using Contextual Information and Neural Net...IJERA Editor
Nowadays, cities around the world intend to use information technology to improve the lives of their citizens.
Future smart cities will incorporate digital data and technology to interact differently with their human
inhabitants.
Among the key component of a smart city, we find the smart home component. It is an autonomic environment
that can provide various smart services by considering the user’s context information. Several methods are used
in context-aware system to provide such services. In this paper, we propose an approach to offer the most
relevant services to the user according to any significant change of his context environment. The proposed
approach is based on the use of context history information together with user profiling and machine learning
techniques. Experimentations show that the proposed solution can efficiently provide the most useful services to
the user in an intelligent home environment.
The technologies of ai used in different corporate worldEr. rahul abhishek
Artificial intelligence (AI) is making its way back into the mainstream of corporate technology, this time at the core of business systems which are providing competitive advantage in all sorts of industries, including electronics, manufacturing, software, medicine, entertainment, engineering and communications, designed to leverage the capabilities of humans rather than replace them, today’s AI technology enables an extraordinary array of applications that forge new connections among people, computers, knowledge, and the physical world. Some AI enabled applications are information distribution and retrieval, database mining, product design, manufacturing, inspection, training, user support, surgical planning, resource scheduling, and complex resource management.
ANALYSIS OF SYSTEM ON CHIP DESIGN USING ARTIFICIAL INTELLIGENCEijesajournal
Automation is a powerful word that lies everywhere. It shows that without automation, application will not
get developed. In a semiconductor industry, artificial intelligence played a vital role for implementing the
chip based design through automation .The main advantage of applying the machine learning & deep
learning technique is to improve the implementation rate based upon the capability of the society. The
main objective of the proposed system is to apply the deep learning using data driven approach for
controlling the system. Thus leads to a improvement in design, delay ,speed of operation & costs.
Through this system, huge volume of data’s that are generated by the system will also get control.
ANALYSIS OF SYSTEM ON CHIP DESIGN USING ARTIFICIAL INTELLIGENCEijesajournal
Automation is a powerful word that lies everywhere. It shows that without automation, application will not
get developed. In a semiconductor industry, artificial intelligence played a vital role for implementing the
chip based design through automation .The main advantage of applying the machine learning & deep learning technique is to improve the implementation rate based upon the capability of the society. The main objective of the proposed system is to apply the deep learning using data driven approach for controlling the system. Thus leads to a improvement in design, delay ,speed of operation & costs.Through this system, huge volume of data’s that are generated by the system will also get control.
ANALYSIS OF SYSTEM ON CHIP DESIGN USING ARTIFICIAL INTELLIGENCEijesajournal
Automation is a powerful word that lies everywhere. It shows that without automation, application will not get developed. In a semiconductor industry, artificial intelligence played a vital role for implementing the chip based design through automation .The main advantage of applying the machine learning & deep learning technique is to improve the implementation rate based upon the capability of the society. The main objective of the proposed system is to apply the deep learning using data driven approach for controlling the system. Thus leads to a improvement in design, delay ,speed of operation & costs. Through this system, huge volume of data’s that are generated by the system will also get control.
Data Mining Framework for Network Intrusion Detection using Efficient TechniquesIJAEMSJORNAL
The implementation measures the classification accuracy on benchmark datasets after combining SIS and ANNs. In order to put a number on the gains made by using SIS as a strategic tool in data mining, extensive experiments and analyses are carried out. The predicted results of this investigation will have implications for both theoretical and applied settings. Predictive models in a wide variety of disciplines may benefit from the enhanced classification accuracy enabled by SIS inside ANNs. An invaluable resource for scholars and practitioners in the fields of AI and data mining, this study adds to the continuing conversation about how to maximize the efficacy of machine learning methods.
Pattern Recognition using Artificial Neural NetworkEditor IJCATR
An artificial neural network (ANN) usually called neural network. It can be considered as a resemblance to a paradigm
which is inspired by biological nervous system. In network the signals are transmitted by the means of connections links. The links
possess an associated way which is multiplied along with the incoming signal. The output signal is obtained by applying activation to
the net input NN are one of the most exciting and challenging research areas. As ANN mature into commercial systems, they are likely
to be implemented in hardware. Their fault tolerance and reliability are therefore vital to the functioning of the system in which they
are embedded. The pattern recognition system is implemented with Back propagation network and Hopfield network to remove the
distortion from the input. The Hopfield network has high fault tolerance which supports this system to get the accurate output.
The Survey of Data Mining Applications And Feature Scope IJCSEIT Journal
In this paper we have focused a variety of techniques, approaches and different areas of the research which
are helpful and marked as the important field of data mining Technologies. As we are aware that many MNC’s
and large organizations are operated in different places of the different countries. Each place of operation
may generate large volumes of data. Corporate decision makers require access from all such sources and
take strategic decisions .The data warehouse is used in the significant business value by improving the
effectiveness of managerial decision-making. In an uncertain and highly competitive business
environment, the value of strategic information systems such as these are easily recognized however in
today’s business environment, efficiency or speed is not the only key for competitiveness. This type of huge
amount of data’s are available in the form of tera- to peta-bytes which has drastically changed in the areas
of science and engineering. To analyze, manage and make a decision of such type of huge amount of data
we need techniques called the data mining which will transforming in many fields. This paper imparts more
number of applications of the data mining and also o focuses scope of the data mining which will helpful in
the further research.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology is pushing into IT I was wondering myself, as an “infrastructure container kubernetes guy”, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefit’s both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
"Impact of front-end architecture on development cost", Viktor TurskyiFwdays
I have heard many times that architecture is not important for the front-end. Also, many times I have seen how developers implement features on the front-end just following the standard rules for a framework and think that this is enough to successfully launch the project, and then the project fails. How to prevent this and what approach to choose? I have launched dozens of complex projects and during the talk we will analyze which approaches have worked for me and which have not.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
A scenario based approach for dealing with
International Journal on Computational Sciences & Applications (IJCSA) Vol.4, No.2, April 2014
DOI:10.5121/ijcsa.2014.4204
A Scenario Based Approach For Dealing With
Challenges In A Pervasive Computing
Environment
Divyajyothi M G¹, Rachappa² and Dr. D H Rao³
¹,² Research Scholar, Department of Computer Science, Jain University, Bangalore
³ Principal and Director, Jain College of Engineering, Belgaum
ABSTRACT
With the surge of modern research focus towards Pervasive Computing, many techniques and challenges need to be addressed in order to effectively create smart spaces and achieve miniaturization. In the process of scaling down to compact devices, the real issues to consider are the Information Retrieval challenges. In this work, we discuss the aspects of multimedia that make information access challenging. An example pattern recognition scenario is presented, along with the mathematical techniques that can be used to model uncertainty, towards developing a system that can sense, compute and communicate in a way that makes human life easier, with smart objects assisting from the surroundings.
KEYWORDS
Pervasive computing, Multimedia retrieval, Modelling uncertainty, Fuzzy theory
1. INTRODUCTION
Tremendous growth and research contributions continue towards Mark Weiser’s vision [4][5] of developing a system that can sense, compute and communicate in a way that makes human life easier, with smart objects assisting from the surroundings. In this process of migrating towards a smart environment, the real challenges to consider are the performance issues, data management, software maintenance, energy efficiency, trust, security and privacy of the computing device to be designed [1][2][3][6][7]. The demand for pervasive multimedia services has increased widely, with ubiquitous computing expected in almost all areas: health care, entertainment, digital libraries, hotels, classrooms, smart campuses, automobiles, streets, airports and social networks. Given the amount of multimedia data we have, data analysis, indexing, retrieval, distribution and management become even more challenging, since all of these must be adapted to the human mode of thinking, expectations and vision [9]. This is where context-aware multimedia computing becomes extremely important. In this paper, the aspects that make multimedia retrieval challenging are discussed. An example scenario is presented, along with an explanation of tools that can be readily used to model uncertainty.
2. MULTIMEDIA RETRIEVAL CHALLENGES
2.1. Real-Time Constraints
Multimedia involves very large amounts of data. Multimedia retrieval refers to extracting
semantic information from this large amount of data available in various forms. Thus multimedia
retrieval needs efficient techniques and algorithms for redundancy elimination, feature extraction
and categorization.
2.2. Bridging the Semantic Gap
There is a semantic gap between the semantics the searcher attaches to visual or other multimedia information and the semantics extracted from the digital representation stored for the multimedia objects. Structuring and summarizing the semantic content in order to bridge this gap is therefore a daunting task.
2.3. “Multi” in Multimedia
Analysing the characteristics of all the myriad possible forms of media, and retrieving them in a standard way from these vast amounts of multimedia information, is quite a daunting task. Some forms of multimedia: text, sketches, videos, graphical images, speech, sound, movies.
2.4. Effective Multimedia Search Engines
An effective search mechanism is needed when answering a query against a document database. The most common problems that occur in this process are synonymy and polysemy.
Synonymy – different words share the same meaning, so a search engine may fail to detect a subject S in an article A that expresses it in other words.
Polysemy – a single word may have many meanings, so irrelevant results may be returned.
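As an illustrative sketch of these two failure modes (the documents, vocabulary and synonym table below are invented for this example, not taken from any real retrieval system), a naive exact-keyword search exhibits both problems directly:

```python
# Minimal exact-keyword search illustrating synonymy and polysemy.
# The documents and terms here are invented for illustration only.

documents = {
    "A1": "a film about the jaguar in the rainforest",  # animal sense
    "A2": "review of the new jaguar sports car",        # vehicle sense
    "A3": "a movie about big cats of South America",    # uses synonym "movie"
}

def search(query_terms, docs):
    """Return ids of documents containing every query term verbatim."""
    return [doc_id for doc_id, text in docs.items()
            if all(term in text.split() for term in query_terms)]

# Polysemy: "jaguar" matches both the animal article and the car review.
print(search(["jaguar"], documents))   # ['A1', 'A2'] - one is irrelevant

# Synonymy: querying "film" misses A3, which says "movie" instead.
print(search(["film"], documents))     # ['A1'] - relevant A3 is missed

# One simple mitigation: expand the query with known synonyms.
synonyms = {"film": {"film", "movie"}}

def expanded_search(term, docs):
    """Match any synonym of the query term."""
    variants = synonyms.get(term, {term})
    return [doc_id for doc_id, text in docs.items()
            if variants & set(text.split())]

print(expanded_search("film", documents))  # ['A1', 'A3']
```

Query expansion addresses synonymy but not polysemy; disambiguating word senses requires context beyond exact token matching.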
2.5. Protocols for Multimedia Networks
For instance, in order to constantly receive unobtrusive connectivity and responses from network devices embedded in the environment, the computing speed must invariably be good, which can be made possible if computation takes place in parallel.
2.6. Multimedia Languages for Multi-Channel Content
We need more ways to distribute our content (video, audio, speech, images and text) for excellent ubiquitous behaviour. Efficient, scalable codecs need to be proposed for effective universal multimedia access.
2.7. Multimedia Services for Intelligent Pervasive Computing
It is necessary to address the security and privacy issues involved in performing secure transactions in a ubiquitous environment. More hybrid security measures have to be implemented, keeping in mind the performance of the devices used in pervasive computing environments.
2.8. Scalable Algorithms / Techniques
It is time to consider whether the existing retrieval, indexing, mining, streaming, delivery and personalization algorithms are feasible on compact pervasive devices. If not, new algorithms have to be formulated based on the foundations we already have. In such cases, techniques from quantum mechanics may provide useful results.
3. EXAMPLE SCENARIO
3.1. Complete Pattern Recognition System
3.1.1 Sensor for Feature Extraction
A complete pattern recognition system requires a sensor and a feature extraction mechanism, together with a classification/description scheme based on a learning strategy/paradigm:
a. Supervised Learning – based on the availability of a training set (a set of labelled patterns).
b. Unsupervised Learning – based on the statistical regularities of the patterns.
3.1.2 Classification Scheme
Requires a classification/description scheme which uses one of the following:
a. Statistical (decision theoretic) – based on statistical characterizations of patterns which
are generated by a probabilistic system. For a probabilistic system, a naive Bayes classifier
can be used very effectively: it learns from a given training set under the assumption that
each feature contributes independently of the others, and its results are based on the
maximum-likelihood property. Example of document classification: documents can be
classified in many ways by their content, such as subject, text and graphical portions,
images and plain text, mathematical equations and numbers, noun sorters, mass dividers.
In general, each document classification task has its own challenges.
b. Syntactic (structural) – based on the structural interrelationships of features, and
requires a clear structure for the patterns. An appropriate grammar is the core of any
syntactic pattern recognition process, and one must make sure that the grammars are
established from a priori knowledge about the objects or scenes to be recognized.
Syntactic classifiers: structural information in images can be represented effectively
using fuzzy set theory. Imprecise relationships between objects can be defined as spatial
fuzzy sets, and a formal language can readily be used to represent such structures. Fuzzy
grammars can be generated for the objects to be recognized, with the differences between
the structures of the classes encoded as different grammars. Another example is
diagnosis of the heart using ECG measurements.
c. Neural classifiers – based on bionics-related concepts for recognizing patterns. Bionics
is the science of applying biological concepts to electronic machines; the neural approach
applies these biological concepts so that machines can recognize patterns. The field of
artificial neural networks has emerged as an outcome of this effort, with some interesting
results.
Neural classifiers: Neuro Excel Classifier is an efficient, quick, powerful and easy-to-use
neural network software tool widely used for classifying data in Microsoft Excel. Its main
objective is to aid experts in designing real-world data mining and pattern recognition
tasks. One of its major benefits is its ability to hide the underlying density and complexity
of the neural network processes by providing graphs and statistics, so that the results can
be easily understood. The algorithms and techniques used in Neuro Excel Classifier are
only those that are reliable and proven to be efficient. Another feature is its ability to
integrate seamlessly with Microsoft Excel. [8]
Appropriate algorithms for pattern recognition are then applied based on the target system.
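To make the statistical (decision-theoretic) scheme concrete, the following is a minimal naive Bayes document classifier sketched from first principles. The tiny training corpus, class labels and test phrases are invented for this illustration; the maximum-likelihood estimates use add-one (Laplace) smoothing, a common practical choice not discussed above:

```python
# A minimal naive Bayes document classifier (statistical scheme).
# Training data and labels are invented for illustration only.
import math
from collections import Counter, defaultdict

train = [
    ("sports", "the team won the match"),
    ("sports", "great goal in the final match"),
    ("science", "the experiment confirms the theory"),
    ("science", "new theory of particle physics"),
]

# Collect per-class word counts for maximum-likelihood estimation.
class_docs = defaultdict(list)
for label, text in train:
    class_docs[label].extend(text.split())

vocab = {w for words in class_docs.values() for w in words}
priors = {c: sum(1 for l, _ in train if l == c) / len(train) for c in class_docs}
word_counts = {c: Counter(ws) for c, ws in class_docs.items()}

def log_posterior(text, c):
    """Log of prior times likelihood, with add-one (Laplace) smoothing;
    each word is assumed independent given the class (naive assumption)."""
    total = sum(word_counts[c].values())
    score = math.log(priors[c])
    for w in text.split():
        score += math.log((word_counts[c][w] + 1) / (total + len(vocab)))
    return score

def classify(text):
    """Pick the class with the highest posterior score."""
    return max(class_docs, key=lambda c: log_posterior(text, c))

print(classify("the final match"))       # sports
print(classify("a new physics theory"))  # science
```

The independence assumption is rarely true of real text, yet the classifier often works well in practice, which is why it is a standard baseline for document classification.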
4. MODELLING UNCERTAINTY
Every situation in the world around us can be represented as a mathematical model, and all these
models are established using the building blocks of set theory, the branch of mathematics that
deals with the properties of sets. According to classical set theory, the membership of elements in
a set is based on a bivalent condition: an element either belongs to the set or does not belong to it.
The method of assessing membership is thus based on the crisp condition of whether or not an
element belongs to any given set S, and the entire process is carried out in binary terms.
Generalizing classical set theory gives us fuzzy set theory, in which the membership of elements
in a set is described with the aid of a membership function valued in the real unit interval [0, 1].
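The bivalent membership of a classical set can be sketched as a characteristic function; a minimal illustration in Python, where the set of even numbers below 10 is a hypothetical example:

```python
# Classical (crisp) set membership: the bivalent condition means an
# element either belongs to a set (grade 1) or does not (grade 0).
def crisp_membership(x, s):
    """Characteristic function of a classical set s."""
    return 1 if x in s else 0

# Hypothetical example set: even numbers below 10.
evens_under_10 = {0, 2, 4, 6, 8}
print(crisp_membership(4, evens_under_10))  # 1: 4 belongs to the set
print(crisp_membership(5, evens_under_10))  # 0: 5 does not
```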
4.1. Fuzzy Set Theory
Fuzzy sets are considered to be an extension of classical set theory. Classical set theory fails to
handle imprecise concepts, since it relies on the bivalent condition. So when it comes to expert
systems and recognition systems, fuzzy set theory serves better, as it has the ability to handle
inherently imprecise concepts effectively. It is a widely used mathematical method in modern
research because it is well organised and can handle imperfect knowledge and vagueness in an
intelligent manner. We should be aware that not all concepts can be converted into the form of an
equation. For example, consider the problem of expressing the term "hotness" as a mathematical
equation. Since "hotness" is not a quantity, it cannot be expressed in the form of an equation. Still,
common people have an idea of what is "hot", and agree that there is no sharp cut-off between
"hot" and "not hot": there is no temperature N such that something is "hot" at N degrees but "not
hot" at N-1 degrees.
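This absence of a sharp cut-off can be illustrated with a graded membership function; a sketch in Python, where the 20-40 degree range is a purely hypothetical choice:

```python
# A hypothetical membership function for the vague concept "hot".
# Instead of a sharp cut-off at some temperature N, membership rises
# gradually from 0 (at or below 20 degrees) to 1 (at or above 40).
def mu_hot(temp_c):
    if temp_c <= 20:
        return 0.0
    if temp_c >= 40:
        return 1.0
    return (temp_c - 20) / 20.0

print(mu_hot(20))  # 0.0 -> clearly "not hot"
print(mu_hot(30))  # 0.5 -> borderline: "hot" only to a degree
print(mu_hot(40))  # 1.0 -> clearly "hot"
```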
Thus a fuzzy set A on a classical set X can be defined as follows:

A = {(x, μA(x)) | x ∈ X}, where μA : X → [0, 1]

In the above equation, the membership function μA(x) quantifies the grade of membership of each
element x of the fundamental set X in the fuzzy set A. If an element maps to the value 0, it does
not belong to the set at all; likewise, if an element maps to 1, it is fully a member of the set. The
elements with grades in between are said to be the fuzzy members.
Consider a fuzzy set C, where C = {(3,0.3), (4,0.7), (5,1), (6,0.4)}. Using standard fuzzy set
theory notation, this would be written as C = {0.3/3, 0.7/4, 1/5, 0.4/6}. From this notation, it can
be observed that any value with a membership grade of zero does not appear in the expression of
the set. The membership grade of the fuzzy set C at 6 in the standard notation is μC(6) = 0.4.
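The set C above can be represented directly as a mapping from elements to membership grades; a minimal sketch in Python, where the helper mu is an illustrative name:

```python
# The fuzzy set C = {0.3/3, 0.7/4, 1/5, 0.4/6} represented as a mapping
# from elements to membership grades; grade-zero elements are absent.
C = {3: 0.3, 4: 0.7, 5: 1.0, 6: 0.4}

def mu(fuzzy_set, x):
    """Membership grade of x in fuzzy_set; 0 for any unlisted element."""
    return fuzzy_set.get(x, 0.0)

print(mu(C, 6))  # 0.4, matching the grade computed in the text
print(mu(C, 9))  # 0.0 -> 9 does not appear in the expression of C
```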
Figure 1: Fuzzy set and crisp set
4.1.1 Degrees of Truth
The real numbers in the interval [0, 1] are usually used to represent degrees of truth. The extreme
points 0 and 1 represent absolute falsity and absolute truth respectively, whereas the values in
between represent intermediate degrees of truth.
Hence, when a system of logic uses these degrees of truth it is called fuzzy logic. The logical
operations in such systems are typically defined as follows:
¬P = 1 - P
P ∨ Q = max(P, Q)
P ∧ Q = min(P, Q)
Such systems employing degrees of truth allow us to assess sentences involving one or more
vague properties such as warm, old, strong, fit, happy, bright, sad, cold and so forth. As a result
we have a technique to potentially tackle vagueness. [11][12][13]
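The connectives above can be written down directly; a minimal sketch in Python, where the degrees 0.7 and 0.4 are hypothetical values for two vague properties:

```python
# The standard fuzzy connectives over degrees of truth in [0, 1]:
# negation as complement, disjunction as max, conjunction as min.
def f_not(p):
    return 1.0 - p

def f_or(p, q):
    return max(p, q)

def f_and(p, q):
    return min(p, q)

# Hypothetical degrees of truth for two vague properties of a day.
warm = 0.7
humid = 0.4
print(f_and(warm, humid))  # 0.4: "warm and humid" holds to degree min(0.7, 0.4)
print(f_or(warm, humid))   # 0.7: "warm or humid" holds to degree max(0.7, 0.4)
print(f_not(warm))         # ~0.3 (up to floating-point rounding)
```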
4.1.2 Logic based on Fuzzy Sets
Fuzzy logic is based on the concept of fuzzy sets, in which membership is expressed using
degrees of truth with values ranging from 0 (does not hold) to 1 (definitely holds).
Fuzzy logic offers many practical uses, making a considerable impact in engineering control
systems, computing, and medicine and healthcare, where pervasive computing is allowing
self-care rather than professional care [14].
It is convenient to program a series of logical conditions and corresponding actions into a
machine. Using fuzzy logic for this allows the values of the propositions involved to come
straight from the machine's sensors.
A few examples are the sensors used in devices such as thermometers, motion detectors,
refrigerators and washing machines.
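A sensor-driven fuzzy rule of this kind can be sketched as follows in Python; the "cold" membership function and the heater rule below are hypothetical illustrations, not taken from any particular product:

```python
# A sensor reading feeds a fuzzy proposition directly; the membership
# function for "cold" and the heater rule are hypothetical.
def mu_cold(temp_c):
    """Degree to which a temperature reading counts as 'cold'."""
    return max(0.0, min(1.0, (18 - temp_c) / 10.0))

def heater_power(temp_c):
    """Rule: IF temperature is cold THEN heat, with the output
    scaled by the degree to which 'cold' holds (in percent)."""
    return mu_cold(temp_c) * 100.0

print(heater_power(8))   # 100.0 -> fully cold, heater at full power
print(heater_power(13))  # 50.0  -> partially cold, half power
print(heater_power(20))  # 0.0   -> not cold, heater off
```

The output varies smoothly with the sensor value instead of switching at a hard threshold, which is exactly what fuzzy control contributes in such devices.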
Commercial applications of fuzzy logic appeared as early as the 1990s. Other products using
fuzzy logic include camcorders, microwave ovens, toasters, vacuum cleaners and dishwashers.
Further applications include expert systems in a pervasive environment using multiple sensors,
interactive devices, and computerized speech- and handwriting-recognition programs with
interfaces for interaction between people and pervasive computing devices.
5. SET THEORY APPLICATION AREAS
5.1. Electronic Commerce
Rough set theory is used nowadays in Electronic Commerce (EC) for data mining. Every day,
large volumes of data are collected by EC sites. This data comprises much valuable, sensitive
information about customers, products, transactions, addresses and so on. Site management can
make better use of this large amount of data by extracting the unknown knowledge or trends
hidden in it, arranging their products according to buyers' preferences, and adopting appropriate
selling, security and authentication policies.
5.2. Pattern Recognition
Rough set theory is widely used in combination with neural network theory for pattern
recognition. Using rough set theory and Set Pair Analysis (SPA), many new methods of pattern
recognition have been proposed. These methods have proven to be more valid and effective than
existing conventional approaches to pattern recognition.
5.3. Computer Networks
Rough set theory has also found wide application in computer network fault diagnosis and
anomaly intrusion detection in computer networks. It has served as an excellent tool for dealing
with vagueness and imperfection.
5.4. Pervasive Computing
Many service matchmaking algorithms for pervasive computing are based on rough set theory.
Rough set theory is also being used for creating a user-aware TV program and settings
recommender in a pervasive environment [10].
6. CONCLUSION
Most of the time, probability is confused with degrees of truth. This should not be the case.
Consider the task of flipping a coin. Flipping a coin definitely results in either heads or tails, so
the statement that one of the two sides appears must be given the value 1 as its degree of truth,
even though each individual outcome has only a 50% probability; giving the statement itself a
50% degree of truth would be incorrect. Degrees of truth should also never be confused with an
unknown or varying truth value.
For example, consider the sentence "The month of July usually brings monsoon weather in some
parts of the world". Though its degree of truth does not fall on the extreme points 0 or 1, it can
still be considered a definite value: repeated observations do not give us different values. An
alternative to probability theory for dealing with certain types of uncertainty is the mathematical
theory known as possibility theory. This theory was first introduced in 1978 by Prof. Zadeh as an
extension of fuzzy logic and fuzzy set theory. It makes use of both the necessity and the
possibility of an event, whereas probability theory makes use of only probability to decide on the
likelihood of an event taking place.
Choosing the eclectic view can be an alternative solution, which accepts both interpretations:
depending on the situation, one selects one of the two interpretations for pragmatic, or principled,
reasons.
7. ACKNOWLEDGMENTS
We would like to express our gratitude to the person who made the completion of this paper
possible. We are deeply indebted to Prof. Dr. D. H. Rao for his constant help, suggestions and
motivation to study and work. His tremendous knowledge and stimulating suggestions have
helped us a lot in completing our work effectively.
8. REFERENCES
[1] S.K. Das, K. Kant, N. Zhang, Handbook on Securing Cyber-Physical Critical Infrastructure:
Foundations and Challenges, Morgan Kaufmann, 2012.
[2] N. Roy, A. Misra, C. Julien, S.K. Das, J. Biswas, "An Energy-Efficient Quality-Adaptive
Multi-Modal Sensor Framework for Context Recognition", Proceedings of the IEEE International
Conference on Pervasive Computing and Communications, pp. 63-75, 2011.
[3] R. Rajkumar and I. Lee, NSF Workshop on Cyber-Physical Systems,
http://varma.ece.cmu.edu/cps/, October 2011.
[4] M. Weiser, "The Computer for the Twenty-First Century", Scientific American, 265(3):94-104,
1991.
[5] Juan Ye and Simon Dobson, "Pervasive Computing Needs Better Situation-Awareness",
doi:10.2417/3201201.003943.
[6] G. Hayes, E. Poole, G. Iachello, S. Patel, A. Grimes, G. Abowd, and K. Truong, "Physical,
Social and Experiential Knowledge of Privacy and Security in a Pervasive Computing
Environment", IEEE Pervasive Computing, 6(4):56-63, 2007.
[7] A.T. Campbell, S.B. Eisenman, N.D. Lane, E. Miluzzo, R.A. Peterson, H. Lu, X. Zheng,
M. Musolesi, K. Fodor, and G.S. Ahn, "The Rise of People-Centric Sensing", IEEE Internet
Computing, 12(4):12-21, 2008.
[8] Neural network classifiers, http://www.ozgrid.com/Services/neuro-excel-classifier.htm.
[9] Chris Wellekens, "Special Issue on Multimedia Semantic Computing",
http://www.iscaspeech.org/iscaweb/iscapad/iscapad.php?module=article&id=1132.
[10] Thyagaraju G.S., "Rough Set Theory Based User Aware TV Program and Settings
Recommender", International Journal of Advanced Pervasive and Ubiquitous Computing,
4(2):48-64, April-June 2012.
[11] Novák, V., Perfilieva, I., and Močkoř, J., Mathematical Principles of Fuzzy Logic, Dordrecht:
Kluwer Academic, ISBN 0-7923-8595-0.
[12] "Corn starters", http://www.agtest.com/articles, 2000.
[13] Fullér, R., Carlsson, C., "Fuzzy Multiple Criteria Decision Making: Recent Developments",
Fuzzy Sets and Systems, 78(2):139-153, 1996.
[14] Bingchuan Yuan, John Herbert, "Fuzzy CARA - A Fuzzy-Based Context Reasoning System for
Pervasive Healthcare", Proceedings of the 3rd International Conference on Ambient Systems,
Networks and Technologies, Procedia Computer Science, 10:357-365, 2012.
Authors
Mrs. DivyaJyothi M.G. is currently working as a Lecturer at the Department of
Information Technology, Al Musanna College of Technology, Sultanate of Oman.
Her teaching interests include Pervasive Computing, Firewalls and Internet
Security Risks, E-Commerce, Computer Networks, Intrusion Detection Systems,
Network Security and Cryptography, Internet Protocols, Client Server Computing,
Unix Internals, Linux Internals, Kernel Programming, Object Oriented Analysis and
Design, Programming Languages, Operating Systems, Image Processing, and Web
Design and Development. Her most recent research focus is in the area of
Pervasive Computing. She received her Bachelor and Master Degrees in Computer Science from Mangalore
University, where she bagged First Rank in the Master's Degree. She has been associated as a Lecturer of
the Department of Information Technology since 2007. She has worked as a Lecturer at ICFAI Tech.,
Bangalore, T John College for MCA, Bangalore, and Alva's Education Foundation, Mangalore. She has
guided many project theses at UG/PG level.
Mr. Rachappa is currently working as a Lecturer at the Department of Information
Technology, Al Musanna College of Technology, Sultanate of Oman. His
teaching interests include Computer Security, Pervasive Computing, E-
Commerce, Computer Networks, Intrusion Detection Systems, Network Security
and Cryptography, Internet Protocols, Client Server Computing, Unix Internals,
Linux Internals, Kernel Programming, Object Oriented Analysis and Design,
Programming Languages, Operating Systems, and Web Design and Development.
His most recent research focus is in the area of Security Challenges in Pervasive
Computing. He received his Bachelor Degree in Computer Science from Gulbarga University, Master of
Science Degree from Marathwada University and Master of Technology in Information Technology Degree
from Punjabi University (GGSIIT). He has been associated as a Lecturer of the Department of Information
Technology since 2006. He has worked as a Lecturer at R.V. College of Engineering, Bangalore. He has
guided many project theses at UG/PG level. He is a Life Member of CSI and ISTE.
Dr. D. H. Rao is currently working as Dean, Faculty of Engineering, VTU,
Belgaum, and as Principal and Director, Jain College of Engineering, Belgaum. He is
the Chairman, Board of Studies in E & C Engineering, VTU, Belgaum, and a
Member of the Academic Senate, VTU, Belgaum. He has over 100 publications in
reputed journals and conferences. He obtained his B.E. (in Electronics from B.M.S.
College of Engineering), M.E. (from Madras University), M.S. (University of
Saskatchewan, Canada) and Ph.D. (University of Saskatchewan, Canada).