Big Data refers to volumes of structured and unstructured data so large that they are hard to process with traditional database tools and software technologies. The goal of big data storage management is to ensure a high level of data quality and availability for business intelligence and big data analytics applications. The graph database is not yet the most popular NoSQL database compared to the relational database, but it is a powerful NoSQL database that can handle large volumes of data very efficiently. Managing large volumes of data with traditional technology is difficult, and data retrieval time tends to grow as database size increases; NoSQL databases are available as a solution to this. This paper describes big data storage management, the dimensions of big data, the types of data, structured and unstructured data, NoSQL databases and their types, the basic structure of the graph database, its advantages, disadvantages and application areas, and a comparison of various graph databases.
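The "basic structure of graph database" mentioned above is the property-graph model: nodes and relationships, each carrying key-value properties. The following is a minimal illustrative sketch, not the API of any real graph database; all class and method names are assumptions made for this example.

```python
# A minimal property-graph sketch: nodes and labeled edges with properties.
class Graph:
    def __init__(self):
        self.nodes = {}   # node_id -> properties dict
        self.edges = []   # (src, dst, label, properties)

    def add_node(self, node_id, **props):
        self.nodes[node_id] = props

    def add_edge(self, src, dst, label, **props):
        self.edges.append((src, dst, label, props))

    def neighbors(self, node_id, label=None):
        # Traversal follows stored edges directly instead of joining tables,
        # which is why graph stores handle highly connected data efficiently.
        return [dst for (src, dst, lbl, _) in self.edges
                if src == node_id and (label is None or lbl == label)]

g = Graph()
g.add_node("alice", role="analyst")
g.add_node("bob", role="engineer")
g.add_edge("alice", "bob", "KNOWS", since=2020)
print(g.neighbors("alice", "KNOWS"))  # -> ['bob']
```

A real graph database adds indexing, persistence and a query language (such as Cypher) on top of this same node/edge structure.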
Representing Non-Relational Databases with Darwinian Networks (IJERA Editor)
Darwinian networks (DNs) were first introduced by Dr. Butz [1] to simplify and clarify how to work with Bayesian networks (BNs). DNs unify modeling and reasoning tasks in a single platform through graphical manipulation of probability tables that takes on a biological feel. Building on this view of DNs, we propose a graphical library to represent and depict non-relational databases using DNs. Given the growth of this kind of database, more tools are needed to support management work, and DNs can help with these tasks.
DATABASE SYSTEMS PERFORMANCE EVALUATION FOR IOT APPLICATIONS (ijdms)
ABSTRACT
The amount of data stored in IoT databases increases as IoT applications extend throughout smart city appliances, industry and agriculture. Contemporary database systems must process huge amounts of sensor and actuator data in real time or interactively. Facing this first wave of the IoT revolution, database vendors struggle day by day to gain market share, develop new capabilities and overcome the disadvantages of previous releases, while providing features for the IoT.
There are two popular database types: relational database management systems and NoSQL databases, with NoSQL gaining ground for IoT data storage. In this paper these two types are examined. Focusing on open source databases, the authors experiment on IoT data sets and answer the question of which one performs better. It is a comparative study of the performance of commonly used open source databases, presenting results for the NoSQL database MongoDB and the SQL databases MySQL and PostgreSQL.
Comparative study of NoSQL document, column store databases and evaluation o... (ijdms)
In the last decade, rapid growth in mobile applications, web technologies and social media generating unstructured data has led to the advent of various NoSQL data stores. Demands of web scale are increasing every day, and NoSQL databases are evolving to meet stringent big data requirements. The purpose of this paper is to explore NoSQL technologies and present a comparative study of document and column store NoSQL databases such as Cassandra, MongoDB and HBase across various attributes of relational and distributed database system principles. A detailed theoretical study and analysis of the architecture and internal working of Cassandra, MongoDB and HBase is carried out, and the core concepts are depicted. The paper also presents an evaluation of Cassandra for an industry-specific use case, and the results are published.
Key aspects of big data storage and its architecture (Rahul Chaturvedi)
This paper helps in understanding the tools and technologies related to a classic big data setting. Readers, especially enterprise architects, will find it helpful in choosing among several big data database technologies in a Hadoop architecture.
An overview of the database management system, various uses and applications of databases, and the internal architecture of popular RDBMS servers and their features.
Analysis and evaluation of Riak KV cluster environment using Basho Bench (StevenChike)
With technological development, many institutions and companies have been producing large volumes of structured and unstructured data. Special databases are therefore needed to deal with these data, and thus NoSQL databases emerged. They are widely used in cloud databases and distributed systems. In the era of big data, these databases provide a scalable, high-availability solution, so new architectures are needed to store ever more diverse kinds of data. To arrive at a good structure for large and diverse data, that structure must be tested and analyzed in depth using different benchmark tools. In this paper, we experiment with the Riak key-value database to measure its performance in terms of throughput and latency, where huge amounts of data are stored and retrieved in different sizes in a distributed database environment. The throughput and latency of the NoSQL database across different types of experiments and different data sizes are compared, and the results are discussed.
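The two metrics this abstract names, throughput and latency, can be illustrated with a toy in-memory dictionary standing in for a real Riak cluster. This is only a sketch of the measurement idea behind tools like Basho Bench; the key format, value size and operation count are assumptions, not figures from the paper.

```python
# Measure write throughput (ops/s) and mean per-operation latency
# against a stand-in key-value store (a plain dict).
import time

store = {}
N = 10_000
latencies = []
start = time.perf_counter()
for i in range(N):
    t0 = time.perf_counter()
    store[f"key-{i}"] = b"x" * 64          # fixed-size 64-byte value
    latencies.append(time.perf_counter() - t0)
elapsed = time.perf_counter() - start

throughput = N / elapsed                    # operations per second
avg_latency_ms = 1000 * sum(latencies) / N  # mean per-op latency in ms
print(f"{throughput:.0f} ops/s, {avg_latency_ms:.5f} ms avg latency")
```

A real benchmark would additionally vary value sizes, mix reads and writes, and run many concurrent clients against the distributed cluster, as the paper's experiments do.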
Data Warehouse System in Cloud Environment (IJERA Editor)
Virtualization is very important for reducing the cost of data warehouse deployment; it can cut cost as well as the tremendous pressure of managing devices, storage servers, application models and manpower. At present, the data warehouse is an effective and important concept that can have a large impact on decision support systems in an organization. A data warehouse system takes more time, cost and effort than a database system to deploy and develop in-house for an organization. For this reason, people now consider cloud computing as a solution to the problem instead of implementing their own data warehouse system. This paper discusses how a cloud environment can be established as an alternative to an in-house data warehouse system, and offers guidance on the better environment choice for organizational needs. The organizational data warehouse and EC2 (Elastic Compute Cloud) are discussed with different parameters such as ROI, security, scalability, robustness of data and maintenance of the system.
SURVEY ON IMPLEMENTATION OF COLUMN ORIENTED NOSQL DATA STORES (BIGTABLE & CA...) (IJCERT JOURNAL)
NoSQL databases provide a mechanism for the storage and retrieval of data modeled for the huge volumes used in big data and cloud computing. NoSQL systems are also called "Not only SQL" to emphasize that they may support SQL-like query languages. A basic classification of NoSQL is based on the data model: column, document, key-value, and so on. The objective of this paper is to study and compare the implementation of various column-oriented data stores such as Bigtable and Cassandra.
HYBRID DATABASE SYSTEM FOR BIG DATA STORAGE AND MANAGEMENT (IJCSEA Journal)
Relational database systems have been the standard storage system for the last forty years. Recently, advancements in technology have led to an exponential increase in data volume, velocity and variety beyond what relational databases can handle. Developers are turning to NoSQL, a non-relational database approach, for data storage and management, although some core database features such as ACID are compromised in NoSQL databases. This work proposes a hybrid database system for the storage and management of extremely voluminous data of diverse components, known as big data, in which the two models are integrated in one system to eliminate the limitations of the individual systems. The system is implemented in MongoDB, a NoSQL database, and SQL. The results obtained reveal that having these two databases in one system can enhance the storage and management of big data, bridging the gap between the relational and NoSQL storage approaches.
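The hybrid idea described in this abstract, structured fields in a relational store alongside schema-less documents, can be sketched in a few lines. This example uses Python's built-in sqlite3 with a JSON column to mimic the MongoDB-style document side; it is an illustration of the concept, not the paper's actual implementation, and all table and field names are assumptions.

```python
# Hybrid storage sketch: a relational column for structured data,
# plus a JSON text column holding a schema-less document per row.
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE events (
                    id INTEGER PRIMARY KEY,
                    source TEXT NOT NULL,   -- structured, relational side
                    payload TEXT NOT NULL   -- schema-less JSON document
                )""")

doc = {"sensor": "temp-01", "reading": 22.5, "tags": ["roof", "celsius"]}
conn.execute("INSERT INTO events (source, payload) VALUES (?, ?)",
             ("iot-gateway", json.dumps(doc)))

# Relational query on the structured column, document access on the JSON side.
row = conn.execute("SELECT payload FROM events WHERE source = ?",
                   ("iot-gateway",)).fetchone()
restored = json.loads(row[0])
print(restored["reading"])  # -> 22.5
```

The design point is that queries on well-known fields stay indexable and ACID-protected, while each record can still carry arbitrary extra structure in its document.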
This presentation explains how to choose your NoSQL database and covers the factors that need to be considered while choosing one. Thanks to Arun Chandrasekaran (https://www.linkedin.com/profile/view?id=AAMAAAQKxWsB9tkk7s2ll2T2BvLvR9QDv_OdJXs&trk=hp-identity-name) for helping me.
A database management system (DBMS) is a software application that allows users to store, organize, and manage large amounts of data in a structured and efficient manner. A DBMS provides a centralized repository for data that can be accessed and manipulated by multiple users and applications simultaneously.
The primary functions of a DBMS include data storage, data retrieval, data security, and data integrity. A DBMS allows users to define, create, and manipulate data using a variety of tools and interfaces, such as SQL queries, forms, and reports.
DBMSs typically include features such as transaction management, concurrency control, backup and recovery, and query optimization to ensure the efficient and reliable operation of the system.
DBMSs can be categorized into different types based on their architecture, such as relational, object-oriented, and NoSQL. Each type of DBMS has its own strengths and weaknesses, and the choice of DBMS depends on the specific requirements of the application.
Overall, a DBMS plays a critical role in managing large and complex data sets, and it is an essential tool for organizations that need to store, access, and analyze large volumes of data efficiently and effectively.
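Two of the features named above, transaction management and data integrity, can be demonstrated with Python's built-in sqlite3 module. The account schema below is a made-up example, not from any particular system: a CHECK constraint enforces integrity, and the failed transfer is rolled back as a unit.

```python
# Transaction management + integrity: an invalid transfer is rolled back whole.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts ("
             "name TEXT PRIMARY KEY, "
             "balance INTEGER CHECK (balance >= 0))")  # integrity constraint
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 50)")
conn.commit()

try:
    with conn:  # the with-block is one transaction: all or nothing
        conn.execute("UPDATE accounts SET balance = balance - 200 "
                     "WHERE name = 'alice'")   # would go negative
        conn.execute("UPDATE accounts SET balance = balance + 200 "
                     "WHERE name = 'bob'")
except sqlite3.IntegrityError:
    pass  # the CHECK constraint fired, so the whole transfer rolled back

balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # balances are unchanged: alice 100, bob 50
```

Because the two UPDATEs run inside one transaction, the constraint violation undoes both, leaving the data consistent, which is exactly the ACID behavior a DBMS guarantees.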
A very basic introduction to Big Data. It touches on what it is, its characteristics, and some examples of Big Data frameworks, with a Hadoop 2.0 example: YARN, HDFS and MapReduce with ZooKeeper.
Design and Implementation of Smart Cooking Based on Amazon Echo (IJSCAI Journal)
Smart cooking based on Amazon Echo uses the Internet of Things and cloud computing to assist in cooking food. People may speak to Amazon Echo during cooking to get information on the state of the cooking. Amazon Echo recognizes what people say, transfers the information to cloud services, and speaks back the results that the cloud services produce by querying the embedded cooking knowledge and retrieving information from intelligent kitchen devices online. An intelligent food thermometer and its mobile application are designed and implemented to monitor the temperature of the cooking food.
Forecasting Macroeconomical Indices with Machine Learning: Impartial Analysi... (IJSCAI Journal)
The importance of economic freedom has often been stressed by supporters of liberalism, but can its actual effect be observed in a data-driven, objective way? To analyze this relation, the Economic Freedom of the World (EFW) index and the Human Development Index (HDI) were examined with modern machine learning algorithms and a wide-ranging approach. Considering the EFW index's preference for a liberal-oriented economic policy, an objective recommendation for creating an economic policy that improves people's everyday lives might be derived from the analysis results. It was found that these more advanced algorithms achieve a considerably stronger correlation between the two indices than pure statistical means, yet leave some room for interpretation towards a counter-liberal implementation of demand-driven economic policy.
Intelligent Electrical Multi Outlets Controlled and Activated by a Data Minin... (IJSCAI Journal)
This paper discusses the results of an industry project concerning energy management in buildings. Specifically, the work analyses the improvement of electrical outlets controlled and activated by a logic unit and a data mining engine. The engine executes a Long Short-Term Memory (LSTM) neural network algorithm able to control, activate and disable electrical loads connected to multiple outlets placed in a building and having defined priorities. The priority rules are grouped into two levels: the first level is related to the outlet, and the second concerns the loads connected to a single outlet. This algorithm, together with the prediction processing of the logic unit connected to all the outlets, is suitable for alert management in cases where thresholds are exceeded. In this direction, a flow chart is proposed that is applied to three outlets and able to match loads against defined thresholds. The goal of the paper is to provide keys for reading the data mining outputs, useful for energy management and diagnostics of the electrical network in a building. Finally, the paper analyses the correlations between global active power, global reactive power and the energy absorption of the loads of the three intelligent outlets. The prediction and correlation analyses provide information about load balancing, possible electrical faults and energy cost optimization.
Nov 2018 Table of contents; current issue - International Journal on Soft Comp... (IJSCAI Journal)
International Journal on Soft Computing, Artificial Intelligence and Applications (IJSCAI) is an open access peer-reviewed journal that provides an excellent international forum for sharing knowledge and results in the theory, methodology and applications of Artificial Intelligence and Soft Computing. The Journal looks for significant contributions to all major fields of Artificial Intelligence and Soft Computing in theoretical and practical aspects. The aim of the Journal is to provide a platform for researchers and practitioners from both academia and industry to meet and share cutting-edge developments in the field.
6th International Conference on Artificial Intelligence and Applications (AIA... (IJSCAI Journal)
The 6th International Conference on Artificial Intelligence and Applications (AIAP-2019) will provide an excellent international forum for sharing knowledge and results in the theory, methodology and applications of Artificial Intelligence. The Conference looks for significant contributions to all major fields of Artificial Intelligence and Soft Computing in theoretical and practical aspects. The aim of the Conference is to provide a platform for researchers and practitioners from both academia and industry to meet and share cutting-edge developments in the field.
Generating images from a text description is as challenging as it is interesting. Adversarial networks perform in a competitive fashion, where the networks are rivals of each other. With the introduction of the Generative Adversarial Network, a lot of development is happening in the field of Computer Vision. With generative adversarial networks as the baseline model, this paper studies StackGAN, which consists of two GANs staged step by step, in a way that can be easily understood. The paper presents a visual comparative study of other models attempting to generate images conditioned on a text description. One sentence can be related to many images, and to achieve this multi-modal characteristic, conditioning augmentation is also performed. The performance of StackGAN in generating images from captions is better due to its unique architecture: as it consists of two GANs instead of one, it first draws a rough sketch and then corrects the defects, yielding a high-resolution image.
Temporally Extended Actions For Reinforcement Learning Based Schedulers (IJSCAI Journal)
Temporally extended actions have been shown to enhance the performance of reinforcement learning agents. The broader framework of "options" gives us a flexible way of representing such extended courses of action in Markov decision processes. In this work we adapt the options framework to model an operating system scheduler, which is expected not to let the processor stay idle if any process is ready or waiting for execution. A process is allowed to utilize CPU resources for a fixed quantum of time (the timeslice), and each subsequent context switch incurs considerable overhead. We utilize the historical performance of a scheduler to reduce the number of redundant context switches, and propose a machine-learning module, based on a temporally extended reinforcement-learning agent, to predict a better-performing timeslice. We measure the importance of states in the options framework by evaluating the impact of their absence, and propose an algorithm to identify such checkpoint states. We present an empirical evaluation of our approach in maze-world navigation, and its implications for an "adaptive timeslice parameter" show efficient throughput time.
Knowledgebase Systems in Neuro Science - A Study (IJSCAI Journal)
The improvement of the health and nutritional status of society has been one of the thrust areas of social development programmes in the country. The present state of healthcare facilities in India is inadequate compared to international standards, and the average Indian spending on healthcare is much below the global average. The Indian healthcare industry is growing at a rapid pace of more than 18%, the fastest in the world. The prospects for Indian healthcare are to the tune of USD 40 billion, while the global market is USD 1660 trillion. India has every prospect of becoming a medical tourism destination of the world, because it has a large pool of low-cost, scientifically trained technical personnel and is one of the favoured countries for cost-effective healthcare. As per the reports of the Global Burden of Neurological Disorders Estimations and Projections survey, there is a big shortage of neurologists in India and around the world. The authors would therefore like to develop an innovative IT-based solution to help doctors in rural areas gain expertise in neuroscience and treat patients like expert neurologists. This paper surveys the soft computing techniques used throughout the world in treating neural patients' problems.
An Iranian Cash Recognition Assistance System For The Visually Impaired (IJSCAI Journal)
In today's economic societies, using cash is an inseparable aspect of human life. People use cash for shopping, services, entertainment, bank operations and so on. This constant contact with cash and the necessity of knowing its monetary value pose one of the most challenging problems for visually impaired people. In this paper we propose a mobile phone based approach to identify the monetary value of a picture taken of a banknote, using image processing and machine vision techniques. While the developed approach is very fast, it can recognize the value of the banknote with an average accuracy rate of about 97% and can overcome challenges such as rotation, scaling, collision, illumination changes, perspective, and others.
An Experimental Study of Feature Extraction Techniques in Opinion Mining (IJSCAI Journal)
Feature selection or extraction is the most important task in Opinion Mining and Sentiment Analysis (OSMA) for calculating the polarity score. These scores are used to determine the positive, negative, or neutral polarity of a product, user reviews, user comments, etc., in social media for the purpose of decision making and business intelligence for individuals or organizations. In this paper, we perform an experimental study of the different feature extraction or selection techniques available for the opinion mining task. The study is carried out in four stages. First, data are collected from readily available sources. Second, pre-processing techniques are applied automatically, using tools to extract the terms and POS (parts of speech). Third, different feature selection or extraction techniques are applied to the content. Finally, an empirical study is carried out to analyze the sentiment polarity with the different features.
Monte-Carlo Tree Search For The "Mr Jack" Board Game (IJSCAI Journal)
Recently, the adoption of the Monte-Carlo Tree Search algorithm, and in particular its most famous implementation, the Upper Confidence Tree, can be seen as a key moment for artificial intelligence in games. This family of algorithms has provided huge improvements in numerous games, such as Go, Havannah, Hex and Amazons. In this paper we study the use of this algorithm on the game of Mr Jack, and in particular how to deal with a specific decision-making process. Mr Jack is a 2-player board game. We present the difficulties of designing an artificial intelligence for this kind of game, and we show that Monte-Carlo Tree Search is robust enough to be competitive in this game with a smart approach.
Unsupervised learning models of invariant features in images: Recent developm... (IJSCAI Journal)
Object detection and recognition are important problems in the computer vision and pattern recognition domain. Human beings are able to detect and classify objects effortlessly, but replicating this ability in computer-based systems has proved to be a non-trivial task. In particular, despite significant research efforts focused on meta-heuristic object detection and recognition, robust and reliable real-time object recognition systems remain elusive. Here we present a survey of one particular approach that has proved very promising for invariant feature recognition, and which is a key initial stage of multi-stage network architecture methods for the high-level task of object recognition.
Ontologies are used to organize information in many domains, such as artificial intelligence, information science, the semantic web and library science. Ontologies of an entity holding different information can be merged to create more knowledge about that particular entity. Ontologies today power more accurate search and retrieval in websites like Wikipedia. As we move towards Web 3.0, also termed the semantic web, ontologies will play an even more important role.
Ontologies are represented in various forms such as RDF, RDFS, XML and OWL, and querying them can yield basic information about an entity. This paper proposes an automated method for ontology creation, using concepts from NLP (Natural Language Processing), information retrieval and machine learning. Concepts drawn from these domains help in designing more accurate ontologies, represented here in the XML format. The paper uses document classification with classification algorithms to assign labels to documents, document similarity to cluster documents similar to the input document together, and summarization to shorten the text while keeping the important terms essential for making the ontology. The module is built with the Python programming language and NLTK (the Natural Language Toolkit). The ontologies created in XML convey to a lay person the definitions of the important terms and their lexical relationships.
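The final serialization step described above, turning extracted terms and relations into an XML ontology, can be sketched with the standard library. The tag names, relation labels and sample terms below are illustrative assumptions, not the paper's actual schema.

```python
# Serialize a concept and its related terms into a small XML ontology.
import xml.etree.ElementTree as ET

def build_ontology(concept, related_terms):
    """related_terms is a list of (term, relation-label) pairs."""
    root = ET.Element("ontology")
    node = ET.SubElement(root, "concept", name=concept)
    for term, relation in related_terms:
        ET.SubElement(node, "term", relation=relation).text = term
    return root

root = build_ontology("database",
                      [("NoSQL", "subtype"), ("storage", "function")])
xml_text = ET.tostring(root, encoding="unicode")
print(xml_text)
```

In the pipeline the paper describes, the concept would come from document classification and the related terms from similarity clustering and summarization, with NLTK supplying the tokenization and POS tagging upstream.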
Big Data Analytics: Challenges And Applications For Text, Audio, Video, And S... (IJSCAI Journal)
All types of machine-automated systems are generating large amounts of data in different forms, such as statistical, text, audio, video, sensor and biometric data, which gives rise to the term Big Data. In this paper we discuss the issues, challenges and applications of these types of Big Data, with consideration of the big data dimensions. We discuss social media data analytics, content-based analytics, and text, audio and video data analytics, along with their issues and expected application areas. This will motivate researchers to address the issues of storage, management and retrieval of the data known as Big Data. The usage of Big Data analytics in India is also highlighted.
A Study on Graph Storage Database of NOSQLIJSCAI Journal
Big Data is used to store huge volume of both structured and unstructured data which is so large and is
hard to process using current / traditional database tools and software technologies. The goal of Big Data
Storage Management is to ensure a high level of data quality and availability for business intellect and big
data analytics applications. Graph database which is not most popular NoSQL database compare to
relational database yet but it is a most powerful NoSQL database which can handle large volume of data in
very efficient way. It is very difficult to manage large volume of data using traditional technology. Data
retrieval time may be more as per database size gets increase. As solution of that NoSQL databases are
available. This paper describe what is big data storage management, dimensions of big data, types of data,
what is structured and unstructured data, what is NoSQL database, types of NoSQL database, basic
structure of graph database, advantages, disadvantages and application area and comparison of various
graph database.
Estimation Of The Parameters Of Solar Cells From Current-Voltage Characterist...IJSCAI Journal
This paper presents a method for calculating the light generated current, the series resistance, shun
resistance and the two components of the reverse saturation current usually encountered in the double
diode representation of
the solar cell from the experimental values of the current
-
voltage characteristics
of the cell using genetic algorithm. The theory is able to regenerate the above mentioned parameters to
very good accuracy when applied to cell data that was generated from
pre
-
defined parameters. The
method is applied to various types of space quality solar cells and sub cells. All parameters except the
light generated current are seen to be nearly the same in the case of a cell whose characteristics under
illumination and i
n dark were analyzed. The light generated current is nearly equal to the short
-
circuit
current in all cases. The parameters obtained by this method and another method are nearly equal
wherever applicable. The parameters are also shown to represent the cur
rent
-
voltage characteristics
well
Implementation of Folksonomy Based Tag Cloud Model for Information Retrieval ...IJSCAI Journal
In the magnitude of internet one need to devote extra time to investigate an
ticipated resource, especially
when one need to search information from documents. For the higher range internet there is serious need
to demand the essentiality to discover the reserved resources. One of the solutions for information retrieval
from docume
nt repository is to attach tags to documents. Numerous online social bookmarking services
permit users to attach tags with resources which are eventually meta
-
data, frequently stated as folksonomy.
In current paper, authors implemented this model for infor
mation retrieval by utilizing these tags, after
retrieving by using delicious API and synthesize tag cloud in an Indian University to search and retrieve
information from document repository
Study of Distance Measurement Techniques in Context to Prediction Model of We...IJSCAI Journal
Internet is the boon in modern era as every organization uses it for dissemination of information and ecommerce
related applications. Sometimes people of organization feel delay while accessing internet in
spite of proper bandwidth. Prediction model of web caching and prefetching is an ideal solution of this
delay problem. Prediction model analysing history of internet user from server raw log files and determine
future sequence of web objects and placed all web objects to nearer to the user so access latency could be
reduced to some extent and problem of delay is to be solved. To determine sequence of future web objects,
it is necessary to determine proximity of one web object with other by identifying proper distance metric
technique related to web caching and prefetching. This paper studies different distance metric techniques
and concludes that bio informatics based distance metric techniques are ideal in context to Web Caching
and Web Prefetching
A BINARY BAT INSPIRED ALGORITHM FOR THE CLASSIFICATION OF BREAST CANCER DATAIJSCAI Journal
Advancement in information and technology has made a major impact on medical science where the
researchers come up with new ideas for improving the classification rate of various diseases. Breast cancer
is one such disease killing large number of people around the world. Diagnosing the disease at its earliest
instance makes a huge impact on its treatment. The authors propose a Binary Bat Algorithm (BBA) based
Feedforward Neural Network (FNN) hybrid model, where the advantages of BBA and efficiency of FNN is
exploited for the classification of three benchmark breast cancer datasets into malignant and benign cases.
Here BBA is used to generate a V-shaped hyperbolic tangent function for training the network and a fitness
function is used for error minimization. FNNBBA based classification produces 92.61% accuracy for
training data and 89.95% for testing data.
Design of Dual Axis Solar Tracker System Based on Fuzzy Inference SystemsIJSCAI Journal
Electric power is a basic need in today’s life. Due to the extensive usage of power, there is a need to look
for an alternate clean energy source. Recently many researchers have focused on the solar energy as a
reliable alternative power source. Photovoltaic panels are used to collect sun radiation and convert it into
electrical energy. Most of the photovoltaic panels are deployed in a fixed position, they are inefficient as
they are fixed only at a specific angle. The efficiency of photovoltaic systems can be considerably increased
with an ability to change the panels angel according to the sun position. The main goal of such systems is
to make the sun radiation perpendicular to the photovoltaic panels as much as possible all the day times.
This paper presents a dual axis design for a fuzzy inference approach-based solar tracking system. The
system is modeled using Mamdani fuzzy logic model and the different combinations of ANFIS modeling.
Models are compared in terms of the correlation between the actual testing data output and their
corresponding forecasted output. The Mean Absolute Percent Error and Mean Percentage Error are used
to measure the models error size. In order to measure the effectiveness of the proposed models, we
compare the output power produced by a fixed photovoltaic panels with the output which would be
produced if the dual-axis panels are used. Results show that dual-axis solar tracker system will produce
22% more power than a fixed panels system.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Search and Society: Reimagining Information Access for Radical FuturesBhaskar Mitra
The field of Information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build inspired by diverse explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies needs to be explicitly articulated, and we need to develop theories of change in context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...
A Study on Graph Storage Database of NOSQL
International Journal on Soft Computing, Artificial Intelligence and Applications (IJSCAI), Vol.5, No.1, February 2016
DOI: 10.5121/ijscai.2016.5104
A STUDY ON GRAPH STORAGE DATABASE OF
NOSQL
Smita Agrawal¹ and Atul Patel²
¹CSE Department, Institute of Technology, Nirma University, Ahmedabad
²Dean of CMPICA, CHARUSAT University, Changa
ABSTRACT
Big Data refers to huge volumes of both structured and unstructured data, so large that they are hard to process using current or traditional database tools and software technologies. The goal of Big Data storage management is to ensure a high level of data quality and availability for business intelligence and big data analytics applications. The graph database is not yet the most popular NoSQL database compared with the relational database, but it is a powerful NoSQL database that can handle large volumes of data very efficiently. It is very difficult to manage large volumes of data using traditional technology, and data retrieval time may grow as the database size increases; NoSQL databases are available as a solution to this. This paper describes big data storage management, the dimensions of big data, types of data, structured and unstructured data, NoSQL databases and their types, the basic structure of the graph database, its advantages, disadvantages, and application areas, and a comparison of various graph databases.
KEYWORDS
Big Data, Graph Database, NoSQL, Neo4j, graph
1. INTRODUCTION
Big Data is used to store huge volumes of both structured and unstructured data, so large that they are hard to process using current or traditional database tools and software technologies. Data may be textual or non-textual; non-textual data includes images, video, audio, signals, emails, social media posts, large binary files, etc. [1] The goal of Big Data storage management is to guarantee a high level of data quality and availability for business intelligence and big data analytics applications [2,11].
2. DIMENSIONS OF BIG DATA
Figure 1. Dimensions of Big Data [10]
Figure 1 describes the four dimensions of Big Data: Velocity (the rate at which data changes over time), Variety (different forms of data), Volume (the scale of data), and Complexity (the uncertainty of data). Each is described in detail below.
2.1. High Data Velocity
Velocity is the rate at which data changes over time, that is, the rate at which the structure of the database changes: if the value of any property of a given structure changes, the overall structure hosting that data and property also changes. This happens for two reasons. First, businesses nowadays change very quickly, and as business requirements change, their data needs change as well. Second, data gathering is often an untried affair, and some properties are brought in as needed; those most valuable to the business will stay around, and the others will move out.
2.2. Data Variety
Nowadays data exists in all kinds of formats; it may vary from structured to semi-structured to unstructured.
2.3. Data Volume
Because large amounts of data come from many different locations, datasets become unwieldy when warehoused in a relational database. Query execution time surges as the database grows, since more and more database joins have to be performed, a time-consuming problem known as join pain. Around 12 TB of data is transferred over the internet every day.
2.4. Data Complexity
When data is stored in different places and managed by multiple systems, those databases are difficult to manage, and complexity surges.
3. TYPES OF DATA
Big Data may vary from structured to unstructured data. This section describes the types of data that exist nowadays.
3.1. About Structured Data
Structured data usually comes in text files, displayed in rows and columns (a tabular format), and can effortlessly be ordered and processed by data-mining tools.
Examples: databases, XML data, data warehouses, enterprise systems (CRM and ERP).
3.2. About Unstructured Data
Unstructured data, which usually comes in binary format, is data that has no recognizable internal structure. Unstructured data is raw and chaotic, and companies store it all. However, all this unstructured information must be converted into structured information first, which is a very costly and time-consuming process; moreover, not all types of unstructured data can be converted into a structured data model. For example, an email contains information such as when it was sent, its subject, and the sender, but the content of the message is not so easily broken down and categorized.
Examples: images, video, audio, signals, emails, social media posts, large binary files, RSS feeds.
4. ABOUT NOSQL (NOT ONLY SQL) DATABASE
A Not only SQL (NoSQL) [9] database environment is a non-relational, widely distributed database system that allows quick, ad-hoc query organization and investigation of extremely high volumes of dissimilar data types. NoSQL databases are also sometimes referred to as cloud databases, non-relational databases, or Big Data databases, and innumerable other definitions have been coined in response to the sheer volume of data generated, warehoused, and examined by users (data generated by users) and their applications (data generated by machines). NoSQL databases have become an alternative to the relational database management system because they provide scalability, availability, and fault detection and tolerance, which are key deciding factors. NoSQL is a very flexible, schema-less database model that offers horizontal scalability, supports distributed architectures, and uses languages and interfaces that are "not only" SQL.
4.1. Document Store
At its most fundamental level, the model simply piles up and fetches documents, just like an electronic filing cabinet. Documents tend to comprise the usual properties as key-value pairs, but values themselves can be lists, maps, or the like, allowing for nested structures within the document, just as we are used to with formats like JSON and XML. Documents can be saved and fetched by their unique identifier. A document store provides IDs for the application to remember for the documents it is interested in, but in general a document store relies on indexes to access documents based on their properties and attributes. For example, in e-commerce it is very useful to distinguish different product types so that each can easily be mapped to its respective seller. Examples: MongoDB [1] and CouchDB.
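The idea of fetching documents by ID while relying on a secondary index for property-based lookups can be sketched in a few lines of Python. This is a minimal in-memory illustration only, not the API of MongoDB or CouchDB; the class and method names here are hypothetical.

```python
# Minimal in-memory document store sketch: documents are nested dicts
# (like JSON), stored by unique ID, with a secondary index so documents
# can also be found by a property value, not only by their ID.
class DocumentStore:
    def __init__(self):
        self.docs = {}    # doc_id -> document
        self.index = {}   # (field, value) -> set of doc_ids

    def put(self, doc_id, doc):
        self.docs[doc_id] = doc
        for field, value in doc.items():
            if isinstance(value, (str, int, float)):
                self.index.setdefault((field, value), set()).add(doc_id)

    def get(self, doc_id):
        return self.docs.get(doc_id)

    def find(self, field, value):
        # Index-based access by property, as described above.
        return [self.docs[i] for i in self.index.get((field, value), set())]

store = DocumentStore()
store.put("p1", {"type": "phone", "seller": "Acme", "price": 199})
store.put("p2", {"type": "laptop", "seller": "Acme", "price": 899})
print(len(store.find("seller", "Acme")))  # 2 (both Acme products)
```

Note how the two documents need not share a schema; the store indexes whatever scalar properties each document happens to have.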
4.2. Key-Value Store
The key-value store database is a relative of the document store family. A key-value store is intended for storing data in a schema-less way: all of its data consists of indexed key-value pairs. Examples: Azure Table Storage, Riak, BerkeleyDB, Cassandra, DynamoDB.
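The schema-less nature of a key-value store can be illustrated with a plain Python dictionary; the keys and values below are invented for illustration and do not correspond to any particular product's API.

```python
# Sketch of a schema-less key-value store: values are opaque to the store,
# so different keys can hold completely different shapes of data.
kv = {}
kv["user:42"] = {"name": "Asha", "tags": ["admin"]}  # a map
kv["counter:hits"] = 1024                            # a number
kv["session:abc"] = b"\x00\x01blob"                  # raw bytes

# The only access path is the key itself; there is no query over values.
print(kv["counter:hits"])
```

The trade-off is visible here: lookups by key are trivial, but finding "all users tagged admin" would require scanning every value, which is exactly what document stores add indexes to avoid.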
4.3. Column Store / Big Table
The four basic building blocks of the Big Table or Column Store are:
o Column
o Super Column
o Column Family
o Super Column Family
A column consists of a name and value pair, similar to a column of a relational database. A number of columns are grouped into a super column. Columns are warehoused into rows; when a row contains only columns it is known as a column family, and when a row contains super columns it is known as a super column family.
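The four building blocks above can be sketched with nested Python dicts. This is purely illustrative (the row keys and column names are made up), but it shows how a super column family adds one extra level of grouping over a column family.

```python
# Column family: rows whose values are plain columns (name -> value pairs).
column_family = {
    "row1": {
        "name": "Asha",
        "city": "Ahmedabad",
    },
}

# Super column family: rows whose values are super columns, each of which
# groups a set of related columns under one name.
super_column_family = {
    "row1": {
        "address": {              # super column
            "city": "Ahmedabad",
            "pin": "382481",
        },
        "contact": {              # another super column
            "email": "asha@example.com",
        },
    },
}

print(super_column_family["row1"]["address"]["city"])
```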
4.4. Graph Database
A graph database is a collection of vertices and edges, that is, a set of nodes and the relationships between them. [4]
5. BASIC STRUCTURE OF GRAPH DATABASE
This section describes the basic structure of a graph database, shown in Figure 2. In a graph database, a graph holds records of nodes and their relationships, and both nodes and relationships can carry properties [7,12]. AllegroGraph [14], Neo4j [5], G-store [15], DEX [16], and VertexDB [17], among others [6,8], are examples of NoSQL graph databases.
Figure 2. Basic Structure of Graph Database [5]
5.1. Nodes
The most important parts of a graph database are nodes and their relationships. Nodes in a graph database are used to represent entities. Nodes, relationships, and properties can be tagged with zero or more labels.
5.2. Relationships
Relationships connect nodes. Relationships are always directed: each has a starting node and an ending node. In addition, every relationship has a label. A relationship's label and its direction add semantic clarity to the connections between nodes. We can add properties to a relationship, which provide additional metadata for graph algorithms and for constraining queries at run time.
5.3. Properties
Nodes and relationships can have properties, which are key-value pairs. The key is always a string; the value is usually a string, but may also be numeric or of another type [3]. A property value can be either a single primitive or an array of one primitive type.
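Sections 5.1 to 5.3 can be tied together with a small property-graph sketch in Python: labelled nodes, directed and labelled relationships, and key-value properties on both. The node names and relationship labels are invented for illustration; this is not the data model of any specific product.

```python
# Minimal property graph: labelled nodes and directed, labelled
# relationships, each carrying key-value properties.
nodes = {
    1: {"labels": ["Person"], "props": {"name": "Alice"}},
    2: {"labels": ["Person"], "props": {"name": "Bob"}},
    3: {"labels": ["Company"], "props": {"name": "Acme"}},
}

# Every relationship has a start node, an end node, a label, and properties.
relationships = [
    {"start": 1, "end": 2, "label": "KNOWS", "props": {"since": 2015}},
    {"start": 1, "end": 3, "label": "WORKS_AT", "props": {"role": "engineer"}},
]

def outgoing(node_id, label=None):
    """Follow directed relationships from a node, optionally filtered by label."""
    return [r for r in relationships
            if r["start"] == node_id and (label is None or r["label"] == label)]

print([nodes[r["end"]]["props"]["name"] for r in outgoing(1, "KNOWS")])  # ['Bob']
```

The label filter in `outgoing` is a toy version of how a relationship's label and direction constrain queries, as described above.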
5.4. Path
A path is like a road along which we can traverse; in a graph database, a path may contain more than one node.
5.5. Traversal
Traversal means visiting the nodes of a graph database by following their relationships.
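Visiting nodes by following relationships can be sketched as a breadth-first traversal over a simple adjacency mapping. The graph below is a made-up example; real graph databases implement traversal far more efficiently, but the principle is the same.

```python
from collections import deque

# Toy graph as an adjacency mapping: node -> list of neighbour nodes.
graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": [],
}

def traverse(start):
    """Breadth-first traversal: visit each reachable node exactly once."""
    visited, order, queue = {start}, [], deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbour in graph[node]:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(neighbour)
    return order

print(traverse("A"))  # ['A', 'B', 'C', 'D']
```

Note that "D" is reachable along two paths (via "B" and via "C") but is visited only once, which is why the `visited` set is essential.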
6. MORE ABOUT GRAPH DATABASE
This section describes the advantages, disadvantages, and application areas of graph databases.
6.1. Advantages of Graph Database
"Minutes to milliseconds" performance.
Drastically accelerated development cycles.
Extreme business responsiveness: successful applications rarely stay still; changes in business conditions, user behaviors, and technical and operational infrastructures drive new requirements.
Enterprise ready: data is important.
6.2. Disadvantages of Graph Database
Data inconsistency is easy to introduce.
Not yet widely used in business environments, compared with relational database management systems.
Can be conceptually difficult to understand at first glance.
6.3. Application/ Research area of Graph Database
Route finding (going from point A to point B)
Logistics
Authorization and Access Control
Network Impact Analysis
Network and IT Operations Management
Social Networking
7. COMPARISON OF VARIOUS GRAPH DATABASE
A comparison of databases is typically done either by using a set of common features or by defining a general model. This section compares the data models provided by each graph database based on their data structures (see Table 1). Table 1 compares different graph databases by their data-storing features, graph data structure, and support for ACID properties. For data storing we consider some general features, identifying support for main-memory, external-memory, and backend storage, as well as the implementation of indexes; support for external-memory storage is a main requirement, and indexes are needed to improve data retrieval operations [13]. The survey identified that nested graphs are not supported by most graph databases, and that a graph database that does support nested graphs also supports hypergraphs.
Graph data structures strongly suggest that graph nodes should be labelled and that graph edges should be labelled as well as directed. The DEX [16] graph database partially supports the ACID properties.
Table 1. Comparison of existing graph databases by storing features, data structure, and ACID properties
(in Table 1, √ indicates support and ○ indicates partial support)
8. CONCLUSIONS
We have studied big data storage management, the dimensions of the big data storage model, types of data, NoSQL databases and their types, the basic structure of the graph database, the advantages and disadvantages of graph databases, their application and research areas, and a comparison of various graph databases. The comparison shows that nested graphs are not supported by most graph databases, and that a graph database that does support nested graphs also supports hypergraphs.
REFERENCES
[1] Smita Agrawal, Jai Prakash Verma, Brijesh Mahidhariya, Nimesh Patel and Atul Patel, 2015. "Survey on MongoDB: An Open-Source Document Database." International Journal of Advanced Research in Engineering and Technology (IJARET), Volume 6, Issue 12, Pages 1-11.
[2] Raj, Pethuru, et al. "High-Performance Big-Data Analytics."
[3] Castelltort, Arnaud, and Anne Laurent. "Representing history in graph-oriented nosql databases: A
versioning system." Digital Information Management (ICDIM), 2013 Eighth International Conference
on. IEEE, 2013.
[4] Bajpayee, Roshni, Sonali Priya Sinha, and Vinod Kumar. "Big Data: A Brief Investigation on NoSQL Databases." (2015).
[5] http://neo4j.com/
[6] http://en.wikipedia.org/wiki/Graph_database
[7] Graph Databases (e-book) by Ian Robinson, Jim Webber & Emil Eifrem.
[8] http://en.wikipedia.org/wiki/NoSQL
[9] http://www.3pillarglobal.com/insights/exploring-the-different-types-of-nosql-databases
[10] http://www.looiconsulting.com/home/enterprise-big-data/
[11] Kanchi, Sravanthi, et al. "Challenges and Solutions in Big Data Management--An Overview." Future
Internet of Things and Cloud (FiCloud), 2015 3rd International Conference on. IEEE, 2015.
[12] Buerli, Mike, and C. P. S. L. Obispo. "The Current State of Graph Databases." Department of Computer Science, Cal Poly San Luis Obispo, mbuerli@calpoly.edu (2012): 1-7.
[13] Angles, Renzo. "A comparison of current graph database models." Data Engineering Workshops
(ICDEW), 2012 IEEE 28th International Conference on. IEEE, 2012.
[14] “AllegroGraph,” http://www.franz.com/agraph/allegrograph/.
[15] “G-Store,” http://g-store.sourceforge.net/.
[16] N. Martínez-Bazan, V. Muntés-Mulero, S. Gómez-Villamor, J. Nin, M. A. Sánchez-Martínez, and J.-L. Larriba-Pey, "DEX: High-Performance Exploration on Large Graphs for Information Retrieval," in Proceedings of the 16th Conference on Information and Knowledge Management (CIKM). ACM, 2007, pp. 573-582.
[17] “vertexdb,” http://www.dekorte.com/projects/opensource/vertexdb/.
AUTHORS
Smita Agrawal received Bachelor in Science (B.Sc. in Chemistry) degree from Gujarat
University, Gujarat, India in 2001 and Master’s Degree in Computer Applications
(M.C.A) from Gujarat Vidhyapith, Gujarat, India in 2004. She is pursuing PhD in
Computer Science and Applications from Charotar University of Science and
Technology (CHARUSAT) in the area of distributed processing. She is associated with
Computer Science and Engineering Department of Institute of Technology - Nirma
University since 2009. Her research interests include Parallel Processing, Object
Oriented Analysis & Design and Programming Language(s).
Atul Patel received Bachelor in Science B.Sc (Electronics), M.C.A. Degree from Gujarat
University, India. M.Phil. (Computer Science) Degree from Madurai Kamraj University,
India. He has received his Ph.D degree from S. P. University. Now he is Professor and
Dean, Smt Chandaben Mohanbhai Patel Institute of Computer Applications – Charotar
University of Science and Technology (CHARUSAT) Changa, India. His main research
areas are wireless communication and Network Security.