Cloud computing is a paradigm for large-scale distributed computing that builds on several existing technologies. A database management system is a collection of programs that enables you to store, modify, and extract information from a database. Databases have now moved to the cloud, but this shift introduces a set of threats that target cloud database systems. The unification of transaction-based applications in these environments also presents vulnerabilities and threats that target the cloud database environment. In this context, we propose an intrusion detection and transaction-marking approach for a cloud database environment.
Improved Data Integrity Protection Regenerating-Coding Based Cloud Storage - IJSRD
In today's world a huge amount of data is loaded onto cloud storage, and protecting that data is a main concern. It is difficult to protect such data against corruption, to check its integrity, and to repair failed data; it is also critical to implement fault tolerance against corruption. The existing system, private auditing for regenerating codes, was developed to address these problems: it can regenerate codes for corrupted and incomplete data, but data owners must always stay online to check the completeness and integrity of their data. In this paper, we introduce a public auditing technique for regenerating-code-based cloud storage. A proxy is the main component in public auditing; it regenerates failed authenticators in the absence of the data owner. A publicly verifiable authenticator is also designed, which is generated with several keys and can be regenerated using partial keys. We also use a pseudorandom function to preserve data privacy by randomizing the encoding coefficients. Thus our technique can regenerate failed authenticators without the data owner, and our experimental implementation indicates that the scheme is highly efficient for regenerating-code-based cloud storage.
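The abstract above does not spell out its construction, but the coefficient-randomization idea can be illustrated with a toy Python sketch (all names and parameters here are hypothetical, not the paper's actual scheme): a keyed pseudorandom function blinds each encoding coefficient, so a public auditor sees only randomized values while the key holder can strip the masks.

```python
import hashlib
import hmac

P = 2**31 - 1  # a public prime modulus for coefficient arithmetic (toy choice)

def prf(key: bytes, index: int) -> int:
    """Pseudorandom function: HMAC-SHA256 of the block index, reduced mod P."""
    digest = hmac.new(key, index.to_bytes(8, "big"), hashlib.sha256).digest()
    return int.from_bytes(digest, "big") % P

def mask_coefficients(coeffs, key):
    """Blind each encoding coefficient with a PRF value so a public
    auditor sees only randomized coefficients."""
    return [(c + prf(key, i)) % P for i, c in enumerate(coeffs)]

def unmask_coefficients(masked, key):
    """Only the holder of `key` can strip the masks again."""
    return [(m - prf(key, i)) % P for i, m in enumerate(masked)]

key = b"data-owner-secret"
coeffs = [3, 1, 4, 1, 5]
masked = mask_coefficients(coeffs, key)
assert unmask_coefficients(masked, key) == coeffs
```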
PUBLIC INTEGRITY AUDITING FOR SHARED DYNAMIC CLOUD DATA WITH GROUP USER REVO... - Nexgen Technology
Security Check in Cloud Computing through Third Party Auditor - ijsrd.com
In cloud computing, data owners host their data on cloud servers, and users (data consumers) can access the data from those servers. Because the data is outsourced, however, an independent auditing service is required to check data integrity in the cloud. Some existing remote integrity checking methods can only serve static archived data and thus cannot be used by the auditing service, since data in the cloud can be dynamically updated. An efficient and secure dynamic auditing protocol is therefore required to convince data owners that their data are correctly stored in the cloud. In this paper, we first design an auditing framework for cloud storage systems and a privacy-preserving auditing protocol. Then, we extend our auditing protocol to support dynamic data operations efficiently and securely.
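One common way to support dynamic updates in such auditing protocols (not necessarily the construction used in this paper) is a Merkle hash tree: the owner keeps only the small root hash, and any block-level insertion, deletion, or modification changes the root. A minimal sketch:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    """Root hash over the current file blocks; the owner keeps only this
    small value, while the server keeps the blocks."""
    level = [h(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:               # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"block-0", b"block-1", b"block-2", b"block-3"]
root_before = merkle_root(blocks)

blocks[2] = b"block-2-updated"           # dynamic update to one block
root_after = merkle_root(blocks)

assert root_before != root_after         # any modification changes the root
assert merkle_root(blocks) == root_after # recomputation is deterministic
```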
Oruta: Privacy-Preserving Public Auditing for Shared Data in the Cloud - Nexgen Technology
International Journal of Engineering Research and Development (IJERD) - IJERD Editor
Centralized Data Verification Scheme for Encrypted Cloud Data Services - Editor IJMTER
Cloud environments support data sharing between multiple users. Data integrity can be violated by hardware/software failures and human errors. Data owners and public verifiers are involved to efficiently audit cloud data integrity without retrieving the entire data from the cloud server. File and block signatures are used in the integrity verification process.
The "One Ring to Rule Them All" (Oruta) scheme is used for the privacy-preserving public auditing process. In Oruta, homomorphic authenticators are constructed using ring signatures, which compute the verification metadata needed to audit the correctness of shared data; the identity of the signer on each block of shared data is kept private from public verifiers. The homomorphic authenticable ring signature (HARS) scheme is applied to provide identity privacy with blockless verification. A batch auditing mechanism supports performing multiple auditing tasks simultaneously, and Oruta is compatible with random masking to preserve data privacy from public verifiers. Dynamic data management is handled with index hash tables. However, traceability is not supported in the Oruta scheme, the sequence of dynamic data operations is not managed by the system, and the system incurs high computational overhead.
The proposed system is designed to perform public data verification with privacy. Traceability features are provided alongside identity privacy: the group manager or data owner can be allowed to reveal the identity of the signer based on verification metadata. A data version management mechanism is integrated with the system.
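The ring-signature construction itself is beyond a short example, but the blockless-verification property such schemes rely on can be illustrated with a deliberately simplified private-key toy (the constants are made up and the scheme is not secure as written): linearly homomorphic tags aggregate under a random challenge, so the verifier checks a single pair of values instead of retrieving the blocks.

```python
import hashlib
import hmac
import secrets

P = 2**61 - 1          # toy prime modulus
KEY = b"tag-secret"    # authenticator key (held by the verifier in this toy)
A = 1234567            # secret scaling factor of the tag

def prf(i: int) -> int:
    d = hmac.new(KEY, i.to_bytes(8, "big"), hashlib.sha256).digest()
    return int.from_bytes(d, "big") % P

def tag(i: int, m: int) -> int:
    """Linearly homomorphic tag for block i with content m."""
    return (A * m + prf(i)) % P

# Server side: stores blocks and their tags.
blocks = [17, 42, 99, 7]
tags = [tag(i, m) for i, m in enumerate(blocks)]

# Challenge: random coefficients; the server answers with two aggregates,
# so the verifier never retrieves the blocks themselves.
chal = [secrets.randbelow(P) for _ in blocks]
mu = sum(v * m for v, m in zip(chal, blocks)) % P
sigma = sum(v * t for v, t in zip(chal, tags)) % P

# Blockless verification: check the aggregates against the secret key.
assert sigma == (A * mu + sum(v * prf(i) for i, v in enumerate(chal))) % P
```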
With cloud storage services, it is commonplace for data to be not only stored in the cloud but also shared across multiple users. However, public auditing for such shared data, while preserving identity privacy, remains an open challenge. In this paper, we propose the first privacy-preserving mechanism that allows public auditing on shared data stored in the cloud. In particular, we exploit ring signatures to compute the verification information needed to audit the integrity of shared data. With our mechanism, the identity of the signer on each block of shared data is kept private from a third party auditor (TPA), who is still able to publicly verify the integrity of shared data without retrieving the entire file. Our experimental results demonstrate the effectiveness and efficiency of our proposed mechanism when auditing shared data.
Cloud computing is Internet-based computing, also known as the "pay per use" model: we pay only for the resources that are in use. The key barrier to widespread uptake of cloud computing is the lack of trust in clouds by potential customers. While preventive controls for security and privacy are actively being researched, there is still little focus on detective controls related to cloud accountability and auditability. The complexity resulting from the sheer amount of virtualization and data distribution in current clouds has also revealed an urgent need for research in cloud accountability, as has the shift of customer concerns from server health and utilization to the integrity and safety of end-users' data. In this paper we propose a method to store data provenance using Amazon S3 and SimpleDB.
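As a hedged sketch of the kind of provenance record such a system might keep per stored object (the field names here are hypothetical, and the real system would write entries to SimpleDB alongside the S3 object rather than to a local log):

```python
import hashlib
import json
import time

def provenance_record(key: str, data: bytes, actor: str, action: str) -> dict:
    """A provenance entry of the kind one might store alongside an S3
    object (e.g. in SimpleDB): who did what to which object, with a
    content hash so later tampering is detectable."""
    return {
        "object_key": key,
        "sha256": hashlib.sha256(data).hexdigest(),
        "actor": actor,
        "action": action,
        "timestamp": time.time(),
    }

rec = provenance_record("reports/q1.csv", b"col_a,col_b\n1,2\n", "alice", "PUT")
line = json.dumps(rec)                    # one append-only provenance log line
assert json.loads(line)["actor"] == "alice"
assert rec["sha256"] == hashlib.sha256(b"col_a,col_b\n1,2\n").hexdigest()
```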
EFFECTIVE MALWARE DETECTION APPROACH BASED ON DEEP LEARNING IN CYBER-PHYSICAL... - ijcsit
Cyber-physical systems based on advanced networks interact with other networks through wireless communication to enhance interoperability, dynamic mobility, and data supportability. The vast data is managed through a cloud platform that is vulnerable to cyber-attacks. This threatens customers' privacy and security, since third-party users must be authenticated by the network; if authentication fails, an attacker can cause extensive damage to the established network and disrupt its services. This paper proposes a DL-based CPS approach to identify and mitigate malware cyber-physical system attacks of the Denial of Service (DoS) and Distributed Denial of Service (DDoS) type, while ensuring adequate decision support so that only trusted user nodes are connected to the network. The approach helps to improve the privacy and authentication of the network by improving data accuracy and Quality of Service (QoS). The analysis shows that the proposed system improves network reliability and security compared to some existing SVM-based and Apriori-based detection approaches.
A CLOUD BASED ARCHITECTURE FOR WORKING ON BIG DATA WITH WORKFLOW MANAGEMENT - IJwest
In real environments there are collections of noisy and vague data, called Big Data. Middleware has been developed to work on such data and is now very widely used. The challenge of working on Big Data is its processing and management. An integrated management system is required to provide a solution for integrating data from multiple sensors and maximizing target success, in a situation where the system has constant time constraints for processing and real-time decision-making. A reliable data fusion model must meet this requirement and steadily let the user monitor the data stream. With the widespread use of workflow interfaces, this requirement can be addressed, but working with Big Data remains challenging. We provide a multi-agent cloud-based architecture as a higher-level vision to solve this problem. The architecture provides the ability to perform Big Data fusion using a workflow management interface. The proposed system is capable of self-repair in the presence of risks, and its risk is low.
DISTRIBUTED SCHEME TO AUTHENTICATE DATA STORAGE SECURITY IN CLOUD COMPUTING - ijcsit
Cloud computing is the revolution in current-generation IT enterprise. Cloud computing displaces databases and application software to large data centres, where the management of services and data may not be predictable, whereas conventional solutions for IT services are under proper logical, physical, and personnel controls. This aspect, however, comprises different security challenges which have not been well understood. This paper concentrates on cloud data storage security, which has always been an important aspect of quality of service (QoS). We designed and simulated an adaptable and efficient scheme to guarantee the correctness of user data stored in the cloud, with some prominent features. A homomorphic token is used for distributed verification of erasure-coded data; by using this scheme, we can identify misbehaving servers. Unlike past works, our scheme supports effective and secure dynamic operations on data blocks, such as data insertion, deletion, and modification. In contrast to traditional solutions, where IT services are under proper physical, logical, and personnel controls, cloud computing moves application software and databases to large data centres, where data management and services may not be absolutely trustworthy. The security and performance analysis shows that the proposed scheme is highly resilient against malicious data modification, complex failures, and server colluding attacks.
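The homomorphic-token idea can be sketched in miniature (a toy with made-up names, not the paper's scheme): the owner precomputes short tokens over challenged block indices before outsourcing, and a server whose later answer mismatches its token is identified as misbehaving, even across erasure-coded (here, XOR-parity) stripes.

```python
import hashlib

def token(server_blocks, indices, nonce: bytes) -> str:
    """Verification token over the challenged block indices of one server."""
    h = hashlib.sha256(nonce)
    for i in indices:
        h.update(server_blocks[i])
    return h.hexdigest()

# File striped across two data servers, plus an XOR parity server.
s0 = [b"a0", b"a1"]
s1 = [b"b0", b"b1"]
parity = [bytes(x ^ y for x, y in zip(p, q)) for p, q in zip(s0, s1)]
servers = {"s0": s0, "s1": s1, "parity": parity}

# Owner precomputes tokens before outsourcing, keeping only these digests.
nonce, indices = b"challenge-7", [0, 1]
expected = {name: token(blocks, indices, nonce) for name, blocks in servers.items()}

# Later audit: each server recomputes its token over its stored blocks.
servers["s1"][0] = b"XX"                       # server s1 corrupts a block
answers = {name: token(blocks, indices, nonce) for name, blocks in servers.items()}
misbehaving = [n for n in servers if answers[n] != expected[n]]
assert misbehaving == ["s1"]                   # the faulty server is identified
```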
Evasion Streamline Intruders Using Graph Based Attacker model Analysis and Co... - Editor IJCATR
Network Intrusion detection and Countermeasure Selection in virtual network systems (NICE) is used to establish a defense-in-depth intrusion detection framework. For better attack detection, NICE incorporates attack graph analytical procedures into the intrusion detection process. Note that NICE does not intend to improve any existing intrusion detection algorithm; instead, it employs a reconfigurable virtual networking approach to detect and counter attempts to compromise VMs, thus preventing zombie VMs. NICE includes two main phases. First, a lightweight mirroring-based network intrusion detection agent (NICE-A) is deployed on each cloud server to capture and analyze cloud traffic. NICE-A periodically scans the virtual system vulnerabilities within a cloud server to establish Scenario Attack Graphs (SAGs); then, based on the severity of the identified vulnerabilities toward the collaborative attack goals, NICE decides whether or not to put a VM into a network inspection state. Once a VM enters the inspection state, Deep Packet Inspection (DPI) is applied, and/or virtual network reconfigurations can be deployed to the inspected VM to make potential attack behaviors prominent.
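The severity-based inspection decision can be sketched as follows (the graph, scores, and threshold are invented for illustration): walk the Scenario Attack Graph from a VM toward the attack goal and flag the VM for inspection when some path carries a sufficiently severe vulnerability.

```python
# Edges of a toy scenario attack graph: (target node, severity 0-10 of the
# exploited vulnerability). Node "goal" is the collaborative attack target.
edges = {
    "vm1": [("vm2", 4.0)],
    "vm2": [("goal", 9.8)],
    "vm3": [("goal", 3.1)],
}

def worst_path_severity(node, goal="goal"):
    """Highest single-vulnerability severity on any path from node to goal,
    or None if the goal is unreachable from this node."""
    if node == goal:
        return 0.0
    best = None
    for nxt, sev in edges.get(node, []):
        sub = worst_path_severity(nxt, goal)
        if sub is not None:
            cand = max(sev, sub)
            best = cand if best is None else max(best, cand)
    return best

THRESHOLD = 7.0  # illustrative cutoff for entering the inspection state

def needs_inspection(vm):
    s = worst_path_severity(vm)
    return s is not None and s >= THRESHOLD

assert needs_inspection("vm1")       # reaches the goal via a 9.8 vulnerability
assert not needs_inspection("vm3")   # only a low-severity path to the goal
```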
DDOS ATTACKS DETECTION USING DYNAMIC ENTROPY IN SOFTWARE-DEFINED NETWORK PRACT... - IJCNCJournal
Software-Defined Network (SDN) is an innovative network architecture with the goal of providing flexibility and simplicity in network operation and management through a centralized controller. These features help SDN easily adapt to the expansion of network requirements, but they are also a weakness when it comes to security. With its centralized architecture, SDN is vulnerable to cyber-attacks, especially the Distributed Denial of Service (DDoS) attack, a popular attack type which consumes network resources and causes congestion across the entire network. In this research, we introduce a DDoS detection model based on a statistical method with a dynamic threshold value that changes over time. Alongside the simulation results, we build a practical SDN model to apply our method; the results show that our method can detect DDoS attacks rapidly with high accuracy.
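The abstract does not give the paper's exact statistics, but an entropy-based detector with a dynamic threshold can be sketched like this (the window contents and the k factor are illustrative): a DDoS concentrates traffic on few destinations, which drives the entropy of the destination distribution sharply below the adaptive threshold.

```python
import math
from collections import Counter

def entropy(packets):
    """Shannon entropy of the destination-IP distribution in one window."""
    counts = Counter(packets)
    total = len(packets)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def dynamic_threshold(history, k=3.0):
    """Threshold that adapts over time: mean of recent windows minus
    k standard deviations of those windows."""
    mean = sum(history) / len(history)
    var = sum((h - mean) ** 2 for h in history) / len(history)
    return mean - k * math.sqrt(var)

# Normal windows: traffic spread evenly across 50 destinations.
normal = [["10.0.0.%d" % (i % 50) for i in range(200)] for _ in range(5)]
history = [entropy(w) for w in normal]

# Attack window: 90% of packets target one victim, collapsing the entropy.
attack = ["10.0.0.7"] * 180 + ["10.0.0.%d" % i for i in range(20)]
assert entropy(attack) < dynamic_threshold(history)
```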
An Algorithm to Synchronize the Local Database with the Cloud Database - AM Publications
Since the cloud computing [1] platform is widely accepted by industry, a variety of applications are designed targeting cloud platforms. Database as a Service (DaaS) is one of the powerful platforms of cloud computing. There are many research issues in the DaaS platform, and one among them is data synchronization. Many approaches suggested in the literature synchronize a local database from within the cloud environment; unfortunately, very little work addresses synchronizing a cloud database from the local database. The aim of this paper is to provide an algorithm to solve the problem of data synchronization from a local database to a cloud database.
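The paper's algorithm is not reproduced in the abstract; as a hedged illustration of one-way local-to-cloud synchronization, here is a version-number sketch using two SQLite databases standing in for the local and cloud stores (the schema and field names are invented):

```python
import sqlite3

def make_db():
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, val TEXT, ver INTEGER)")
    return db

def sync_up(local, cloud):
    """Push every local row whose version is newer than (or absent from)
    the cloud copy; one direction only, local -> cloud."""
    pushed = 0
    for id_, val, ver in local.execute("SELECT id, val, ver FROM items"):
        row = cloud.execute("SELECT ver FROM items WHERE id=?", (id_,)).fetchone()
        if row is None or row[0] < ver:
            cloud.execute(
                "INSERT INTO items(id, val, ver) VALUES(?,?,?) "
                "ON CONFLICT(id) DO UPDATE SET val=excluded.val, ver=excluded.ver",
                (id_, val, ver))
            pushed += 1
    cloud.commit()
    return pushed

local, cloud = make_db(), make_db()
local.executemany("INSERT INTO items VALUES(?,?,?)",
                  [(1, "a", 2), (2, "b", 1)])
cloud.execute("INSERT INTO items VALUES(1, 'old', 1)")
assert sync_up(local, cloud) == 2          # id 1 updated, id 2 inserted
assert cloud.execute("SELECT val FROM items WHERE id=1").fetchone()[0] == "a"
```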
Excellent Manner of Using a Secure Way of Data Storage in Cloud Computing - Editor IJMTER
The major challenging issue in cloud computing is security. Providing security is a big issue in protecting data from third parties as well as on the Internet; this paper mainly deals with how security is provided. Various types of services exist to protect our data, and various service models are available in cloud computing, such as Software as a Service (SaaS), Platform as a Service (PaaS), and Hardware as a Service (HaaS). Cloud computing is the use of computing resources (hardware and software) delivered as a service over the Internet. It moves application software and databases to large data centres, where the administration of data and services may not be fully trustworthy; the third party must therefore be certified and authorized. Since cloud computing shares distributed resources via the network in an open environment, it creates new security risks for the correctness of data in the cloud. In this paper I propose a flexible data storage mechanism for the distributed environment using homomorphic token generation. In the proposed system, users can audit the cloud storage with lightweight communication. Encryption and decryption are a heavy burden for a single processor, so the processing capabilities of cloud computing can be utilized instead.
APPLICATION-LAYER DDOS DETECTION BASED ON A ONE-CLASS SUPPORT VECTOR MACHINEIJNSA Journal
Application-layer Distributed Denial-of-Service (DDoS) attack takes advantage of the complexity and
diversity of network protocols and services. This kind of attack is more difficult to prevent than other kinds
of DDoS attacks. This paper introduces a novel detection mechanism for application-layer DDoS attack
based on a One-Class Support Vector Machine (OC-SVM). Support vector machine (SVM) is a relatively
new machine learning technique based on statistics. OC-SVM is a special variant of the SVM and since
only the normal data is required for training, it is effective for detection of application-layer DDoS attack.
In this detection strategy, we first extract 7 features from normal users’ sessions. Then, we build normal
users’ browsing models by using OC-SVM. Finally, we use these models to detect application-layer DDoS
attacks. Numerical results based on simulation experiments demonstrate the efficacy of our detection
method.
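A full implementation would train scikit-learn's `OneClassSVM` on the 7 session features; as a dependency-free sketch of the same one-class idea (learn a boundary from normal data only, flag everything outside it), here is a simplified hypersphere detector. The quantile and the toy feature vectors are illustrative assumptions:

```python
import math

class SimpleOneClassDetector:
    """Dependency-free stand-in for a One-Class SVM: fit a hypersphere around
    normal session feature vectors and flag points that fall outside it.
    (A real system would use e.g. sklearn.svm.OneClassSVM.)"""
    def __init__(self, quantile=0.95):
        self.quantile = quantile
        self.center = None
        self.radius = None

    def fit(self, X):
        dims = len(X[0])
        # centroid of the normal training sessions
        self.center = [sum(x[d] for x in X) / len(X) for d in range(dims)]
        dists = sorted(self._dist(x) for x in X)
        # radius chosen so the boundary covers the given quantile of normal points
        self.radius = dists[min(len(dists) - 1, int(self.quantile * len(dists)))]
        return self

    def _dist(self, x):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, self.center)))

    def predict(self, x):
        return "normal" if self._dist(x) <= self.radius else "attack"
```

The key property mirrored here is the one the abstract relies on: only normal sessions are needed for training, and anything far from the learned normal region is classified as an attack.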
A DDS-Based Scalable and Reconfigurable Framework for Cyber-Physical Systemsijseajournal
Cyber-Physical Systems (CPSs) involve the interconnection of heterogeneous computing devices which are
closely integrated with the physical processes under control. Often, these systems are resource-constrained
and require specific features such as the ability to adapt in a timely and efficient fashion to dynamic
environments. Also, they must support fault tolerance and avoid single points of failure. This paper
describes a scalable framework for CPSs based on the OMG DDS standard. The proposed solution allows
reconfiguring such systems at run-time and managing their resources efficiently.
A New Way of Identifying DOS Attack Using Multivariate Correlation Analysisijceronline
An intelligent system to detect slow denial of service attacks in software-de...IJECEIAES
Slow denial of service attack (DoS) is a tricky issue in software-defined network (SDN) as it uses less bandwidth to attack a server. In this paper, a slow-rate DoS attack called Slowloris is detected and mitigated on Apache2 and Nginx servers using a methodology called an intelligent system for slow DoS detection using machine learning (ISSDM) in SDN. Data generation module of ISSDM generates dataset with response time, the number of connections, timeout, and pattern match as features. Data are generated in a real environment using Apache2, Nginx server, Zodiac FX OpenFlow switch and Ryu controller. Monte Carlo simulation is used to estimate threshold values for attack classification. Further, ISSDM performs header inspection using regular expressions to mark flows as legitimate or attacked during data generation. The proposed feature selection module of ISSDM, called blended statistical and information gain (BSIG), selects those features that contribute best to classification. These features are used for classification by various machine learning and deep learning models. Results are compared with feature selection methods like Chi-square, T-test, and information gain.
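The Monte Carlo thresholding step described above can be sketched as follows; the response-time distribution, sample count, and percentile are illustrative assumptions, not ISSDM's actual parameters:

```python
import random

def monte_carlo_threshold(sampler, n=10000, percentile=0.99):
    """Simulate n draws of a normal-traffic feature (e.g. server response
    time) and take a high percentile as the attack-classification threshold."""
    samples = sorted(sampler() for _ in range(n))
    return samples[int(percentile * (n - 1))]

def classify_flow(response_time, threshold):
    """A Slowloris-style flow holds connections open, so its effective
    response time far exceeds the threshold estimated from legitimate traffic."""
    return "attack" if response_time > threshold else "legitimate"

# assume legitimate response times roughly follow N(0.2 s, 0.05 s)
random.seed(7)
threshold = monte_carlo_threshold(lambda: random.gauss(0.2, 0.05))
```

The appeal of the Monte Carlo estimate is that no closed-form distribution of the feature is needed: any sampler of legitimate behaviour yields a usable threshold.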
ANALYSIS OF ATTACK TECHNIQUES ON CLOUD BASED DATA DEDUPLICATION TECHNIQUESneirew J
Data in the cloud is increasing rapidly. This huge amount of data is stored in various data centers around the world. Data deduplication allows lossless compression by removing duplicate data, so these data centers are able to utilize storage efficiently by removing redundant data. Attacks on cloud computing infrastructure are not new, but attacks based on the deduplication feature in cloud computing are relatively new and have become a pressing concern nowadays. Attacks on the deduplication feature in the cloud environment can happen in several ways and can give away sensitive information. Though the deduplication feature facilitates efficient storage usage and bandwidth utilization, it also has some drawbacks. In this paper, data deduplication features are closely examined. The behavior of data deduplication depending on its various parameters is explained and analyzed.
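To make the mechanism under attack concrete, here is a minimal sketch of hash-based (content-addressed) deduplication; the class and counter names are illustrative. The observable `dedup_hits` counter shows why deduplication can leak whether a given file already exists in the store, which is the basis of several of the side-channel attacks analyzed in such work:

```python
import hashlib

class DedupStore:
    """Content-addressed store: each block is kept once, keyed by its hash,
    so duplicate uploads cost no extra space (lossless deduplication)."""
    def __init__(self):
        self.blocks = {}       # fingerprint -> data
        self.dedup_hits = 0    # uploads that were already present

    def put(self, data: bytes) -> str:
        fp = hashlib.sha256(data).hexdigest()
        if fp in self.blocks:
            # Observable to a client (e.g. upload skipped): this is the
            # information leak that deduplication attacks exploit.
            self.dedup_hits += 1
        else:
            self.blocks[fp] = data
        return fp

    def get(self, fp: str) -> bytes:
        return self.blocks[fp]
```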
SUCCESS-DRIVING BUSINESS MODEL CHARACTERISTICS OF IAAS AND PAAS PROVIDERSneirew J
Market analyses show that some cloud providers are significantly more successful than others. The research on the success-driving business model characteristics of cloud providers, and thus the reasons for this performance discrepancy, is however still limited. Whereas cloud business models have mostly been examined comprehensively, independently of the distinctly different cloud ecosystem roles, this paper takes a perspective shift from an overall towards a selective, role-specific and thereby ecosystemic perspective on cloud business models. The goal of this paper is specifically to identify the success-driving business model characteristics of the so far widely neglected cloud ecosystem’s core roles, IaaS and PaaS provider, by conducting an exploratory multiple-case study. 21 expert interviews with representatives from 17 cloud providers serve as the central data collection instrument. The result is a catalogue of generic as well as cloud-specific (subdivided into role-overarching and role-specific) business model characteristics. This catalogue supports cloud providers in the initial design, comparison and revision of their business models. Researchers obtain a promising starting and reference point for future analysis of business models of various cloud ecosystem roles.
Strategic Business Challenges in Cloud Systemsneirew J
For the past few years, the evolution of cloud computing has potentially been one of the major advances in the history of computing. But is cloud computing the saviour of business? Does it signal the demise of corporate IT functionality entirely? If cloud computing is to achieve its potential, there needs to be a clear understanding of the various issues involved, from the perspectives of both providers and consumers, related to the technology, management and business aspects. The objective of this research is to explore the strategic business, management and technical challenges existing in cloud systems. It is believed that adopting a methodology and a corresponding architectural framework would serve as a comprehensive conceptual tool that shows a path for mitigating these challenges; accordingly, a suitable methodology is presented along with a brief description. It concludes that the IBM Common Cloud Management Platform is one way to realize the combined features of various models, such as the Hub & Spoke model as a governance model and the Gen-Spec research methodology design for semantic and quality research studies, in the form of a Reference Architecture. However, in order to realize the full potential of the Customer-Respond-Adapt-Sense-Provider (conceptual) methodology for dealing with semantics, it is important to consider the Internet of Things Architecture Reference Model, wherein resources are translated into services.
Laypeople's and Experts' Risk Perception of Cloud Computing Services neirew J
Cloud computing is revolutionising the way software services are procured and used by Government
organizations and SMEs. Quantitative risk assessment of Cloud services is complex and undermined by
specific security concerns regarding data confidentiality, integrity and availability. This study explores how
the gap between the quantitative risk assessment and the perception of the risk can produce a bias in the
decision-making process about Cloud computing adoption.
The risk perception of experts in Cloud computing (N=37) and laypeople (N=81) about ten Cloud
computing services was investigated using the psychometric paradigm. Results suggest that the risk
perception of Cloud services can be represented by two components, called “dread risk” and “unknown
risk”, which may explain up to 46% of the variance. Other factors influencing the risk perception were
“perceived benefits”, “trust in regulatory authorities” and “technology attitude”.
This study suggests some implications that could support Government and non-Government organizations
in their strategies for Cloud computing adoption.
Factors Influencing Risk Acceptance of Cloud Computing Services in the UK Gov...neirew J
Cloud Computing services are increasingly being made available by the UK Government through the
Government digital marketplace to reduce costs and improve IT efficiency; however, little is known about
factors influencing the decision making process to adopt cloud services within the UK Government. This
research aims to develop a theoretical framework to understand risk perception and risk acceptance of
cloud computing services.
The study’s subjects (N=24) were recruited from three UK Government organizations to attend a semi-structured interview. Transcribed texts were analyzed using the approach termed interpretive phenomenological analysis. Results showed that the most important factors influencing risk acceptance of cloud services are: perceived benefits and opportunities, the organization’s risk culture and perceived risks. We focused on perceived risks and perceived security concerns. Based on these results, we suggest a number of implications for risk managers, policy makers and cloud service providers.
A Cloud Security Approach for Data at Rest Using FPE neirew J
In a cloud scenario, the biggest concern is the security of the data. “Both data in transit and at rest must be secure” is a primary goal of any organization. Data in transit can be made secure using TLS-level security such as SSL certificates. But data at rest is not quite as secure, as database servers in the public cloud domain are more prone to vulnerabilities. Not all cloud providers offer out-of-the-box encryption with their offerings. Also, implementing traditional encryption techniques causes a lot of changes at the application as well as the database level. This paper provides an efficient approach to encrypt data using the Format Preserving Encryption (FPE) technique. FPE focuses mainly on encrypting data without changing its format, so that it is easy to develop and migrate legacy applications to the cloud. It is capable of performing format-preserving encryption on numeric data, strings, and combinations of both. This paper states various features and advantages of the same.
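Production FPE uses the NIST FF1/FF3 modes; as a toy illustration of the format-preserving idea only (ciphertext keeps the plaintext's length and digit alphabet, and decryption inverts it), here is a hash-based Feistel sketch for even-length digit strings. This is an assumption-laden teaching example, not a vetted cipher:

```python
import hashlib

def _prf(key: bytes, data: str, rnd: int, width: int) -> int:
    """Keyed round function: hash the key, round number and half-block,
    reduced to a number of at most `width` digits."""
    digest = hashlib.sha256(key + bytes([rnd]) + data.encode()).hexdigest()
    return int(digest, 16) % (10 ** width)

def fpe_encrypt(key: bytes, digits: str, rounds: int = 8) -> str:
    """Toy Feistel-based FPE over an even-length digit string."""
    half = len(digits) // 2
    left, right = digits[:half], digits[half:]
    for r in range(rounds):
        # mix the right half into the left, then swap (classic Feistel step)
        mixed = (int(left) + _prf(key, right, r, half)) % (10 ** half)
        left, right = right, str(mixed).zfill(half)
    return left + right

def fpe_decrypt(key: bytes, digits: str, rounds: int = 8) -> str:
    """Run the Feistel rounds in reverse to recover the plaintext."""
    half = len(digits) // 2
    left, right = digits[:half], digits[half:]
    for r in reversed(range(rounds)):
        mixed = (int(right) - _prf(key, left, r, half)) % (10 ** half)
        left, right = str(mixed).zfill(half), left
    return left + right
```

An 8-digit account number encrypts to another 8-digit string, so database column types and application-level validation need not change — which is exactly the migration benefit the abstract claims for FPE.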
Error Isolation and Management in Agile Multi-Tenant Cloud Based Applications neirew J
Management of errors in multi-tenant cloud based applications remains a challenging problem. This
problem is compounded due to (i) multiple versions of application serving different clients, (ii) agile nature
in which the applications are released to the clients, and (iii) variations in specific usage patterns of each
client. We propose a framework for isolating and managing errors in such applications. The proposed
framework is evaluated with two different popular cloud based applications and empirical results are
presented.
Locality Sim : Cloud Simulator with Data Localityneirew J
Cloud Computing (CC) is a model for enabling on-demand access to a shared pool of configurable computing resources. Testing and evaluating the performance of the cloud environment for allocation, provisioning, scheduling, and data allocation policy has received great attention. Therefore, using a cloud simulator saves time and money and provides a flexible environment for evaluating new research work. Unfortunately, the current simulators (e.g., CloudSim, NetworkCloudSim, GreenCloud, etc.) deal with data in terms of size only, without any consideration of the data allocation policy and locality. On the other hand, the NetworkCloudSim simulator is considered one of the most commonly used simulators because it includes different modules which support the functions needed for a simulated cloud environment, and it can be extended with new modules. In this paper, the NetworkCloudSim simulator has been extended and modified to support data locality. The modified simulator is called LocalitySim. The accuracy of the proposed LocalitySim simulator has been proved by building a mathematical model. Also, the proposed simulator has been used to test the performance of a three-tier data center as a case study, considering the data locality feature.
Benefits and Challenges of the Adoption of Cloud Computing in Businessneirew J
The loss of business and economic downturns occur almost every day; thus, technology is needed in every organization. Cloud computing has played a major role in solving inefficiency problems in organizations and in increasing the growth of business, thereby helping organizations stay competitive. It is required to improve and automate the traditional ways of doing business. Cloud computing has been considered an innovative way to improve business. Overall, cloud computing enables organizations to manage their business efficiently. Unnecessary procedural, administrative, hardware and software costs in organizations’ expenses are avoided using cloud computing. Although cloud computing provides advantages, that does not mean there are no drawbacks. Security, including cloud attacks, has become the major concern in the cloud. Business organizations need to be alert against attacks on their cloud storage. The benefits and drawbacks of cloud computing in business are explored in this paper. Some solutions are also provided to overcome the drawbacks. The method used is secondary research, that is, collecting data from published journal papers and conference papers.
A Survey on Resource Allocation in Cloud Computingneirew J
Cloud computing is an on-demand service model that delivers resources, from applications to data centers, on a pay-per-use basis. In order to allocate these resources properly and satisfy users’ demands, an efficient and flexible resource allocation mechanism is needed. Due to increasing user demand, the resource
allocating process has become more challenging and difficult. One of the main focuses of research
scholars is how to develop optimal solutions for this process. In this paper, a literature review on proposed
dynamic resource allocation techniques is introduced.
An Approach to Reduce Energy Consumption in Cloud data centers using Harmony ...neirew J
The fast development of knowledge and communication has established a new computational style known as cloud computing. One of the main issues considered by cloud infrastructure providers is to minimize costs and maximize profitability. Energy management in cloud data centers is very important to achieve this goal. Energy consumption can be reduced either by releasing idle nodes or by reducing virtual machine migrations. To do the latter, one of the challenges is to select the placement
approach of the migrated virtual machines on the appropriate node. In this paper, an approach to reduce
the energy consumption in cloud data centers is proposed. This approach adapts harmony search
algorithm to migrate the virtual machines. It performs the placement by sorting the nodes and virtual
machines based on their priority in descending order. The priority is calculated based on the workload.
The proposed approach is simulated. The evaluation results show the reduction in the virtual machine
migrations, the increase of efficiency and the reduction of energy consumption.
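The placement step described above — sorting nodes and virtual machines by workload-based priority in descending order — can be sketched as follows; a full implementation would embed this ordering inside the harmony search loop, and the capacity and workload numbers are illustrative:

```python
def place_vms(nodes, vms):
    """Priority-based placement sketch: sort nodes by capacity and VMs by
    workload, both descending, then assign each VM to the first node that
    still has capacity (first-fit decreasing).  Packing VMs onto fewer nodes
    lets idle nodes be released, reducing energy consumption.
    nodes: {name: capacity}, vms: {name: workload}; returns {vm: node}."""
    free = dict(sorted(nodes.items(), key=lambda kv: -kv[1]))
    placement = {}
    for vm, load in sorted(vms.items(), key=lambda kv: -kv[1]):
        for node in free:
            if free[node] >= load:
                free[node] -= load
                placement[vm] = node
                break
        else:
            raise ValueError(f"no capacity for {vm}")
    return placement
```

In the paper's approach, candidate placements like this one would be scored and refined by the harmony search metaheuristic; the sketch shows only the deterministic priority ordering.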
Data Distribution Handling on Cloud for Deployment of Big Dataneirew J
Cloud computing is a new emerging model in the field of computer science. For varying workloads, cloud computing presents a large-scale on-demand infrastructure. The primary usage of clouds in practice is to process massive amounts of data. Processing large datasets has become crucial in research and business environments. The big challenge associated with processing large datasets is the vast infrastructure required. Cloud computing provides vast infrastructure to store and process Big Data. VMs can be provisioned on demand in the cloud to process the data by forming a cluster of VMs. The MapReduce paradigm can be used to process the data, wherein the mapper assigns part of the task to particular VMs in the cluster and the reducer combines the individual outputs from each VM to produce the final result. We have proposed an algorithm to reduce the overall data distribution and processing time. We tested our solution in the CloudAnalyst simulation environment, where we found that our proposed algorithm significantly reduces the overall data processing time in the cloud.
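The MapReduce flow described above reduces to a few lines; the word-count example and the sequential loop (standing in for per-VM parallel execution) are illustrative:

```python
from collections import defaultdict

def map_reduce(chunks, mapper, reducer):
    """Minimal MapReduce sketch: the mapper turns each data chunk (as held
    by one VM in the cluster) into (key, value) pairs; the reducer combines
    all values per key into the final result."""
    intermediate = defaultdict(list)
    for chunk in chunks:          # in a real cloud, each chunk runs on its own VM
        for key, value in mapper(chunk):
            intermediate[key].append(value)
    return {key: reducer(values) for key, values in intermediate.items()}

# word count across three "VM" chunks
chunks = [["cloud", "data"], ["data", "big"], ["cloud", "cloud"]]
result = map_reduce(chunks, lambda c: [(w, 1) for w in c], sum)
```

Because each chunk is mapped independently, how the chunks are distributed to VMs dominates the overall processing time — which is exactly the distribution cost the proposed algorithm targets.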
Cloud computing has been an attractive research area for the last few years, and there has been tremendous growth in the number of educational institutions all over the world that have either adopted or are considering migrating to cloud computing. However, there are many concerns and reservations about adopting conventional or public cloud based solutions. A new paradigm of cloud based solution has been proposed, namely the private cloud based solution, which has become an attractive choice for educational institutions. This paper presents the adjustment and implementation of a private cloud based solution for a multi-campus educational institution, namely Al-Balqa Applied University (BAU) in Jordan.
Implementation of the Open Source Virtualization Technologies in Cloud Computingneirew J
“Virtualization and Cloud Computing” is a recent buzzword in the digital world. Behind this fancy poetic phrase lies a true picture of future computing, from both a technical and a social perspective. Though virtualization and cloud computing are recent, the idea of centralizing computation and storage in distributed data centres maintained by third-party companies is not new; it came about back in the 1990s along with distributed computing approaches like grid computing, clustering and network load balancing. Cloud computing provides IT as a service to users on an on-demand basis. This service has greater flexibility, availability, reliability and scalability with a utility computing model. This new concept of computing has immense potential to be used in the field of e-governance and in the overall IT development perspective in developing countries like Bangladesh.
A Broker-based Framework for Integrated SLA-Aware SaaS Provisioning neirew J
In the service landscape, the issues of service selection, negotiation of Service Level Agreements (SLA), and
SLA-compliance monitoring have typically been used in separate and disparate ways, which affect the
quality of the services that consumers obtain from their providers. In this work, we propose a broker-based
framework to deal with these concerns in an integrated manner for Software as a Service (SaaS)
provisioning. The SaaS Broker selects a suitable SaaS provider on behalf of the service consumer by using
a utility-driven selection algorithm that ranks the QoS offerings of potential SaaS providers. Then, it
negotiates the SLA terms with that provider based on the quality requirements of the service consumer. The
monitoring infrastructure observes SLA-compliance during service delivery by using measurements
obtained from third-party monitoring services. We also define a utility-based bargaining decision model
that allows the service consumer to express her sensitivity for each of the negotiated quality attributes and
to evaluate the SaaS provider’s offer in each round of negotiation. A use-case with a few quality attributes and
their respective utility functions illustrates the approach.
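The utility-driven selection step can be sketched as an additive utility over normalized QoS attributes; the attribute names, weights, and scores below are illustrative assumptions, not the paper's model:

```python
def utility(offer, weights):
    """Additive utility of one provider's QoS offer: each attribute score in
    [0, 1] (normalized so higher is better) times the consumer's weight."""
    return sum(weights[attr] * score for attr, score in offer.items())

def select_provider(offers, weights):
    """The broker ranks candidate SaaS providers by utility and returns the
    best one on behalf of the service consumer."""
    return max(offers, key=lambda name: utility(offers[name], weights))

offers = {
    "ProviderA": {"availability": 0.9, "response_time": 0.4},
    "ProviderB": {"availability": 0.8, "response_time": 0.9},
}
weights = {"availability": 0.7, "response_time": 0.3}  # consumer's sensitivities
```

The weights play the same role as the sensitivities in the bargaining decision model: changing them changes which provider's offer ranks highest.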
Comparative Study of Various Platform as a Service Frameworks neirew J
Cloud computing is an emerging paradigm with three basic service models: Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). This paper focuses on different kinds of PaaS frameworks. The PaaS model provides a choice of cloud, developer framework and application service. In this paper, a detailed study of four open PaaS frameworks, AppScale, Cloud Foundry, Cloudify, and OpenShift, is presented along with their architectural components. We also briefly describe further PaaS offerings such as Stratos, mOSAIC, BlueMix, Heroku, Amazon Elastic Beanstalk, Microsoft Azure, Google App Engine and Stackato. Finally, we present a comparative study of these PaaS frameworks.
Neuro-Fuzzy System Based Dynamic Resource Allocation in Collaborative Cloud C...neirew J
Cloud collaboration is an emerging technology which enables sharing of computer files using cloud
computing. Here the cloud resources are assembled and cloud services are provided using these resources.
Cloud collaboration technologies are allowing users to share documents. Resource allocation in the cloud
is challenging because resources offer different Quality of Service (QoS) and services running on these
resources are risky with respect to user demands. We propose a solution for resource allocation based on multi-attribute QoS scoring, considering parameters such as the distance to the resource from the user site, the reputation of the resource, task completion time, task completion ratio, and the load at the resource. The proposed algorithm, referred to as Multi Attribute QoS Scoring (MAQS), uses a neuro-fuzzy system. We have also included a speculative manager to handle fault tolerance. In this paper it is shown that the proposed algorithm performs better than others, including power-trust reputation-based algorithms and the harmony method, which use a single attribute to compute the reputation score of each allocated resource.
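The fuzzification half of such a multi-attribute scorer can be sketched with triangular membership functions over the attributes the abstract lists; the membership ranges are illustrative assumptions, and in the paper the neural part of the neuro-fuzzy system would tune them rather than fix them by hand:

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership: 0 outside [a, c], peaking at 1 when x == b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_qos_score(resource):
    """Fuzzify three of the abstract's attributes into 'good' memberships
    and average them into a single QoS score in [0, 1]."""
    good_distance = tri(resource["distance_km"], -1, 0, 500)  # nearer is better
    good_reputation = tri(resource["reputation"], 0, 1, 2)    # peaks at 1.0
    good_load = tri(resource["load"], -1, 0, 1)               # idle is better
    return (good_distance + good_reputation + good_load) / 3
```

Scoring each candidate resource this way and allocating to the highest score is the multi-attribute alternative to the single-attribute reputation schemes the paper compares against.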
A Proposed Model for Improving Performance and Reducing Costs of IT Through C...neirew J
Information technologies affect today’s big business enterprises, from data processing and transactions to achieving goals efficiently and effectively; they create new business opportunities and new competitive advantages, so services must match recent IT trends such as cloud computing. Cloud computing technology can provide all IT services. Therefore, cloud computing offers an adaptable alternative to the current technology model, reducing costs (fixed and ongoing) through the proliferation of high-speed Internet connections, renting rather than acquiring, cheaper powerful computing technology, and effective performance. Public and private clouds are characterized by flexibility and operational efficiency, which reduce costs and improve performance. Cloud computing also generates business creativity and innovation resulting from the collaborative ideas of users; presents cloud infrastructure and services; paves the way for new markets; offers security in public and private clouds; and provides environmental benefits by utilizing green energy technology. This paper concentrates mainly on cloud computing.
A secure cloud transmission protocol (SCTP) was proposed in preceding work to achieve strong authentication and a secure channel in the cloud computing paradigm. SCTP includes its own techniques to attain cloud security: a multilevel authentication technique with a multidimensional password generation system for strong authentication; a multilevel cryptography technique for a secure channel; and a usage-profile-based intruder detection and prevention system to resist intruder attacks. SCTP was designed, developed and analyzed using protocol engineering phases, and its complete design is presented using the Petri net production model. We present the designed SCTP Petri net models and their analysis, and discuss the SCTP design and its performance in achieving strong authentication, a secure channel and intruder prevention. SCTP is designed for use in any cloud application: it can authorize, authenticate, secure the channel and prevent intruders during cloud transactions, and it protects against the different attacks mentioned in the literature. This paper depicts the SCTP performance analysis report, which compares SCTP with existing techniques proposed to achieve authentication, authorization, security and intruder prevention.
Attribute Based Access Control (ABAC) for EHR in Fog Computing Environmentneirew J
Cisco recently proposed a new computing environment called fog computing to support latency-sensitive and real-time applications. It is a connection of billions of devices near the network edge. This computing environment is appropriate for Electronic Medical Record (EMR) systems, which are latency-sensitive in nature. In this paper, we aim to achieve two goals: (1) managing and sharing Electronic Health Records (EHRs) between multiple fog nodes and the cloud, and (2) focusing on the security of EHRs, which contain highly confidential information. We secure access to the EHR in fog computing without affecting the performance of the fog nodes. We cater to different users based on their attributes, thus providing Attribute-Based Access Control (ABAC) to the EHR in the fog to prevent unauthorized access. We focus on reducing storage and processing in the fog nodes to accommodate their limited storage and computing capabilities and to improve their performance.
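A minimal ABAC decision point of the kind described can be sketched as follows; the roles, attribute names, and policy rules are purely illustrative:

```python
# Each rule lists the attribute values a requester must have to perform an
# action on an EHR resource.  In a fog deployment this small table and check
# are cheap enough to run on a resource-constrained fog node.
POLICY = [
    {"action": "read",  "require": {"role": "doctor", "department": "cardiology"}},
    {"action": "read",  "require": {"role": "nurse"}},
    {"action": "write", "require": {"role": "doctor"}},
]

def abac_allows(user_attrs, action):
    """Grant access iff some rule for this action is satisfied by the
    user's attributes (a minimal ABAC policy decision point)."""
    return any(
        rule["action"] == action and
        all(user_attrs.get(k) == v for k, v in rule["require"].items())
        for rule in POLICY
    )
```

Unlike role-based control, the decision can combine any number of attributes (role, department, location, time), which is what makes ABAC a fit for the heterogeneous users of an EHR system.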
Generating a custom Ruby SDK for your web service or Rails API using Smithyg2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips and strategies for successful relationship building that leads to closing the deal.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Â
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Â
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
Â
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more âmechanicalâ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Elevating Tactical DDD Patterns Through Object Calisthenics
Â
Intrusion Detection and Marking Transactions in a Cloud of Databases Environment
International Journal on Cloud Computing: Services and Architecture (IJCCSA) Vol. 6, No. 5, October 2016
DOI: 10.5121/ijccsa.2016.6502
INTRUSION DETECTION AND MARKING TRANSACTIONS IN A CLOUD OF DATABASES ENVIRONMENT
Syrine Chatti and Habib Ounelli
Department of Computer Sciences, Faculty of Sciences, El Manar, Tunis
ABSTRACT
Cloud computing is a paradigm for large-scale distributed computing that builds on several existing
technologies. A database management system is a collection of programs that enables users to store, modify
and extract information from a database. Databases have now moved to the cloud, but this move introduces
a set of threats that target a cloud of databases system. The unification of transaction-based applications
in these environments also presents a set of vulnerabilities and threats that target a cloud of databases
environment. In this context, we propose an intrusion detection and transaction marking scheme for a
cloud of databases environment.
KEYWORDS
Database system, cloud computing, intrusion detection, transaction marking.
1. INTRODUCTION
The deployment of cloud of databases systems has spread worldwide because of the need for
distributed storage and the large volumes of data handled by business applications. The unification
of transaction-based applications in these environments presents a set of threats that can target a
cloud of databases environment.
For example, the execution of malicious sub-transactions may be delayed by rendering the
wireless nodes unavailable, or a fake commit or rollback may be issued by an intruder once the
system is compromised. As a consequence of these malicious actions, critical data may be lost.
Available intrusion detection systems do not take the particularities of these environments (database,
cloud and transaction systems) into account, either to detect or to mark attacks.
To overcome these shortcomings, we propose an intrusion detection and transaction marking
scheme suited to a cloud of databases system. This approach includes the definition of a
communication model that ensures the exchange of a set of notification messages. The scheme
marks the details of transactions by maintaining valuable information about the dependency
relationships between these transactions and their security status.
The remaining part of the paper is organized as follows. In Section 2, we survey the
existing approaches to intrusion detection in database systems. In Section 3, we give
an overview of the system architecture. In Section 4, the proposed intrusion detection and
marking scheme is presented. In Section 5, simulation results are given to depict both the
behavior and the efficiency of the suggested scheme following a system compromise. Section 6
concludes the paper.
2. RELATED WORK
Several techniques have been proposed to handle different aspects of intrusion detection in a
database environment.
In [2], the authors present a misuse detection system that uses auditing to obtain sources
describing the typical behaviour of the users of the database system. The drawback of this
approach is that it only detects known attacks and is unable to detect new ones.
In [8], Zhong presents database intrusion detection based on mining user query itemsets with
item constraints. In this approach, intrusion detection uses a profile of normal users, stored in
the database, that is suitable for finding unknown attacks. The main problems with this
approach are the long execution delay and the increased rate of false negative alarms.
In [7], a data mining approach for database intrusion detection is presented. Dependency
relationships between data items are determined: if one item is modified, then other items
related to it are also modified. The authors determine dependencies among data items, where data
dependency refers to the access correlations among data items, and express these dependencies
as association rules. Transactions that do not follow any of the mined data dependency
rules are marked as malicious. The problem with this approach is that it treats all
attributes with the same significance, which is not always true in real applications.
In database intrusion detection using weighted sequence mining [6], the authors consider that some
attributes are more sensitive to malicious modifications than others. Any transaction that
does not follow the mined dependency rules is identified as malicious. The most important problem
with this approach is the identification of proper support and confidence values.
In intrusion detection of malicious activity in RBAC-administered databases [1], the authors
present a role-based approach to detecting malicious behaviour under role-based access control. A
classifier is used to build role profiles and detect abnormal behaviour: it estimates the role of
a given user and compares it with the user's actual role; if they differ, an alarm is triggered.
This approach works well for databases that use role-based access control, and it covers the
insider threat scenario. However, dependency relationships are not detected in this approach,
so some attacks on the database go undetected.
3. SYSTEM OVERVIEW
In this section, we introduce the architecture of the proposed system, which includes the cloud
database controller (CDC), the participant nodes and the cloud database storage (CDS), as presented
in Figure 1.
Figure 1. The architecture of a cloud of databases system
• CDC: It is the main component handling the CDSs' interactions. A common pool is
defined to manage CDSs, and CDS blocks are allocated to host systems from it. The CDC
manages the data handled by the CDSs as well as the transactions running in the system.
To serve incoming requests, the CDC operates a set of structures:
o The first is a mapping table that holds each transaction and its originator, as cited in
[3].
o The second is a table that contains, for each entry, the adequate data center.
o For each data center, a structure provides a mapping table that contains each
participant node and its suitable CDS.
• Participant node: The participant nodes provide transaction services, both for data
centers located in different geographical locations and for the corporate data center.
This link is required for easy and complete access to the database on cloud services.
• CDS: It produces a detailed report on each step used to access data, allowing
performance to be improved precisely.
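The three CDC lookup structures described above can be sketched as a few nested maps. This is a minimal illustration only; the class and field names (CdcDirectory, originatorOf, dataCenterOf, cdsOf) are our own assumptions, not identifiers from the paper.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the CDC's lookup structures (names are illustrative).
class CdcDirectory {
    // Transaction id -> originator (client) id, as cited in [3].
    final Map<String, String> originatorOf = new HashMap<>();
    // Sub-transaction id -> adequate data-center id.
    final Map<String, String> dataCenterOf = new HashMap<>();
    // Data-center id -> (participant-node id -> its suitable CDS id).
    final Map<String, Map<String, String>> cdsOf = new HashMap<>();

    void registerTransaction(String txId, String originator) {
        originatorOf.put(txId, originator);
    }

    void route(String subTxId, String dataCenter) {
        dataCenterOf.put(subTxId, dataCenter);
    }

    void bind(String dataCenter, String participantNode, String cds) {
        cdsOf.computeIfAbsent(dataCenter, k -> new HashMap<>())
             .put(participantNode, cds);
    }
}
```

With these maps, the CDC can resolve, for any sub-transaction, the data center that should process it and the CDS behind each participant node.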
3.1. System overview
When receiving a transaction from the user, the CDC divides the transaction into sub-transactions,
checks its mapping table and then decides which data centers are best suited to process the
sub-transactions.
After identifying the adequate data centers, the sub-transactions are transferred to them. When
receiving a sub-transaction, the participant node processes it and responds to the CDC by issuing
a request for a local commit or rollback.
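The commit step above can be sketched as an all-or-nothing vote: each participant node answers with a local commit or rollback request, and the CDC commits globally only if every participant voted to commit. The names (Vote, globalDecision) are our own illustrative assumptions.

```java
import java.util.List;

// Illustrative sketch of the CDC's global commit decision over the
// participant nodes' local commit/rollback requests (names are assumptions).
class CommitProtocol {
    enum Vote { COMMIT, ROLLBACK }

    // A single rollback vote forces a global rollback.
    static Vote globalDecision(List<Vote> participantVotes) {
        for (Vote v : participantVotes)
            if (v == Vote.ROLLBACK) return Vote.ROLLBACK;
        return Vote.COMMIT;
    }
}
```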
The cloud of databases system allows multiple copies of data to be stored at various
physical locations throughout the system. Replicating data across multiple sites is desirable for a
variety of reasons. To provide disaster tolerance, copies of data stored at different physical
locations should be available: when a copy becomes unavailable because of a CDS failure or a
network breakdown, the replica located at another site can still give access to the data. In such
an architecture, the replicas provide benefit until failure.
For data replication, the CDC has a mapping table that holds, for each entry, the
transaction and its CDS. A replication strategy is imposed for this purpose: the CDC
replicates the requested data at the destination, then stores copies in different places. Therefore,
in its mapping table, the CDC records the transaction's data, the CDS and the location of the
replicated data.
3.2. The attacks that target components in our environment
In this section, we present the attacks targeting the components of a cloud of databases
system.
1. Against the CDC: In order to gain access to the service, the intruder performs a SQL
injection attack against the authentication server to gather the information required during
the authentication process. Once this step is performed, the intruder uses these parameters
to subscribe to a session, and the CDC authorizes the malicious user to participate in it.
After performing the SQL injection attack, the intruder tests the case, issues a SELECT and
stores the user names and passwords. This attack exposes passwords and the identifiers
associated with them. Once a hacker has obtained a password, he modifies existing SQL
commands to expose hidden data and gain access to secret information. After
gaining access to the CDC as a super-user with the access parameters obtained from the
SQL injection attack, the attacker can target the integrity of files. In fact, when a
transaction is initialized, the attacker will try to change the mapping table of the CDC, which
contains, for each entry, the transaction and its originator, the server and its associated
CDS. In addition to the mapping table, the attacker can change the set of associated
blocks in the defined CDS. He can therefore attack the integrity of files and modify the
association between a file and its appropriate CDS.
2. Against the participant nodes: The participant node, which is responsible for processing
sub-transactions, can be compromised when an intruder defines an attack scenario involving
a transaction and its sub-transactions. The intruder may carry out the malicious
sub-transactions by rendering the participant nodes unavailable in order to defeat the
available intrusion detection systems. For this reason, an authentication phase between the
CDC and the participant nodes should be implemented, as depicted in Figure 2. In addition,
the participant node itself may become malicious: the intruder can send a malicious file that
gains access to the machine, then installs tools and controls the transactions. Thus, a fake
commit or rollback may be issued by the intruder once the participant node is compromised.
Consequently, critical data may be lost.
Figure 2. Authentication between the CDC and participant node
4. THE MARKING SCHEME FOR TRANSACTION-BASED APPLICATIONS
This section introduces the marking scheme and gives an overview of the semantics and
characteristics of our dependency graph.
4.1. The marking scheme for transaction-based applications in a WSAN environment
In order to mark intrusions in a transaction-based WSAN environment, a set of
information is collected about the running transactions and a dependency graph is generated. The
structures and modules involved, depicted in Figure 3 as cited in [3], are described as follows:
Figure 3. The proposed system architecture
• Structures: In this part we introduce the structures needed for the proposed marking
scheme.
o Transaction's status table: This table keeps, for each transaction, the appropriate
information about its sub-transactions (it is located at the SVC side). Each entry
includes the fields: transaction-id, sub-transaction-id, processing node-id and
decision. The decision field may be legitimate or malicious, and the transaction
status indicates whether it is committed locally, committed globally or rolled back.
Moreover, this table includes entries for transactions whose processing has been
deleted after a period of time fixed by the SVC.
o Transaction dependency graph (TDG): The TDG is a directed acyclic graph
formed by nodes and links. The nodes in the proposed graph represent not only
transaction identifiers but also processing nodes and files. In addition to the
nodes, the graph includes two kinds of links: solid lines between nodes represent
parental relationships, while dashed links represent the set of requested data
together with the operations performed on them, such as read (R) and write (W)
operations. The graph also records the transactions' execution schedule.
• Modules: In this section, we illustrate the modules needed for our proposed scheme.
o Violation Rules Monitor (VRM): This component is available at both sides, the
SVC and the PNs. It monitors incoming requests against the available rules. At the
SVC, the VRM checks the incoming requests against the set of rules, then adds its
decision to the transaction's status table. At a processing node, the VRM checks
the incoming sub-transaction against a set of rules, inserts a set of fields into the
mark and forwards it to the marker.
o Rule Generator (RG): The rule database contains dynamic and static rules. The
static rules are initially stored in the database, whereas the dynamic rules are
generated after a new malicious activity is detected. During the analysis of a
request, the VRM can detect breaches that are transparent to the SVC and PN
sides; the RG then produces new dynamic rules.
o Request Handler (RH): Located at the SVC side, it determines the nodes needed
to process the sub-transactions and addresses each sub-transaction to the
adequate PNs for processing. In addition, it records this information in the
transaction's status table.
o Marker: This component is available at both sides, the SVC and the PNs. Its basic
function is the generation of the mark. It identifies the TDG components by
collecting information at the mark level, such as the decision, the transaction
identifier and the PN. After the decision is made, the marker inserts the TDG,
in addition to the mark, into the SD mark.
o Dependency Checker (DC): It determines the dependencies between the set of
running transactions. These dependencies may be input/output relationships,
shared access to data, or the execution of several sub-transactions by the same PN.
They are written in a specific format that includes several fields, such as the
transaction identifier and the type of transaction dependency. Moreover, the DC
analyses a set of transactions in order to return both the execution order of the
transactions and the order of the elementary operations within each transaction.
o Graph Generator (GG): It generates the TDG from the dependencies determined
by the DC. An example of a transaction dependency graph is given in Figure 4.
This graph is composed of 5 nodes. Transaction T1 is the parent of T2, which will
be executed after T1 and before T3; T3 depends on T2 and will be executed on
SD3. Another case is presented in this example: T5 depends on T4, which
performs a read in SD3 and a write in SD2. The label w.8.D3 therefore means
that transaction T5 will be executed after the read of T4 in SD3. Several rules are
generated from this graph. For each rule, all malicious combinations related to
the processed transactions are marked at the SVC side. With this format, the
decision of the SVC is not limited to the decisions of the processing nodes; it also
takes into account the dependencies between sub-transactions and the elementary
requests among the running sub-transactions. Rules are generated by a set of
steps that start by analysing the processed transaction; three rules are generated,
and for each rule, all malicious combinations should be added to the available
detection rules.
Figure 4. Transaction dependency graph
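A TDG of the kind shown in Figure 4 can be represented as a directed acyclic graph with labelled edges, where a label is either a parental relationship or a data access such as w.8.D3. The sketch below is illustrative only; the class and method names (Tdg, addDependency, dependents) are our own assumptions.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashMap;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Illustrative sketch of a transaction dependency graph (TDG).
class Tdg {
    static final class Edge {
        final String to;
        final String label; // e.g. "parent" or a data access such as "w.8.D3"
        Edge(String to, String label) { this.to = to; this.label = label; }
    }

    final Map<String, List<Edge>> edges = new HashMap<>();

    void addDependency(String from, String to, String label) {
        edges.computeIfAbsent(from, k -> new ArrayList<>()).add(new Edge(to, label));
    }

    // All transactions reachable from txId, i.e. those whose final decision
    // may be affected if txId turns out to be malicious.
    Set<String> dependents(String txId) {
        Set<String> seen = new LinkedHashSet<>();
        Deque<String> stack = new ArrayDeque<>();
        stack.push(txId);
        while (!stack.isEmpty())
            for (Edge e : edges.getOrDefault(stack.pop(), List.of()))
                if (seen.add(e.to)) stack.push(e.to);
        return seen;
    }
}
```

For Figure 4's example, adding T1 → T2 → T3 as parental edges makes T2 and T3 the dependents of T1, which is exactly the set that must be re-examined when T1 is marked malicious.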
4.2. Marking scheme
In this section, we illustrate intrusion marking and describe the structure of the mark.
• The mark structure: In order to mark attacks against transaction-based applications in a
WSAN environment, a marking decision is made by the SVC and the PNs based on the content
of the transaction dependency graph. In transaction-based traffic, the marking strategy is
applied during the processing phase of a transaction. When receiving a transaction, the SVC
divides it into sub-transactions. The RH determines the nodes needed to process the
sub-transactions and addresses each sub-transaction to the adequate processing nodes. Then, it
records this information in the transaction's status table by inserting several fields, such as:
transaction-id, sub-transaction-id and the ids of the processing nodes. When the processing
nodes receive the sub-transactions, the VRM checks them against a set of rules. After that, the
VRM inserts its decision about the sub-transaction and the PN, the id of the sub-transaction
and the id of the PN, in addition to a timestamp that prevents replay attacks. Then, it sends a
secured notification informing the SVC of its decision. On the other hand, the SVC checks the
TDG in order to take a decision about the running transactions and updates it. The marker
inserts the mark into the SD mark, considering the decision of the VRM and the TDG. After the
initiation phase, the SVC may have a decision about several sub-transactions. Therefore, in
order to simplify the PN's task, it forwards the decision, the operation and the ids of the
sub-transactions related to the sub-transaction concerned. The mark structure thus has the
form ((decision; operation; id transaction); id transaction). We also treat the case of shared
access to data and levels of priority, which arises when n transactions access the same data
through the same or different PNs:
o With the same PN: the SVC sends to the PN {(operation; id transaction; file; *)}, where *
means that this chain can be repeated n times and the order of appearance of the chains is the
schedule of the transactions' execution.
o With different PNs: the SVC sends {[#]; op; id transaction; file}, where # means that the
adequate PN will take the decision about the transaction, considering the dependency
relationships and the decision.
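The mark format ((decision; operation; id transaction); id transaction) can be captured by a small value class. This is a minimal sketch; the class name (Mark) and field names are our own assumptions, and the encode() output simply serializes the format given above.

```java
// Minimal sketch of the mark structure
// ((decision; operation; id transaction); id transaction). Names are illustrative.
class Mark {
    final String decision;      // "legitimate" or "malicious"
    final String operation;     // e.g. "r" or "w"
    final String relatedTxId;   // id of the related sub-transaction
    final String txId;          // id of the sub-transaction being marked

    Mark(String decision, String operation, String relatedTxId, String txId) {
        this.decision = decision;
        this.operation = operation;
        this.relatedTxId = relatedTxId;
        this.txId = txId;
    }

    // Serialize the mark in the paper's textual format.
    String encode() {
        return "((" + decision + ";" + operation + ";" + relatedTxId + ");" + txId + ")";
    }
}
```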
4.3. Communication model
Our proposed scheme needs a communication model for the exchanges between the SVC and the
PNs when sending and receiving notifications. Indeed, when a PN deduces the nature of a
sub-transaction (malicious or legitimate), it sends a notification informing the SVC of its
decision. Conversely, after making its decision, the SVC notifies the processing node. The
notifications sent and received may be attacked by an intruder who can change the decision or
the ids of the transactions and the related sub-transactions. For that purpose, encryption keys
are generated and distributed in both directions. In fact, when the SVC or a PN sends its
decision, it initiates the generation of keys during the notification phase, then sends the
decision in a message encrypted with the public key extracted from the certificate of the
participant node or the SVC.
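The exchange above can be sketched with standard Java cryptography: the sender encrypts its decision with the receiver's public key, and only the holder of the matching private key can read it. As an assumption for the demo, a freshly generated RSA key pair stands in for the key material that the paper extracts from a certificate.

```java
import java.nio.charset.StandardCharsets;
import java.security.PrivateKey;
import java.security.PublicKey;
import javax.crypto.Cipher;

// Hedged sketch of the encrypted notification exchange between SVC and PN.
class SecureNotification {
    // Encrypt a short decision string with the receiver's public key.
    static byte[] encrypt(String decision, PublicKey receiverKey) throws Exception {
        Cipher c = Cipher.getInstance("RSA"); // PKCS#1 v1.5 padding by default
        c.init(Cipher.ENCRYPT_MODE, receiverKey);
        return c.doFinal(decision.getBytes(StandardCharsets.UTF_8));
    }

    // Decrypt with the receiver's private key.
    static String decrypt(byte[] ciphertext, PrivateKey receiverKey) throws Exception {
        Cipher c = Cipher.getInstance("RSA");
        c.init(Cipher.DECRYPT_MODE, receiverKey);
        return new String(c.doFinal(ciphertext), StandardCharsets.UTF_8);
    }
}
```

In a real deployment the public key would come from the peer's certificate, as the paper states, and the message would also carry the timestamp used against replay attacks.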
4.4. Intrusion detection and transaction marking in a cloud of databases environment
In this section, we adapt and enhance the idea proposed in the previous sections in order to
detect intrusions and mark transactions.
At the participant node side: when receiving a sub-transaction, the participant node checks it
against three elements: source, size and encoding.
Source: In order to protect the data, the participant node grants access to the database based on
the originating IP address of each sub-transaction. The participant node specifies which IP
address ranges are allowed. When a client requests access to the database, the participant node
checks the originating IP address of the request against the full set of rules: if the IP address
falls within one of the ranges specified in the set of database rules, the connection to the
database is allowed. Otherwise, the participant node rolls back the sub-transaction and notifies
the CDC that the connection has failed, as shown in Figure 5.
Figure 5. Source identification
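The source check can be sketched as a simple IPv4 range test. The names (SourceCheck, allowed) are illustrative assumptions; a production system would use CIDR-based allow-lists rather than this toy low/high range.

```java
// Sketch of the source check: accept a sub-transaction only when its
// originating IPv4 address lies inside an allowed range; otherwise the
// participant node rolls back and notifies the CDC. Names are illustrative.
class SourceCheck {
    // Convert a dotted-quad IPv4 address to an unsigned 32-bit value.
    static long toLong(String ip) {
        String[] p = ip.split("\\.");
        return (Long.parseLong(p[0]) << 24) | (Long.parseLong(p[1]) << 16)
             | (Long.parseLong(p[2]) << 8)  |  Long.parseLong(p[3]);
    }

    // true -> connection allowed; false -> roll back and notify the CDC.
    static boolean allowed(String ip, String rangeLow, String rangeHigh) {
        long a = toLong(ip);
        return toLong(rangeLow) <= a && a <= toLong(rangeHigh);
    }
}
```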
Size: When receiving a sub-transaction, the participant node may find that the size of the
request cannot be supported by the server; in that case, the participant node rolls back the
sub-transaction. Memory thresholds must be defined for incoming transactions in order to
identify those that can be considered reliable against these quotas.
Encoding: To prevent SQL injection, the participant node checks each incoming sub-transaction;
when the request is encoded, the participant node notifies the CDC that the sub-transaction is
malicious and, consequently, rolls it back. Figure 6 shows an example of a SQL injection attack.
Figure 6. SQL injection attack
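The encoding check can be illustrated with a naive pattern filter that flags percent-encoded input or classic injection fragments. This is a toy heuristic of our own for the example only, not the paper's rule set and not a complete SQL injection defense.

```java
import java.util.regex.Pattern;

// Naive illustration of the encoding check: flag sub-transactions whose text
// is percent-encoded or contains well-known injection fragments.
class EncodingCheck {
    // %XX escapes, ' OR '1'='1, UNION SELECT, or comment markers.
    private static final Pattern SUSPICIOUS = Pattern.compile(
        "%[0-9a-fA-F]{2}|(?i)('\\s*or\\s*'1'\\s*=\\s*'1|union\\s+select|--)");

    static boolean isMalicious(String subTransactionText) {
        return SUSPICIOUS.matcher(subTransactionText).find();
    }
}
```

A flagged sub-transaction would then be rolled back and reported to the CDC, as described above.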
At the CDC side: when receiving a transaction, the CDC divides it into sub-transactions. The RH
determines the data centers needed in order to assign each sub-transaction to the adequate one.
Before forwarding the sub-transactions to the data centers, the VRM checks them against a set of
rules and then inserts its decision. Two cases arise:
o If the sub-transaction is malicious, the marker inserts the mark into the SD mark,
considering the decision of the VRM.
o If the sub-transaction is legitimate:
▪ In Figure 7, a malicious action in sub-transaction t1 affects the decision of the CDC. The
VRM checks sub-transaction t2 against the set of rules and concludes that it is legitimate.
Even though this first decision is legitimate, t2 will be considered malicious because it
depends on t1, in addition to the elementary request. In other words, if t2 depends on t1, t1
is malicious and performs a write operation, and the first decision of the CDC on the
sub-transaction is legitimate with an elementary operation that is a read or a write, then the
final decision of the CDC is malicious.
Figure 7. Write operation for dependent transactions
▪ In this example, we have two sub-transactions t2 and t3. We assume that t3 depends on t2,
where t2 is malicious and performs a read operation. The first decision of the CDC about
sub-transaction t3 is legitimate, and the elementary operation is a read or a write. The final
decision of the CDC is therefore legitimate because, even if t2 is malicious, this does not
mean that it affects the CDS, as shown in Figure 8.
Figure 8. Read operation for dependent transactions
▪ According to the TDG, parental relationships exist. Indeed, in the figure, T1 is the parent
of t2 and t3. If T1 is malicious, then t2 and t3 are malicious. This relationship holds for a
transaction and its related sub-transactions, as described in Figure 9.
Figure 9. Parent-Child relationships
o After this first phase, the CDC forwards the legitimate sub-transactions to the adequate data
center. The participant node checks each of them against the three elements: source, size and
encoding. Then it sends a secured notification informing the CDC of its decision. Suppose that
the participant node decides that a sub-transaction is malicious and that this detection
corresponds to a new malicious activity. Since the rule database contains dynamic rules, the RG
then updates the set of rules by adding these breaches, which will be transparent to the CDC
side.
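The decision rules illustrated in Figures 7 to 9 can be condensed into one function: a sub-transaction becomes malicious when a malicious parent exists, or when it depends on a malicious sub-transaction that performed a write; a malicious read-only dependency leaves the first decision unchanged. The enum and method names below are our own illustrative assumptions.

```java
// Sketch of the CDC's final-decision rule for a sub-transaction t that
// depends on a sub-transaction d (names are assumptions, not from the paper):
//  - a malicious parent makes t malicious (Figure 9);
//  - a malicious d that performed a WRITE makes t malicious regardless of
//    t's own first decision (Figure 7);
//  - a malicious d that only performed a READ leaves t's first decision
//    unchanged (Figure 8).
class FinalDecision {
    enum Decision { LEGITIMATE, MALICIOUS }
    enum Op { READ, WRITE }

    static Decision decide(Decision firstDecision, boolean depMalicious,
                           Op depOp, boolean depIsParent) {
        if (depMalicious && (depIsParent || depOp == Op.WRITE))
            return Decision.MALICIOUS;
        return firstDecision;
    }
}
```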
4.5. Case Study
This section illustrates intrusion detection and transaction marking in a cloud of databases
environment through a case study that describes the architecture of the proposed marking scheme,
as shown in Figure 10.
Figure 10. The proposed architecture for CDC and Participant nodes
In order to illustrate intrusion detection and transaction marking in a cloud of databases
environment, we consider a cloud of databases environment that includes a set of components
providing services to clients. This system consists of two branches of a bank, as described in
[4].
A client wants to move his account from bank A to bank B. A transaction T is sent to the CDC,
which divides T into two sub-transactions {t1, t2}. The RH determines the data center needed,
then records this information in the transaction's status table by inserting several fields such
as: transaction-id, sub-transaction-id and the data center needed. The VRM checks the
sub-transactions t1 and t2 against a set of rules. The dependency checker determines the
dependencies between the running sub-transactions. In our example, t2 depends on t1, as shown in
Figure 11, because:
t1 = {o1 = read old account, o2 = delete old account}
t2 = {write o1}
Figure 11. The dependency relationships in the case study
After that, the dependency checker specifies that this is an input/output dependency. Then, the
graph generator generates the TDG by considering the relationships between transactions, as
presented in Figure 12.
Figure 12. Transaction dependency graph
If t1 and t2 are legitimate, the CDC sends the two sub-transactions to participant node 1 and
participant node 2, respectively, so that they are checked against the three elements: source,
size and encoding. The participant nodes then notify the CDC of the decisions taken. Otherwise,
if t1 is malicious and performs a write operation, namely deleting the old account, then t2 will
also be considered malicious, considering the VRM and the TDG. Finally, the marker will mark
both sub-transactions as malicious.
5. SIMULATION OF WSAN AND CLOUD OF DATABASES ENVIRONMENTS
This section describes the simulation environment and the simulation model used to evaluate the
efficiency of our marking and intrusion tolerance schemes in WSAN and cloud of databases
environments.
5.1. Simulation Model
In the WSAN environment: In this section, we describe the scenario used to simulate the performance of our marking intrusion scheme. We consider a WSAN environment that integrates an SVC, six PNs and three SDs. When receiving transactions, the SVC divides them into sub transactions and assigns each sub transaction to the adequate PNs for processing. Coverage problems or a lack of resources can arise while transactions are being processed. In this case, the SVC initiates a selection phase for a new SVC among the available nodes. The selection of the new SVC is based on performance criteria, including a free processing unit, free memory and a high signal level. Finally, the old SVC informs the rest of the processing nodes about the assignment of the new SVC.
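The selection phase described above can be sketched as follows; the class, field and method names are assumptions made for illustration, not the authors' implementation:

```java
import java.util.*;

// Sketch of the new-SVC selection: among the available nodes, exclude
// those without a free processing unit, then rank the rest by free
// memory and, on ties, by signal level.
public class SvcSelection {

    static class Node {
        final String id; final boolean cpuFree;
        final int freeMemory; final int signalLevel;
        Node(String id, boolean cpuFree, int freeMemory, int signalLevel) {
            this.id = id; this.cpuFree = cpuFree;
            this.freeMemory = freeMemory; this.signalLevel = signalLevel;
        }
    }

    // Returns the id of the best candidate, or null if none qualifies.
    static String selectNewSvc(List<Node> nodes) {
        return nodes.stream()
                .filter(n -> n.cpuFree)
                .max(Comparator.comparingInt((Node n) -> n.freeMemory)
                               .thenComparingInt(n -> n.signalLevel))
                .map(n -> n.id).orElse(null);
    }

    public static void main(String[] args) {
        List<Node> nodes = Arrays.asList(
                new Node("PN1", true, 256, 3),
                new Node("PN2", false, 512, 5),   // busy CPU: excluded
                new Node("PN3", true, 384, 4));
        System.out.println(selectNewSvc(nodes)); // PN3
    }
}
```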
The SVC must be able to solve these problems in order to guarantee the processing of running transactions. The objective of our simulations is to prove the efficiency of our approach in detecting malicious transactions and then marking them. The critical situation arises when malicious transactions affect the whole system without being detected. The simulation scenario includes the definition of the dependencies between the transactions that will be processed by the WSAN components. In addition, the communication model between the SVC, the PNs and the different distributions needs to be defined in order to illustrate the behavior of the proposed scheme, as mentioned in [3].
In a cloud of databases environment: In this section, we describe the scenario used to simulate
the performance of our marking and intrusion tolerance scheme.
We consider a cloud of databases environment that integrates a CDC and two data centers that include, on their side, the participant nodes and the CDCs. Algorithm 1, given below, is used to generate the files contained in the CDC, as cited in [5].
Map<File, mxCell> storageMap = new HashMap<>();
CloudDatabaseStorage[] storage = new CloudDatabaseStorage[] {
    new CloudDatabaseStorage(new File("F1"), new File("F2")),
    new CloudDatabaseStorage(new File("F3"), new File("F4"), new File("F5")),
    new CloudDatabaseStorage(new File("F6")),
    new CloudDatabaseStorage(new File("F7"), new File("F8"), new File("F9"))
};
for (CloudDatabaseStorage cds : storage)
    for (File f : cds.getFiles())
        storageMap.put(f, (mxCell) transactionGraph.insertVertex(
            transactionGraphParent, f + "", f, 0, 0, 60, 40));
Algorithm 1. Generating Files
The TDG generates rules in a way that takes into consideration the transaction dependency and the dependency between subtransactions, in addition to the relationships between parent and child transactions and the relationships between different transactions. In Algorithm 2, the dependency and the parental relationships are described as follows.
Map<SubTransaction, mxCell> transactionMap = new LinkedHashMap<>();
Set<SubTransaction> transactionKeys = transactionMap.keySet();
Document transactionDoc = DocumentBuilderFactory.newInstance()
    .newDocumentBuilder().parse(new File(args[0]));
transactionDoc.getDocumentElement().normalize();
NodeList transactionFlow = transactionDoc.getElementsByTagName("SubTransaction");
for (int i = 0; i < transactionFlow.getLength(); i++) {
    Element subTransactionTag = (Element) transactionFlow.item(i);
    String id = subTransactionTag.getAttribute("id");
    SubTransaction subTransaction =
        SubTransaction.findSubTransactionById(transactionKeys, id);
    if (subTransaction == null)
        subTransaction = new SubTransaction(id);
    int j = 0;
    for (NodeList operationsFlow = ((Element) subTransactionTag
             .getElementsByTagName("Operations").item(0))
             .getElementsByTagName("Operation");
         j < operationsFlow.getLength(); j++) {
        Element operationTag = (Element) operationsFlow.item(j);
        new Operation(subTransaction,
            operationTag.getAttribute("id"),
            Operation.Action.valueOf(operationTag
                .getElementsByTagName("Action").item(0).getTextContent()),
            CloudDatabaseStorage.findFileByName(storageMap.keySet(),
                operationTag.getElementsByTagName("File").item(0).getTextContent()));
    }
    Element childrenTag =
        (Element) subTransactionTag.getElementsByTagName("Children").item(0);
    if (childrenTag != null) {
        NodeList children = childrenTag.getElementsByTagName("ChildTransaction");
        for (j = 0; j < children.getLength(); j++) {
            id = ((Element) children.item(j)).getAttribute("id");
            SubTransaction child =
                SubTransaction.findSubTransactionById(transactionKeys, id);
            if (child == null) {
                child = new SubTransaction(id);
                transactionMap.put(child, null);
            }
            subTransaction.addChild(child);
        }
    }
    if (transactionMap.containsKey(subTransaction))
        transactionMap.remove(subTransaction);
    transactionMap.put(subTransaction, (mxCell) transactionGraph.insertVertex(
        transactionGraphParent, subTransaction + "", subTransaction, 0, 0, 60, 40,
        "shape=ellipse;perimeter=ellipsePerimeter"));
}
Algorithm 2. The dependency and the parental relationships
The objective of our simulation is to prove the efficiency of our approach in ensuring the serialization of running transactions.
5.2. Simulation Results
In this section, we aim to evaluate the efficiency of the proposed marking scheme in addition to
the intrusion detection capabilities based on the generated marks.
In the WSAN environment:
In this section, we present our approach to perform the simulation work and obtain the
experimental results. We aim to evaluate the efficiency of the proposed marking scheme in
addition to the intrusion detection capabilities as described in [3].
The intrusion detection capabilities of the proposed scheme: The detected malicious transactions depend on the available detection rules, on the complexity of the dependency scheme between sub transactions and on the number of transactions to be processed by the WSAN components. Indeed, the available detection rules may increase with the dependency relationships between sub transactions and, as a result, the number of detected malicious transactions increases. In this simulation, we have considered two types of detection rules: static and dynamic. In the static mode, the available detection rules are a set of rules that specify a set of malicious actions on the resources defined for the simulation. In contrast, the dynamic rules may be attached to newly added resources or related to actions on dependent transactions; they are therefore generated during the simulation as the number of transactions and the dependencies between sub transactions increase. Figure 13 illustrates the variation of the number of detected malicious transactions as a function of the total number of malicious transactions operated in the system, including the legitimate transactions and the malicious ones, in addition to the dependency relationships between them. Moreover, Figure 13 compares the number of malicious transactions detected with dynamic rules to the number detected with static rules. Indeed, the first curve represents the number of malicious transactions among all the transactions operated in the system. The second and the third curves represent the number of detected malicious transactions compared to the total number of malicious ones; this number depends on the type of rules available, either dynamic or static. As depicted in Figure 13, 30 malicious transactions are detected using dynamic rules, while the number detected with static rules is 24. Also, when we have 10 transactions, we can detect only 2 malicious transactions with static rules, against 4 malicious transactions with dynamic rules. This observation illustrates the effectiveness of the dynamic rules, since the number of detected malicious transactions is higher than the one obtained with static rules.
Figure 13. Malicious transactions detected compared to the total of malicious transactions
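The two rule modes described above can be sketched as follows; the class and method names are illustrative assumptions. The static set is fixed before the simulation, while dynamic rules are added as new resources and dependent transactions appear:

```java
import java.util.*;

// Sketch of static versus dynamic detection rules. A rule pairs an
// action with a resource; the static set is fixed at start-up, while
// dynamic rules are generated during the simulation.
public class DetectionRules {

    private final Set<String> rules = new HashSet<>();

    DetectionRules(Collection<String> staticRules) { rules.addAll(staticRules); }

    // Dynamic mode: a rule is attached to a newly added resource or to
    // an action on a dependent transaction while the simulation runs.
    void addDynamicRule(String action, String resource) {
        rules.add(action + ":" + resource);
    }

    boolean isMalicious(String action, String resource) {
        return rules.contains(action + ":" + resource);
    }

    public static void main(String[] args) {
        DetectionRules r = new DetectionRules(Arrays.asList("delete:account"));
        System.out.println(r.isMalicious("delete", "account"));   // true (static rule)
        System.out.println(r.isMalicious("write", "newResource")); // false (not covered)
        r.addDynamicRule("write", "newResource");
        System.out.println(r.isMalicious("write", "newResource")); // true (dynamic rule)
    }
}
```

This is why the dynamic mode detects more malicious transactions: the rule set grows to cover resources the static set never knew about.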
Evaluation of the intrusion detection capabilities based on the marking scheme: In this simulation, we study the malicious transactions detected with marking support compared to those detected without marking. When receiving a transaction, the SVC checks it against a set of rules. After that, it verifies the dependencies between the transactions in order to take a decision about the running transactions. Then, the SVC inserts its decision and forwards each sub transaction to the adequate PNs. When the PNs receive the sub transactions, they check them against a set of rules. Upon verification, the PNs insert their decision about the verified sub transactions as well as the ids of both the considered sub transaction and the PN. Then, a notification is sent to the SVC informing it of the PN's decision. In this simulation, we can observe that the use of the marking scheme increases the number of detected intrusions. The number of malicious transactions detected with marking support is greater than the number detected without marking support when using the dynamic detection rules. Therefore, we can conclude that the marking support enhances the detection capabilities of the proposed scheme. In our simulation, for 36 malicious transactions, we have detected 32 malicious transactions with marking support and 30 without marking. Figure 14 illustrates the efficiency of our proposed marking concept in detecting malicious transactions. Indeed, Figure 14 presents three curves. Firstly, the grey curve shows the total number of malicious transactions. Secondly, the orange curve presents the number of malicious transactions detected without marking support; compared with the first curve, the gap between the total number of malicious transactions and the detected ones is significant. Finally, the blue curve presents the malicious transactions detected with marking support. The blue curve approaches the orange one and, consequently, we can confirm the efficiency of the marking approach.
Figure 14. Detection capabilities with/without marking
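The mark a PN inserts after verification, as described above, can be sketched as a small record; the field and method names are assumptions made for illustration:

```java
// Sketch of the mark inserted by a participant node: its decision plus
// the ids of the checked sub transaction and of the PN itself, which is
// then sent to the SVC as a notification.
public class Mark {
    final String subTransactionId;
    final String pnId;
    final boolean malicious;

    Mark(String subTransactionId, String pnId, boolean malicious) {
        this.subTransactionId = subTransactionId;
        this.pnId = pnId;
        this.malicious = malicious;
    }

    // Notification string forwarded to the SVC with the PN's decision.
    String notifySvc() {
        return "PN " + pnId + " marks " + subTransactionId
             + (malicious ? " as malicious" : " as legitimate");
    }

    public static void main(String[] args) {
        System.out.println(new Mark("t2", "PN1", true).notifySvc());
        // prints "PN PN1 marks t2 as malicious"
    }
}
```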
In a cloud of databases environment:
The transaction dependency graph: In this simulation, we present the transaction dependency graph obtained in Figure 15.
Figure 15. Transaction dependency graph
Figure 15 illustrates the simulation result of the TDG. Indeed, this graph is composed of seven nodes that describe the subtransactions. The subtransaction T1 is the parent of T2, which will be executed after T1 and before T3; T3 depends on T2 and will be executed on file 6. Another case is T7, which depends on T6: T6 has a write execution on file 2 and T7 a read execution on file 7, and T7 will perform a read on file 2 after the write of the subtransaction T6 on file 2. The label R,12 identifies this read operation in the graph. The TDG generates its rules in a way that takes into consideration the transaction dependency and the dependency between subtransactions, in addition to the relationships between parent and child transactions and the relationships between different transactions [5].
Marking intrusion: A simulation result is depicted in Figure 16, where the subtransactions T3 to T7 are colored in red. In fact, the subtransactions T3 and T7 are the first malicious subtransactions according to the set of rules. Then, with the help of the TDG, the other subtransactions also become malicious, as depicted in Figure 16 and described in Algorithm 3. In Figure 16, T3 is the parent of T4 and T6; since T3 is malicious, T4 and T6 become malicious. T5 reads file 8, which is updated by T4; therefore, the subtransaction T5 also becomes malicious.
Figure 16. Marking intrusion
public void markAsMalicious(SubTransaction[] subTransactions) {
    malicious = true;
    for (SubTransaction child : children)
        child.malicious = true;
    // Locate this sub transaction, then propagate to every later one
    // that operates on a shared target.
    int i = 0;
    while (i < subTransactions.length && subTransactions[i] != this)
        i++;
    for (int j = i + 1; j < subTransactions.length; j++)
        for (Operation nextOp : subTransactions[j].operations)
            for (Operation op : operations)
                if (nextOp.getTarget() == op.getTarget())
                    subTransactions[j].markAsMalicious(subTransactions);
}

public boolean searchMaliciousDependency(SubTransaction[] subTransactions) {
    for (int i = 0; i < subTransactions.length && subTransactions[i] != this; i++)
        if (subTransactions[i].malicious)
            for (Operation prevOp : subTransactions[i].operations)
                for (Operation op : operations)
                    if (prevOp.getTarget() == op.getTarget())
                        return true;
    return false;
}
Algorithm 3. Intrusion detection and marking transactions
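The propagation of Algorithm 3 can be exercised with a reduced, self-contained version of the class; the fields are simplified to what the algorithm needs (a child list and a set of target files), so this is a sketch mirroring the listing's names rather than the authors' full code:

```java
import java.util.*;

// Reduced sketch of Algorithm 3: marking a sub transaction as malicious
// propagates to its children and to later sub transactions that touch
// one of the same target files.
public class SubTransaction {
    final String id;
    boolean malicious;
    final List<SubTransaction> children = new ArrayList<>();
    final Set<String> targets = new HashSet<>(); // files touched by its operations

    SubTransaction(String id) { this.id = id; }

    void markAsMalicious(SubTransaction[] all) {
        malicious = true;
        for (SubTransaction child : children)
            child.malicious = true;
        // Find our position, then taint every later sub transaction
        // that operates on one of our target files.
        int i = 0;
        while (i < all.length && all[i] != this)
            i++;
        for (int j = i + 1; j < all.length; j++)
            if (!all[j].malicious && !Collections.disjoint(all[j].targets, targets))
                all[j].markAsMalicious(all);
    }

    public static void main(String[] args) {
        // T3 is the parent of T4 and writes file 8; T5 reads file 8:
        // marking T3 taints T4 (child) and T5 (shared target).
        SubTransaction t3 = new SubTransaction("T3");
        SubTransaction t4 = new SubTransaction("T4");
        SubTransaction t5 = new SubTransaction("T5");
        t3.children.add(t4);
        t3.targets.add("F8");
        t5.targets.add("F8");
        t3.markAsMalicious(new SubTransaction[] { t3, t4, t5 });
        System.out.println(t4.malicious + " " + t5.malicious); // true true
    }
}
```

This reproduces the scenario of Figure 16, where marking the first malicious subtransaction taints its children and its dependents.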
6. CONCLUSION
We have proposed a marking scheme for transaction-based applications in Wireless Storage Area Networks that is mainly based on a transaction dependency graph. We have adapted and enhanced this scheme in order to detect intrusions and mark transactions in a cloud of databases environment. The proposed scheme performs intrusion detection and transaction marking by using, first, the rule set and the TDG and, second, the verification elements on the side of the participant nodes. The enhancement consisted in including additional elements related to the processing environment in the cloud of databases. For future research, we plan to find the identity of the user and to enhance the protection of the mark. Another perspective of this work consists in studying intrusion tolerance with marking support for interconnected databases.
REFERENCES
[1] A. Kamra, E. Terzi, E. Bertino, A. Vakali. Intrusion Detection in RBAC-Administered Databases. In Proceedings of the 21st Annual Computer Security Applications Conference (ACSAC), pages 170-182, 2005.
[2] C. Y. Chung, M. Gertz, K. Levitt. DEMIDS: A Misuse Detection System for Database Systems. In Proceedings of Integrity and Internal Control in Information Systems, pages 159-178, 1999.
[3] S. Chatti, H. Ounelli. Marking Intrusion of Transaction-Based Applications in WSAN. In 16th International Conference on Network-Based Information Systems (NBiS), 2013.
[4] S. Chatti, H. Ounelli. Transaction Management in WSAN Cloud Environment. In 17th International Conference on Network-Based Information Systems (NBiS), pages 525-530, 2014.
[5] S. Chatti, H. Ounelli. An Intrusion Tolerance Scheme for a Cloud of Databases Environment. In 19th International Conference on Network-Based Information Systems (NBiS), 2016.
[6] A. Srivastava, S. Sural, A. K. Majumdar. Database Intrusion Detection Using Weighted Sequence Mining. Journal of Computers, vol. 1, no. 4, 2006.
[7] Y. Hu, B. Panda. A Data Mining Approach for Database Intrusion Detection. In Proceedings of the ACM Symposium on Applied Computing, pages 711-716, 2004.
[8] Y. Zhong, X. Qin. Database Intrusion Detection Based on User Query Frequent Itemsets Mining with Constraints. In Proceedings of the Third International Conference on Information Security, pages 224-225, 2004.
Authors
Syrine Chatti: Ph.D. student, Faculty of Sciences of Tunis
Habib Ounelli: Professor, Faculty of Sciences of Tunis