This document summarizes a research paper that proposes a framework called CACMAN to address issues with providing certificate authority (CA) services in mobile ad hoc networks (MANETs). CACMAN aims to minimize packet overhead and maximize availability of CA services by allowing clients to cache and share certificates collaboratively. It compares CACMAN to the existing MOCA framework, which relies on threshold cryptography across multiple CA nodes. The document outlines CACMAN's certificate request and response process between clients using local caching and broadcasting, with the goal of reducing load on CA servers and improving availability without compromising security.
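The cache-first request path described above can be sketched as follows. This is an illustrative sketch only: the class name, method names, and data layout are assumptions, not taken from the CACMAN paper.

```python
class CertificateClient:
    """Sketch of CACMAN-style collaborative certificate caching:
    local cache first, then one-hop neighbors, then the CA server."""

    def __init__(self):
        self.cache = {}          # node_id -> certificate
        self.ca_requests = 0     # how often we fell back to the CA

    def store(self, node_id, certificate):
        self.cache[node_id] = certificate

    def lookup(self, node_id, neighbors, ca):
        # 1. Try the local cache first (no packets sent at all).
        if node_id in self.cache:
            return self.cache[node_id]
        # 2. Broadcast to one-hop neighbors that also cache certificates.
        for peer in neighbors:
            cert = peer.cache.get(node_id)
            if cert is not None:
                self.cache[node_id] = cert   # cache for future requests
                return cert
        # 3. Only as a last resort, query the CA itself.
        self.ca_requests += 1
        cert = ca[node_id]
        self.cache[node_id] = cert
        return cert
```

The point of the sketch is the ordering: every cache or neighbor hit is a request the CA servers never see, which is how the framework reduces CA load and keeps the service available when CA nodes are unreachable.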
This document proposes an Expedite Message Authentication Protocol (EMAP) for vehicular ad hoc networks (VANETs) that aims to significantly decrease message loss ratio due to message verification delay compared to conventional authentication methods using certificate revocation lists (CRLs). EMAP replaces the time-consuming CRL checking process with a more efficient revocation checking process using hash-based message authentication codes (HMACs) shared only between valid vehicles. It also uses a novel probabilistic key distribution method to securely share and update keys.
EMAP: Expedite Message Authentication Protocol for Vehicular Ad Hoc Networks (IEEEFINALYEARPROJECTS)
EMAP is an expedited message authentication protocol proposed for vehicular ad hoc networks (VANETs). It replaces the time-consuming process of checking large certificate revocation lists (CRLs) with a more efficient revocation checking process using hashed message authentication codes (HMACs). Only non-revoked vehicles can securely share and update the secret HMAC key. EMAP significantly reduces message loss due to authentication delays compared to conventional CRL-based authentication methods. It provides security properties like entity authentication, message integrity, and resistance to colluding attacks, while enabling fast authentication through its novel key distribution and revocation checking scheme.
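The contrast between the two checks can be sketched in a few lines: the conventional path scans the CRL, while the EMAP-style path verifies a single HMAC computed with the secret key that only non-revoked vehicles hold. The key and tag layout below are illustrative assumptions, not EMAP's exact message format.

```python
import hmac
import hashlib

def revocation_tag(group_key: bytes, message: bytes) -> bytes:
    """Tag computed with the secret key shared only by non-revoked vehicles."""
    return hmac.new(group_key, message, hashlib.sha256).digest()

def fast_check(group_key: bytes, message: bytes, tag: bytes) -> bool:
    """O(1) revocation check: a valid tag implies the sender holds the
    current group key, i.e. has not been revoked and cut off from key updates."""
    return hmac.compare_digest(revocation_tag(group_key, message), tag)

def crl_check(sender_id: str, crl: list) -> bool:
    """Conventional check: scan the certificate revocation list, whose cost
    grows with the number of revoked certificates."""
    return sender_id not in crl
```

The constant-time HMAC verification is independent of how many vehicles have been revoked, which is why replacing the CRL scan shrinks per-message verification delay and, with it, the message loss ratio.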
A Novel Approach for Efficient Resource Utilization and Trustworthy Web Service (CSCJournals)
This document describes RET-WS, a framework for providing Byzantine fault tolerance for web services. RET-WS is built on top of the SOAP messaging framework for interoperability. It uses the Castro and Liskov BFT algorithm with some modifications that allow more efficient resource utilization. The performance evaluation shows RET-WS incurs only moderate overhead given the complexity of its Byzantine fault tolerance mechanisms.
The document proposes a rapid and reliable receiver-based approach for delivering warning messages in vehicular ad-hoc networks. It selects the best receiver node based on both location and energy to ensure timely propagation of warnings without delay. The approach ranks potential receiver nodes based on their distance to an ideal forwarding location and remaining energy. It also uses epidemic routing to further improve performance, where messages are replicated across mobile nodes to increase the probability of reaching destinations. Simulation results showed the proposed method achieved high reliability, enhanced timeliness, and higher delivery ratios with lower overhead compared to existing solutions.
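The ranking step described above can be sketched as a scoring function over distance to the ideal forwarding location and remaining energy. The linear weighting, the field names, and energy normalized to [0, 1] are all illustrative assumptions; the paper only states that both factors drive the ranking.

```python
import math

def rank_receivers(candidates, ideal_point, w_dist=0.5, w_energy=0.5):
    """Score candidate receivers by closeness to the ideal forwarding
    location and by remaining energy; higher score is better."""
    def score(c):
        dist = math.hypot(c["x"] - ideal_point[0], c["y"] - ideal_point[1])
        # Normalize distance into (0, 1]: nearer nodes score higher.
        closeness = 1.0 / (1.0 + dist)
        return w_dist * closeness + w_energy * c["energy"]
    return sorted(candidates, key=score, reverse=True)
```

A node sitting at the ideal forwarding point with a full battery tops the list; a well-placed but depleted node is demoted, which protects timeliness on later hops.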
A New QoS Renegotiation Mechanism for Multimedia Applications (ABDELAAL)
The document proposes an adaptive QoS architecture that uses call rejection notifications as feedback to capture network characteristics and allow multimedia applications to renegotiate QoS requirements dynamically. It aims to improve on existing static and dynamic QoS approaches by using flow-based traffic monitoring and a feedback mechanism with low overhead. Simulation results show the approach admits more calls, improves QoS parameters, and decreases call processing time compared to other methods.
This document provides summaries of 15 networking projects from TTA including the project code, title, description, and reference. The projects cover topics like delay analysis of opportunistic spectrum access MAC protocols, load balancing for network traffic measurement, key exchange protocols for parallel network file systems, anomaly detection in intrusion detection systems, and energy efficient group key agreement for wireless networks. The document provides contact information at the end for obtaining full project papers.
The main problem is to retrieve video content without streaming problems for clients spread across multiple networks. The proposed work improves collaboration among streaming content on server resources in order to improve network performance, implementing network collaboration in a content delivery scenario with a strong reduction of data transferred via servers. Audio and video files are transmitted in blocks to clients through peers using the Network Coding Equivalent Content Distribution scheme. The system tolerates out-of-order arrival of blocks in the stream and is resilient to transmission losses of an arbitrary number of intermediate blocks, without affecting the verifiability of the remaining blocks in the stream. The joint rate control and packet scheduling problem is formulated as an integer program whose objective is to minimize a cost function of the expected video distortion, and candidate cost functions are proposed to provide service differentiation and address fairness among users.
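The integer program mentioned above can be illustrated with a tiny brute-force equivalent: choose which packets to send under a rate budget so that the total expected distortion of the dropped packets is minimized. The packet fields and the drop-distortion cost are illustrative assumptions, and a real formulation would hand this to an IP solver rather than enumerate subsets.

```python
from itertools import combinations

def best_schedule(packets, capacity):
    """Brute-force sketch of the joint rate-control / scheduling problem:
    pick the subset of packets fitting the capacity budget that minimizes
    the summed distortion of everything left unsent."""
    best, best_cost = (), float("inf")
    for r in range(len(packets) + 1):
        for subset in combinations(packets, r):
            if sum(p["size"] for p in subset) > capacity:
                continue  # violates the rate budget
            cost = sum(p["distortion"] for p in packets if p not in subset)
            if cost < best_cost:
                best, best_cost = subset, cost
    return list(best), best_cost
```

Service differentiation then amounts to reweighting each user's distortion terms in the cost function rather than changing the search itself.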
IRJET- HHH- A Hyped-up Handling of Hadoop based SAMR-MST for DDOS Attacks... (IRJET Journal)
This document proposes a novel scheme called SAMR-MST to detect DDoS attacks using Hadoop's MapReduce framework more efficiently. It introduces the SAMR (Self-Adaptive MapReduce) scheduling algorithm, which uses historical task performance data to identify slow tasks and launch backup tasks. It then enhances SAMR with Minimum Spanning Tree clustering to tune SAMR's parameters, improving its ability to find slow tasks. The proposed approach is evaluated against existing MapReduce schedulers like FIFO and LATE, showing it can reduce execution time by up to 25% in heterogeneous cloud environments subject to DDoS attacks.
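The core SAMR idea above, flagging tasks whose progress rate lags the historical norm so backups can be launched, can be sketched in a few lines. The threshold value and the report format are illustrative assumptions; SAMR itself tunes such parameters from history (and SAMR-MST tunes them further via MST clustering).

```python
def find_slow_tasks(progress_rates, history_mean, threshold=0.5):
    """Flag tasks whose progress rate falls below a fraction of the
    historical mean rate; backup copies are launched for flagged tasks."""
    return [task for task, rate in progress_rates.items()
            if rate < threshold * history_mean]
```

In a heterogeneous cluster this matters because a uniform expectation (as in FIFO) or a coarse percentile rule (as in LATE) misclassifies tasks on legitimately slower nodes, wasting backup slots.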
This is a talk I gave on patterns and antipatterns of SOA, based on my understanding and practice and inspired by Ron Jacobs' famous webcast of the same name.
IRJET- An Efficient Dissemination and Dynamic Risk Management in Wireless Sen... (IRJET Journal)
This document proposes a risk assessment framework for wireless sensor networks (WSNs) deployed in a sensor cloud. The framework utilizes a distributed approach to code dissemination, allowing multiple authorized users to directly update sensor node code images without a base station. A secure and efficient proxy signature technique is used to satisfy requirements like integrity, freshness, resistance to denial-of-service attacks, and support for different user privileges. Seven potential attacks on the system are identified and analyzed to assess their impact level. The framework generates a PDF report with risk levels in different regions and solutions to overcome identified risks.
A NEW ARCHITECTURE PROPOSAL TO INTEGRATE OPC UA, DDS & TSN.
Suppliers and end users need a complete solution to address the complexity of future industrial automation systems. These systems require:
• Interoperability to allow devices and independent software applications from multiple suppliers to work together seamlessly
• Extensibility to incorporate future large or intelligent systems
• Performance and flexibility to handle challenging deployments and use cases
• Robustness to guarantee continuity of operation despite partial failures
• Integrity and fine-grained security to protect against cyber attacks
• Widespread support for an industry standard
This document proposes a new technical architecture to build this future. The design combines the best of the OPC Unified Architecture (OPC UA), Data Distribution Service (DDS), and Time-Sensitive Networking (TSN) standards. It will connect the factory floor to the enterprise, sensors to cloud, and real-time devices to work cells. This proposal aims to define and standardize the architecture to unify the industry.
IRJET- Improvement of Security and Trustworthiness in Cloud Computing usi... (IRJET Journal)
The document discusses using fuzzy logic to improve security and trustworthiness in cloud computing. It proposes a cloud services trust model (CSTM) that uses parameters like response time, cost, security, throughput, and speedup ratio to evaluate the trustworthiness of cloud services. The model assesses cloud services from a multidimensional perspective using these quality of cloud service parameters. Practical results show the model can improve quality of cloud services and help customers select services based on their needs and financial constraints.
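The multidimensional assessment above can be sketched as an aggregation over normalized QoS parameters. This is a crisp weighted average standing in for the paper's fuzzy inference; the metric names, weights, and the assumption that every metric is pre-normalized to [0, 1] (with higher meaning better) are all illustrative.

```python
def trust_score(metrics, weights):
    """Aggregate normalized quality-of-cloud-service parameters
    (response time, cost, security, throughput, speed-up ratio)
    into a single trust value in [0, 1]."""
    total_w = sum(weights.values())
    return sum(weights[k] * metrics[k] for k in weights) / total_w
```

A customer with tight financial constraints would simply raise the weight on the cost metric, which is the sense in which the model lets selection follow individual needs.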
A survey on cost effective survivable network design in wireless access network (ijcses)
In today's technology, the essential property of a wireless communication network is to behave as a dependable network. Network dependability incorporates properties such as availability, reliability, and survivability. Although these factors are well handled by protocols for wired networks, a large efficacy gap remains for wireless networks. Further, the wireless access network is complicated by difficulties such as frequency allocation, quality of service, and user requests, and it is severely vulnerable to link and node failures. Survivability is therefore a very important factor to consider when designing a wireless access network, and this paper focuses on discussing it. Survivability is the capability of a wireless access network to perform its dedicated accessibility services even in the case of infrastructure failure. Given available capacity, connectivity, and reliability, the survivable design problem in a hierarchical network is to minimize the overall connection cost for multiple requests. The various failure scenarios of wireless access networks found in the literature are explored. Existing survivability models for access networks, such as shared links, multi-homing, overlay networks, SONET rings, and multimodal devices, are discussed in detail, and a comparison of the various existing survivability solutions is also tabulated.
QOS-APCVS: AN ENHANCED EPS-IMS PCC ARCHITECTURE PROPOSAL TO IMPROVE MOBILE SE... (ecij)
The IP Multimedia Subsystem (IMS) provides the framework architecture that can deliver multimedia services over the Evolved Packet System (EPS). In a busy network, the main failures are service blocking, handover outage, and unsatisfied QoS criteria, so we aim to improve the dependability of dedicated bearer establishment in the EPS-IMS network. In the mobile access network, we consider a service available if it is admitted by the base station and reliable if it is still supported in the handover position; in the core network, we consider a service reliable if its QoS criteria are satisfied. We therefore propose a new QoS provisioning solution. To admit a new application or support a handover service in a busy network, our approach preempts resources based on a utility factor instead of the priority consideration used in existing works. In addition to bandwidth reservation, our solution allows core network reservation to improve the delay of real-time services and minimize the loss rate of non-real-time services.
This document provides 6 IEEE project summaries in the domain of Java and cloud computing/data mining. The summaries are:
1. A decentralized access control scheme for secure cloud data storage that supports anonymous authentication.
2. A performance analysis framework for distributed file systems that qualitatively and quantitatively evaluates performance.
3. Approaches to guarantee trustworthy transactions on cloud servers by enforcing policy consistency constraints.
4. A scalable MapReduce approach for anonymizing large datasets to satisfy privacy requirements like k-anonymity.
5. A resource allocation scheme for a self-organizing cloud that achieves maximized utilization and optimal execution efficiency.
6. An attribute-based encryption framework for flexible
The document defines the key characteristics that distinguish QFabric networking technology from other data center networking technologies. It outlines seven defining characteristics of QFabric including scalability, any-to-any connectivity, low latency, no packet drops under congestion, linear cost and power scaling, support for virtual networks and services, and a distributed modular implementation. These characteristics work together to provide two main capabilities - treating data center resources as fungible pools and connecting resources at high speeds with low latency.
QFabric is a networking technology designed for large-scale data centers that provides several defining characteristics including:
1) Any-to-any connectivity allowing full bandwidth sharing between interfaces without restrictions or blocking.
2) Low latency of 2-10 microseconds between interfaces that scales slowly with size and load.
3) No packet drops during congestion as traffic is throttled to match input and output rates smoothly.
4) Linear scaling of cost and power consumption as the number of interfaces increases in contrast to traditional approaches.
Internet Path Selection on Video QoE Analysis and Improvements (IJTET Journal)
Abstract— We systematically study a large number of Internet paths between popular video destinations and clients to build an empirical understanding of the location, existence, and recurrence of failures. We investigate ways to lower a provider's costs for real-time Internet protocol television services through an IPTV architecture and through intelligent destination-shifting of selected services, and we investigate ways to recover from Quality of Experience degradation. Using Live Television and Video on Demand as examples, we take advantage of the different deadlines associated with each service to obtain these services effectively. We design and implement a prototype packet forwarding module called source-initiated frame restoration, deploy it on nodes, and compare its performance to default Internet routing; we find that source-initiated frame restoration outperforms IP path selection by providing higher on-screen perceptual quality. Failures are mapped to the desired video quality by reconstructing video clips and conducting user surveys. We then examine ways to recover from Quality of Experience degradation by choosing one-hop detour paths that preserve application-specific policies. A path-ranking methodology finds paths carrying high-quality videos at low cost and very low memory usage; by ranking videos according to their quality, size, and cost, the client can retrieve the top-ranking videos.
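The ranking step at the end of the abstract can be sketched as a score over quality, cost, and size. The specific weights and the linear form are illustrative assumptions; the abstract only states that those three attributes drive the ranking, with quality rewarded and cost and size penalized.

```python
def rank_videos(videos, w_quality=0.6, w_cost=0.25, w_size=0.15):
    """Rank candidate videos so that high perceptual quality, low cost,
    and low size come first (all attributes assumed normalized to [0, 1])."""
    def score(v):
        return (w_quality * v["quality"]
                - w_cost * v["cost"]
                - w_size * v["size"])
    return sorted(videos, key=score, reverse=True)
```

The client then fetches from the head of the list, so a cheaper, smaller encoding wins only when its quality deficit is small enough.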
Winds of change from vendor lock-in to meta cloud review 1 (NAWAZ KHAN)
The document proposes a "meta cloud" that would abstract away technical incompatibilities between existing cloud offerings and mitigate vendor lock-in. It would help users find the right cloud services for their use case and support initial deployment and runtime migration between clouds. The meta cloud incorporates design-time and runtime components like a unified API, resource templates, migration recipes, and a knowledge base to store provider/service data. It was presented by four students and describes the existing system, proposed meta cloud architecture and its advantages, hardware/software requirements, modules, UML diagrams, and execution slides.
MMB Cloud-Tree: Verifiable Cloud Service Selection (IJAEMSJORNAL)
In the existing cloud brokerage system, the client has no way to verify the result of the cloud service selection. The cloud broker may be biased in selecting the best Cloud Service Provider (CSP) for a client; a compromised or dishonest broker can unfairly select a CSP for its own advantage by cooperating with the selected CSP. To address this problem, we propose a mechanism to verify the broker's CSP selection result, in which the properties of every CSP are also verified. A trusted third party gathers the clustering result from the cloud broker and also acts as a base station that collects CSP properties in a multi-agent system. Software agents are installed and run on every CSP, monitoring it as the customer's representative inside the cloud. These agents report to the third party, which must be trusted by the CSPs, the customers, and the cloud broker, and which provides transparency by publishing reports to the authorized parties (CSPs and customers).
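The verification step can be sketched as the trusted third party recomputing the best CSP from the agents' monitored scores and comparing it with the broker's answer. The report format and the "higher score is better" convention are illustrative assumptions, not the paper's actual data model.

```python
def verify_selection(broker_choice, agent_reports):
    """Return True iff the broker's chosen CSP matches the best CSP
    recomputed independently from the agents' monitoring reports
    (a dict mapping CSP id to its monitored score)."""
    independent_best = max(agent_reports, key=agent_reports.get)
    return broker_choice == independent_best
```

A broker colluding with an inferior CSP fails this check, because the agents' reports are collected outside the broker's control.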
A scalable and reliable matching service for content-based publish/subscribe (syeda yasmeen)
1) The document proposes SREM, a scalable and reliable event matching service for content-based publish/subscribe systems in cloud computing environments.
2) SREM connects brokers through a distributed overlay called SkipCloud to ensure reliable connectivity and low latency routing.
3) It uses a hybrid multi-dimensional space partitioning technique called HPartition to map subscriptions to multiple subspaces, balancing workloads and providing high matching throughput across servers.
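The HPartition idea in point 3 can be illustrated in one dimension: a range subscription is mapped to every grid cell (subspace) it overlaps, so matching for an event only has to consult the cell the event falls into. The real scheme partitions a multi-dimensional space and hashes cells to broker servers; this 1-D version is a simplified illustration.

```python
def subspaces_for(subscription, cuts):
    """Map a 1-D range subscription (low, high) to the indices of the
    grid cells it overlaps. `cuts` are the partition boundaries, so
    cell i covers the interval [cuts[i], cuts[i + 1])."""
    low, high = subscription
    return [i for i in range(len(cuts) - 1)
            if low < cuts[i + 1] and high > cuts[i]]
```

Because each cell is served by its own broker, narrow subscriptions land in few cells and wide ones are spread across many, which is how the partitioning balances matching workload across servers.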
The document introduces BreakingPoint Resiliency Scores, which provide standardized metrics for evaluating the performance, security, and stability of networks and data centers. The scores are calculated by subjecting devices to real-world traffic loads and security attacks. This identifies weaknesses and determines how many users a system can support without degradation. The scores provide a way to understand how changes will impact infrastructure and to optimize resources.
Quality of Service for Video Streaming using EDCA in MANET (ijsrd.com)
A Mobile Ad-hoc Network (MANET) is a collection of wireless terminals that can dynamically form a temporary network. No fixed infrastructure is required to establish such a network; it is the responsibility of the network nodes to forward each other's packets, so the nodes also act as routers. In such a network, resources are limited and the topology changes dynamically, so providing Quality of Service (QoS) is necessary, and QoS matters most for real-time applications such as video streaming. The IEEE 802.11e standard supports QoS through the EDCA technique, but this technique does not fully meet QoS requirements. This project therefore proposes a modified EDCA technique to enhance QoS for video streaming; it is implemented in NS2 and compared with traditional EDCA.
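For context on what EDCA tunes, the sketch below uses the commonly cited 802.11e default parameter set: each access category gets its own arbitration wait (AIFSN) and contention window, so video contends more aggressively than background traffic. The `backoff_slots` helper is an illustrative simplification of one contention round, not a channel model.

```python
import random

# IEEE 802.11e EDCA access categories: smaller AIFSN / CW => higher priority.
EDCA_PARAMS = {
    "AC_VO": {"aifsn": 2, "cw_min": 3,  "cw_max": 7},     # voice
    "AC_VI": {"aifsn": 2, "cw_min": 7,  "cw_max": 15},    # video
    "AC_BE": {"aifsn": 3, "cw_min": 15, "cw_max": 1023},  # best effort
    "AC_BK": {"aifsn": 7, "cw_min": 15, "cw_max": 1023},  # background
}

def backoff_slots(ac, rng=random.random):
    """Slots waited before a first transmission attempt: the fixed AIFS
    wait plus a random backoff drawn from [0, cw_min]."""
    p = EDCA_PARAMS[ac]
    return p["aifsn"] + int(rng() * (p["cw_min"] + 1))
```

A modified EDCA scheme such as the one proposed here would adjust these per-category parameters (or how the window grows after collisions) so that video streams see shorter, more predictable waits.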
SERVICE LEVEL AGREEMENT BASED FAULT TOLERANT WORKLOAD SCHEDULING IN CLOUD COM... (ijgca)
Cloud computing is a concept of providing user- and application-oriented services in a virtual environment. Users can use the various cloud services dynamically as per their requirements. Different users have different requirements in terms of application reliability, performance, and fault tolerance. Static and rigid fault tolerance policies provide a consistent degree of fault tolerance as well as overhead. In this research work we propose a method to implement dynamic fault tolerance that considers customer requirements. Cloud users have been classified into subclasses according to their fault tolerance requirements, and their jobs have been classified into compute-intensive and data-intensive categories. A varying degree of fault tolerance, consisting of replication and an input buffer, has been applied. From simulation-based experiments we find that the proposed dynamic method performs better than the existing methods.
This document reviews the biodegradability of various chlorinated solvents and aliphatic compounds. It begins with an introduction on the sources, principles, and mechanisms of biodegradation of these compounds. It then provides detailed sections on the biodegradation of specific compound categories, including chloromethanes, chloroethanes, chloroethenes, chlorofluorocarbons, chloroacetic acids, and chloropropanoids. For each category, it discusses degradation in the environment and engineered systems, as well as the relevant microbiology and biochemistry. The conclusion summarizes the different physiological approaches to biodegradation and provides an overview of biodegradability for each compound category.
IRJET- HHH- A Hyped-up Handling of Hadoop based SAMR-MST for DDOS Attacks...IRJET Journal
This document proposes a novel scheme called SAMR-MST to detect DDoS attacks using Hadoop's MapReduce framework more efficiently. It introduces the SAMR (Self-Adaptive MapReduce) scheduling algorithm, which uses historical task performance data to identify slow tasks and launch backup tasks. It then enhances SAMR with Minimum Spanning Tree clustering to tune SAMR's parameters, improving its ability to find slow tasks. The proposed approach is evaluated against existing MapReduce schedulers like FIFO and LATE, showing it can reduce execution time by up to 25% in heterogeneous cloud environments subject to DDoS attacks.
This is a talk I gave on patterns and antipatterns of SOA, based on my understandings and practices and inspired by Ron Jacobs famous webcast by the same name.
IRJET- An Efficient Dissemination and Dynamic Risk Management in Wireless Sen...IRJET Journal
This document proposes a risk assessment framework for wireless sensor networks (WSNs) deployed in a sensor cloud. The framework utilizes a distributed approach to code dissemination, allowing multiple authorized users to directly update sensor node code images without a base station. A secure and efficient proxy signature technique is used to satisfy requirements like integrity, freshness, resistance to denial-of-service attacks, and support for different user privileges. Seven potential attacks on the system are identified and analyzed to assess their impact level. The framework generates a PDF report with risk levels in different regions and solutions to overcome identified risks.
A NEW ARCHITECTURE PROPOSAL TO INTEGRATE OPC UA, DDS & TSN.
Suppliers and end users need a complete solution to address the complexity of future industrial automation systems. These systems require:
• Interoperability to allow devices and independent software applications from multiple suppliers to work together seamlessly
• Extensibility to incorporate future large or intelligent systems
• Performance and flexibility to handle challenging deployments and use cases
• Robustness to guarantee continuity of operation despite partial failures
• Integrity and fine-grained security to protect against cyber attacks
• Widespread support for an industry standard
This document proposes a new technical architecture to build this future. The design combines the best of the OPC Unified Architecture (OPC UA), Data Distribution Service (DDS), and Time-Sensitive Networking (TSN) standards. It will connect the factory floor to the enterprise, sensors to cloud, and real-time devices to work cells. This proposal aims to define and standardize the architecture to unify the industry.
IRJET- Improvement of Security and Trustworthiness in Cloud Computing usi...IRJET Journal
The document discusses using fuzzy logic to improve security and trustworthiness in cloud computing. It proposes a cloud services trust model (CSTM) that uses parameters like response time, cost, security, throughput, and speedup ratio to evaluate the trustworthiness of cloud services. The model assesses cloud services from a multidimensional perspective using these quality of cloud service parameters. Practical results show the model can improve quality of cloud services and help customers select services based on their needs and financial constraints.
A survey on cost effective survivable network design in wireless access networkijcses
In today’s technology, the essential property for wireless communication network is to exhibit as a
dependable network. The dependability network incorporates the property like availability, reliability and
survivability. Although these factors are well taken care by protocol for wired network, still there exists
huge lack of efficacy for wireless network. Further, the wireless access network is more complicated with
difficulties like frequencies allocation, quality of services, user requests. Adding to it, the wireless access
network is severely vulnerable to link and node failures. Therefore, the survivability in wireless access
network is very important factor to be considered will performing wireless network designing. This paper
focuses on discussion of survivability in wireless access network. Capability of a wireless access network to
perform its dedicated accessibility services even in case of infrastructure failure is known as survivability.
Given available capacity, connectivity and reliability the survivable problem in hierarchical network is to
minimize the overall connection cost for multiple requests. The various failure scenario of wireless access
network as existing in literature is been explored. The existing survivability models for access network like
shared link, multi homing, overlay network, sonnet ring, and multimodal devices are discussed in detail
here. Further comparison between various existing survivability solutions is also tabulated.
QOS-APCVS: AN ENHANCED EPS-IMS PCC ARCHITECTURE PROPOSAL TO IMPROVE MOBILE SE...ecij
IP Multimedia Subsystem’s (IMS) presents the framework architecture which can provide multimedia services for Evolved Packet System (EPS). In busy network, the main failures are service blocking, handover outage and non satisfying QoS criteria. So we aim to improve dependability of dedicated bearer
establishment in EPS-IMS Network. In mobile access network, we consider service is available if it is admitted by base station and is reliable if it is still supported in handover position. In core network, we consider service as reliable if its QoS criteria are satisfied. So we propose a new Qos Provisioning solution. To provide new application or to support handover service in busy network, our approach preempts resources by utility factor instead of priority consideration in existing works. In addition to
bandwidth reservation our solution allows core network reservation to improve the delay of real time service and minimize the loss rate of non-real time services.
This document provides 6 IEEE project summaries in the domain of Java and cloud computing/data mining. The summaries are:
1. A decentralized access control scheme for secure cloud data storage that supports anonymous authentication.
2. A performance analysis framework for distributed file systems that qualitatively and quantitatively evaluates performance.
3. Approaches to guarantee trustworthy transactions on cloud servers by enforcing policy consistency constraints.
4. A scalable MapReduce approach for anonymizing large datasets to satisfy privacy requirements like k-anonymity.
5. A resource allocation scheme for a self-organizing cloud that achieves maximized utilization and optimal execution efficiency.
6. An attribute-based encryption framework for flexible
The document defines the key characteristics that distinguish QFabric networking technology from other data center networking technologies. It outlines seven defining characteristics of QFabric including scalability, any-to-any connectivity, low latency, no packet drops under congestion, linear cost and power scaling, support for virtual networks and services, and a distributed modular implementation. These characteristics work together to provide two main capabilities - treating data center resources as fungible pools and connecting resources at high speeds with low latency.
QFabric is a networking technology designed for large-scale data centers that provides several defining characteristics including:
1) Any-to-any connectivity allowing full bandwidth sharing between interfaces without restrictions or blocking.
2) Low latency of 2-10 microseconds between interfaces that scales slowly with size and load.
3) No packet drops during congestion as traffic is throttled to match input and output rates smoothly.
4) Linear scaling of cost and power consumption as the number of interfaces increases in contrast to traditional approaches.
Internet Path Selection on Video QoE Analysis and ImprovementsIJTET Journal
Abstract— We systematically study a large number of Internet paths between popular video destinations and clients to build an empirical understanding of the location, existence, and recurrence of failures. We investigate ways to lower a provider's costs for real-time IPTV services through an IPTV architecture and through intelligent destination-shifting of selected services, and examine how to recover from Quality of Experience (QoE) degradation. Using live television and video on demand as examples, we exploit the different deadlines associated with each service to deliver them effectively. We design and implement a prototype packet-forwarding module called source-initiated frame restoration, deploy it on nodes, and compare its performance to default Internet routing; we find that source-initiated frame restoration outperforms default IP path selection by providing higher on-screen perceptual quality. Failures are mapped to the resulting video quality by reconstructing video clips and conducting user surveys. We then recover from QoE degradation by choosing one-hop detour paths that preserve application-specific policies. A path-ranking methodology is used to find paths that carry high-quality videos at low cost and with a very low memory footprint; by ranking videos according to their quality, size, and cost, the top-ranking videos can be retrieved by the client.
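The ranking step at the end could look like the following weighted-score sketch; the weights, field names, and normalization are illustrative assumptions, not the paper's exact methodology:

```python
# Illustrative ranking of videos by quality, size, and cost: higher quality
# raises the score, while larger size and higher cost lower it. The client
# then retrieves the top-ranked entries. Weights are made-up parameters.

def rank_videos(videos, w_quality=0.6, w_size=0.2, w_cost=0.2):
    """videos: list of dicts with 'name', 'quality' (0-1), 'size_mb', 'cost'."""
    max_size = max(v["size_mb"] for v in videos) or 1
    max_cost = max(v["cost"] for v in videos) or 1

    def score(v):
        # Normalize size and cost so all three terms are comparable.
        return (w_quality * v["quality"]
                - w_size * v["size_mb"] / max_size
                - w_cost * v["cost"] / max_cost)

    return sorted(videos, key=score, reverse=True)
```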
Winds of change from vendor lock-in to meta cloud review 1NAWAZ KHAN
The document proposes a "meta cloud" that would abstract away technical incompatibilities between existing cloud offerings and mitigate vendor lock-in. It would help users find the right cloud services for their use case and support initial deployment and runtime migration between clouds. The meta cloud incorporates design-time and runtime components like a unified API, resource templates, migration recipes, and a knowledge base to store provider/service data. It was presented by four students and describes the existing system, proposed meta cloud architecture and its advantages, hardware/software requirements, modules, UML diagrams, and execution slides.
MMB Cloud-Tree: Verifiable Cloud Service SelectionIJAEMSJORNAL
In the existing cloud brokerage system, the client has no ability to verify the result of the cloud service selection, so a biased, compromised, or dishonest cloud broker can unfairly select a Cloud Service Provider (CSP) for its own advantage by cooperating with the selected CSP. To address this problem, we propose a mechanism to verify the broker's CSP selection result, in which the properties of every CSP are also verified. It uses a trusted third party to gather the clustering result from the cloud broker; the same third party acts as a base station collecting CSP properties in a multi-agent system. Software agents are installed and run on every CSP, monitoring it as the customer's representative inside the cloud. These agents report to the third party, which must be trusted by the CSPs, the customers, and the cloud broker, and which provides transparency by publishing reports to the authorized parties (CSPs and customers).
A scalable and reliable matching service for content basedsyeda yasmeen
1) The document proposes SREM, a scalable and reliable event matching service for content-based publish/subscribe systems in cloud computing environments.
2) SREM connects brokers through a distributed overlay called SkipCloud to ensure reliable connectivity and low latency routing.
3) It uses a hybrid multi-dimensional space partitioning technique called HPartition to map subscriptions to multiple subspaces, balancing workloads and providing high matching throughput across servers.
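The idea behind mapping subscriptions to subspaces can be sketched with a minimal grid partition; the attribute domain, grid granularity, and function names below are illustrative assumptions, not SREM's actual HPartition parameters:

```python
# Toy 2-D grid partition of an attribute space: a range subscription is
# registered in every grid cell it overlaps, so matching an incoming event
# only has to consult the one cell the event falls into.

CELLS_PER_DIM = 4          # 4 x 4 grid over two attributes (assumed)
DOMAIN = (0.0, 100.0)      # each attribute ranges over [0, 100) (assumed)

def cell_index(value):
    span = (DOMAIN[1] - DOMAIN[0]) / CELLS_PER_DIM
    return min(int((value - DOMAIN[0]) / span), CELLS_PER_DIM - 1)

def subscription_cells(lo_x, hi_x, lo_y, hi_y):
    """Return the set of (i, j) grid cells a 2-D range subscription overlaps."""
    return {(i, j)
            for i in range(cell_index(lo_x), cell_index(hi_x) + 1)
            for j in range(cell_index(lo_y), cell_index(hi_y) + 1)}

def event_cell(x, y):
    """An event falls in exactly one cell; only that cell's subscriptions are checked."""
    return (cell_index(x), cell_index(y))
```

Spreading the cells across servers is what balances the matching workload.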
The document introduces BreakingPoint Resiliency Scores, which provide standardized metrics for evaluating the performance, security, and stability of networks and data centers. The scores are calculated by subjecting devices to real-world traffic loads and security attacks. This identifies weaknesses and determines how many users a system can support without degradation. The scores provide a way to understand how changes will impact infrastructure and to optimize resources.
Quality of Service for Video Streaming using EDCA in MANETijsrd.com
A Mobile Ad-hoc Network (MANET) is a collection of wireless terminals that can dynamically form a temporary network; no fixed infrastructure is required to establish it. The network nodes are responsible for forwarding each other's packets and thus also act as routers. In such a network, resources are limited and the topology changes dynamically, so providing Quality of Service (QoS) is necessary, and QoS matters most for real-time applications such as video streaming. The IEEE 802.11e standard supports QoS through the EDCA technique, but this technique does not fully meet QoS requirements. In this project, a modified EDCA technique is therefore proposed to enhance QoS for video streaming; it is implemented in NS2 and compared with traditional EDCA.
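How EDCA differentiates traffic can be sketched as follows; the parameter values roughly follow 802.11e defaults but are illustrative here, and the paper's modified EDCA would change them:

```python
# Each access category (AC) gets its own AIFS and contention-window bounds.
# Higher-priority ACs use a shorter AIFS and a smaller window, so on average
# they back off for fewer slots before transmitting.

import random

# (AIFSN, CWmin, CWmax) per access category, roughly 802.11e-style defaults
EDCA_PARAMS = {
    "AC_VO": (2, 3, 7),       # voice: highest priority
    "AC_VI": (2, 7, 15),      # video: what a streaming flow would use
    "AC_BE": (3, 15, 1023),   # best effort
    "AC_BK": (7, 15, 1023),   # background: lowest priority
}

def backoff_slots(ac, retry=0):
    """Draw a random backoff (in slots) for one transmission attempt."""
    aifsn, cw_min, cw_max = EDCA_PARAMS[ac]
    cw = min((cw_min + 1) * (2 ** retry) - 1, cw_max)   # CW doubles on retries
    return aifsn + random.randint(0, cw)
```

Averaging `backoff_slots` over many draws shows the video category waiting far less than background traffic, which is the mechanism the proposed modification tunes.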
SERVICE LEVEL AGREEMENT BASED FAULT TOLERANT WORKLOAD SCHEDULING IN CLOUD COM...ijgca
Cloud computing is a concept of providing user- and application-oriented services in a virtual environment, where users can consume the various cloud services dynamically as per their requirements. Different users have
different requirements in terms of application reliability, performance, and fault tolerance, yet static and rigid
fault tolerance policies impose a uniform degree of fault tolerance as well as overhead. In this research
work we propose a method to implement dynamic fault tolerance that takes customer requirements into account. The cloud users are classified into subclasses as per their fault tolerance requirements, and their jobs are classified into compute-intensive and data-intensive categories. A varying
degree of fault tolerance, consisting of replication and input buffering, is then applied. Simulation-based
experiments show that the proposed dynamic method performs better than existing methods.
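A minimal sketch of such a class-based policy lookup follows; the class names, job categories, and degrees are illustrative assumptions, not the paper's configuration:

```python
# Dynamic fault tolerance: the mechanism and its degree are chosen per user
# class and job type instead of one static policy for every job.

POLICY = {
    # (user_class, job_type) -> (replication degree, use input buffer?)
    ("premium",  "compute"): (3, True),
    ("premium",  "data"):    (2, True),
    ("standard", "compute"): (2, False),
    ("standard", "data"):    (1, False),
}

def fault_tolerance_for(user_class, job_type):
    """Return the (replicas, buffering) applied to a submitted job."""
    return POLICY.get((user_class, job_type), (1, False))   # conservative default
```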
This document reviews the biodegradability of various chlorinated solvents and aliphatic compounds. It begins with an introduction on the sources, principles, and mechanisms of biodegradation of these compounds. It then provides detailed sections on the biodegradation of specific compound categories, including chloromethanes, chloroethanes, chloroethenes, chlorofluorocarbons, chloroacetic acids, and chloropropanoids. For each category, it discusses degradation in the environment and engineered systems, as well as the relevant microbiology and biochemistry. The conclusion summarizes the different physiological approaches to biodegradation and provides an overview of biodegradability for each compound category.
This document appears to be a collection of photos from Nicole Leigh Young's life with captions describing memories from different periods and events. It includes memories from childhood in the 1980s like time with family, birthday parties, and favorite toys and TV shows. It also includes more recent memories as an adult like graduating college, meeting her husband Drew, volunteering with charitable organizations, and spending time with friends and family through her 30s. The photos document Nicole's life journey and relationships over multiple decades.
Improving Security Features In MANET Authentication Through Scrutiny Of The C...Editor IJMTER
With changing times, researchers find MANET security a daunting task.
Authentication problems crop up frequently in the absence of a well-laid-out infrastructure,
and relying on TTPs or non-TTP approaches in MANETs is becoming more difficult and impractical.
Pre-assigned offline logins and more effective certificate issuance can be addressed
with a hybrid key management scheme built on the strength and use of 4G services.
However, that scheme did not take proper account of the servers' CRL status: if it is embedded,
the nodes need to check the servers' CRL status frequently to authenticate any node and place
external messages outside the MANET, which leads to overheads. To reduce them, we introduce an
online MANET authority responsible for issuing certificates, duly considering the CRL status of
the servers, their renewal, and key verification within the MANET, which sufficiently reduces the
external messages.
IRJET- An Adaptive Scheduling based VM with Random Key Authentication on Clou...IRJET Journal
This document summarizes a research paper on an adaptive scheduling-based virtual machine (VM) approach with random key authentication for cloud data access. The paper proposes allocating VMs to servers in a way that flexibly utilizes cloud resources while guaranteeing job deadlines. It employs time sliding and bandwidth scaling in resource allocation to better match resources to job requirements and cloud availability. Simulations showed the approach can accept more jobs than existing solutions while increasing provider revenue and lowering tenant costs. The paper also discusses generating random keys for user authentication and reviewing related work on scheduling methods and cloud resource provisioning.
Java cluster-based certificate revocation with vindication capability for mo...ecwayerode
The document proposes a Cluster-based Certificate Revocation with Vindication Capability (CCRVC) scheme for mobile ad hoc networks. The scheme aims to isolate attackers from participating in the network by revoking their certificates. It recovers warned nodes to participate in the revocation process and proposes a threshold-based mechanism to assess if warned nodes should be vindicated as legitimate or not. Extensive simulation results demonstrate the proposed certificate revocation scheme is effective and efficient at guaranteeing secure communications in mobile ad hoc networks.
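The threshold-based vindication test can be sketched as follows; the threshold value and the accusation model are assumptions for illustration, not CCRVC's actual parameters:

```python
# A warned node is restored as legitimate only if the number of distinct
# accusers against it stays below a threshold; otherwise its certificate
# remains revoked.

def vindicate(accusers_of_node, threshold=3):
    """accusers_of_node: collection of node ids that accused the warned node."""
    return len(set(accusers_of_node)) < threshold
```

Counting distinct accusers (the `set`) keeps a single malicious node from revoking a legitimate one by accusing it repeatedly.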
A Computational Analysis of ECC Based Novel Authentication Scheme in VANET IJECEIAES
A recent development in ad hoc networking is the vehicular ad hoc network (VANET), whose flagship application is the Intelligent Transportation System. Due to the open nature of VANETs, an attacker can launch various kinds of attacks; because VANET messages carry crucial information that can save passengers' lives by avoiding accidents, save people time on a trip, and exchange secret information, security is a must. To ensure the highest level of security, the network should be free of attackers, so all information passed among nodes must be reliable, i.e., originated by an authenticated node. Authentication is the first line of security in a VANET: it keeps non-registered vehicles out of the network. Previous research has produced cryptographic, trust-based, ID-based, and group-signature-based authentication schemes; authentication speed and privacy preservation are important parameters in VANET authentication. This paper presents a computational analysis of ECC-based authentication schemes. We compare plain ECC with our proposed AECC (Adaptive Elliptic Curve Cryptography) and EECC (Enhanced Elliptic Curve Cryptography); the analysis shows that the proposed schemes improve both the speed and the security of authentication. In AECC the key size is adaptive, i.e., keys of different sizes are generated during the key generation phase, with three specified size ranges: small, medium, and large. In EECC we add an extra parameter during the transmission of information from the vehicle to the RSU for key generation. The schemes are evaluated by comparing the time required for authentication and the key-breaking possibilities of the keys used.
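To make the underlying ECC operations concrete, here is a toy sketch of curve arithmetic over a tiny field; the curve y² = x³ + 7 over GF(17) and base point (1, 5) are purely illustrative, and real schemes use standardized curves over far larger fields:

```python
# Toy elliptic-curve arithmetic. Scalar multiplication (double-and-add) is
# the operation whose cost the compared schemes trade off via key size.

P_MOD, A, B = 17, 0, 7        # toy curve y^2 = x^3 + 7 over GF(17)
G = (1, 5)                    # illustrative base point on that curve

def add(p, q):
    """Add two curve points; None represents the point at infinity."""
    if p is None:
        return q
    if q is None:
        return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                                    # p + (-p) = infinity
    if p == q:                                         # point doubling
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD)
    else:                                              # distinct points
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD)
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def scalar_mult(k, point):
    """Double-and-add: the core of ECC key generation and signing."""
    result = None
    while k:
        if k & 1:
            result = add(result, point)
        point = add(point, point)
        k >>= 1
    return result
```

Larger scalars (keys) mean more doubling steps, which is the speed/security trade-off AECC's adaptive key sizes exploit.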
This document proposes a Tiered Authentication scheme called TAM for multicast traffic in ad-hoc networks. TAM exploits network clustering to reduce overhead and ensure scalability. Within a cluster, one-way hash chains authenticate message sources by appending an authentication code to messages. Between clusters, messages include multiple authentication codes based on different keys from the source to authenticate it. TAM aims to securely deliver multicast traffic while addressing challenges like resource constraints and packet loss in ad-hoc networks.
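The intra-cluster mechanism can be sketched with a plain one-way hash chain; this is a simplification, and TAM's actual message format, chain anchoring, and inter-cluster codes are specified in the paper:

```python
# One-way hash-chain authentication: the source commits to the last element
# of the chain (the anchor), then reveals earlier elements over time, and a
# receiver verifies a revealed element by hashing it forward to the anchor.

import hashlib

def sha(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def make_chain(seed: bytes, length: int):
    """Build h_0..h_n where h_i = H(h_{i-1}); h_n is published as the anchor."""
    chain = [seed]
    for _ in range(length):
        chain.append(sha(chain[-1]))
    return chain

def verify(revealed: bytes, anchor: bytes, max_steps: int) -> bool:
    """Valid if hashing the revealed element at most max_steps times hits the anchor."""
    h = revealed
    for _ in range(max_steps):
        if h == anchor:
            return True
        h = sha(h)
    return h == anchor
```

Because the hash is one-way, an attacker who sees revealed elements still cannot produce the next (earlier) one, which is what authenticates the source.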
Improving Network Security in MANETS using IEEACKijsrd.com
This document discusses improving network security in mobile ad hoc networks (MANETs) using an improved version of the EAACK intrusion detection system called IEEACK. IEEACK aims to address some of the weaknesses of EAACK related to link breakage, malicious sources, and partial packet dropping. The document describes the components of IEEACK, including ACK, S-ACK, MRA, digital signatures, and a new trust-based quality of service model. Simulation results show that IEEACK can prevent attacks from malicious nodes and improve security performance metrics like packet delivery ratio and detection of malicious nodes.
OPTIMIZED ROUTING AND DENIAL OF SERVICE FOR ROBUST TRANSMISSION IN WIRELESS N...IRJET Journal
This document proposes a system to optimize routing and prevent denial of service attacks in wireless networks. It aims to detect distributed denial of service (DDoS) attacks using a classifier system called CS_DDoS that classifies packets as malicious or normal. Malicious packets will be blocked and their IP addresses blacklisted. It also aims to use a hybrid optimization system (HOS) for efficient, quality routing to increase network lifetime and user communication. The system is designed to differentiate between genuine and malicious traffic, transfer data via alternative paths if attacks are detected, and balance network load for stable data transfer while improving packet delivery and throughput.
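The classify-and-blacklist step can be sketched as a toy rate rule; note that CS_DDoS as described uses a trained classifier, so the fixed threshold and window here are stand-in assumptions:

```python
# Sources exceeding a packet-rate threshold within a window are marked
# malicious: their packets are dropped and their IPs blacklisted, while
# normal traffic is forwarded.

from collections import defaultdict

class RateClassifier:
    def __init__(self, max_packets_per_window=100):
        self.limit = max_packets_per_window
        self.counts = defaultdict(int)
        self.blacklist = set()

    def observe(self, src_ip):
        """Return 'drop' for blacklisted/over-rate sources, else 'forward'."""
        if src_ip in self.blacklist:
            return "drop"
        self.counts[src_ip] += 1
        if self.counts[src_ip] > self.limit:
            self.blacklist.add(src_ip)
            return "drop"
        return "forward"

    def end_window(self):
        self.counts.clear()   # per-window counts reset; the blacklist persists
```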
Region Based Time Varying Addressing Scheme For Improved Mitigating Various N...theijes
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
The papers for publication in The International Journal of Engineering& Science are selected through rigorous peer reviews to ensure originality, timeliness, relevance, and readability.
Enabling Cloud Storage Auditing with Key Exposure ResistanceIRJET Journal
This document proposes a method for enabling cloud storage auditing with key exposure resistance. It discusses challenges with existing cloud storage auditing systems, where exposure of a client's secret auditing key would compromise the integrity of the audited data. The proposed system uses a binary tree structure and pre-order traversal to periodically update the client's secret keys in a forward-secure manner. This allows the client to audit the integrity of past cloud data, even if the current secret key is exposed. The system aims to efficiently achieve key exposure resilience while maintaining security and performance.
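The forward-security goal can be illustrated with a much simpler one-way key evolution; the paper's scheme uses a binary tree with pre-order traversal to keep updates efficient, so this hash-chain sketch only shows why exposing the current key reveals nothing about past keys:

```python
# Each period's key is derived one-way from the previous one. An attacker
# holding the period-i key can compute future keys but not earlier ones,
# so audit proofs made with past keys stay trustworthy.

import hashlib

def next_key(key: bytes) -> bytes:
    """Evolve the secret key at the end of each time period (one-way)."""
    return hashlib.sha256(b"key-update" + key).digest()

def key_for_period(initial_key: bytes, period: int) -> bytes:
    key = initial_key
    for _ in range(period):
        key = next_key(key)
    return key
```

The plain chain makes computing the period-t key cost t hash steps; the paper's tree structure is what reduces that update cost.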
IRJET- Secure Database Management and Privacy Preserving in Cloud ServerIRJET Journal
This document discusses privacy and security concerns regarding database management and data storage in cloud servers. It proposes a solution using cryptosom generation to securely communicate with multiple cloud servers. The key points are:
1) Privacy and security are major concerns for organizations storing data in the cloud as data is no longer under their direct control.
2) The proposed method uses cryptosom generation and identity-based encryption to hide a user's identity and assign encryption keys, improving security of communications with cloud servers.
3) A proxy re-encryption scheme is also described that allows third parties to modify an encrypted ciphertext for one user so that it can be decrypted by another user, without revealing the secret value.
Secured Authorized Data Using Hybrid Encryption in Cloud ComputingIJERA Editor
In today's world, securing a public network like the cloud has become a tough task; a common way to reduce cost while providing security is to use cryptographic techniques that delegate the bulk of the decryption task to the cloud servers, reducing the client's computing cost. As a result, attribute-based encryption with delegation has emerged. Still, caveats and open questions remain in the previous relevant works: the cloud servers could tamper with or replace the delegated ciphertext and respond with a forged computing result with malicious intent, and they may also cheat eligible users by telling them they are ineligible, for the purpose of cost saving. Furthermore, the access policies used during encryption may not be flexible enough. Since a policy for general circuits achieves the strongest form of access control, our work considers a construction realizing circuit ciphertext-policy attribute-based hybrid encryption with verifiable delegation. In such a system, combined with verifiable computation and an encrypt-then-MAC mechanism, data confidentiality, fine-grained access control, and the correctness of the delegated computing results are guaranteed at the same time. Our scheme achieves security against chosen-plaintext attacks under the k-multilinear Decisional Diffie-Hellman assumption, and an extensive simulation campaign confirms the feasibility and efficiency of the proposed solution. There are two complementary forms of attribute-based encryption: key-policy attribute-based encryption (KP-ABE) [8], [9], [10] and ciphertext-policy attribute-based encryption (CP-ABE).
In a KP-ABE system, the access policy is decided by the key distributor instead of the encryptor, which limits the practicability and usability of the system in practical applications. An access policy for general circuits can be regarded as the strongest form of policy expression, since circuits can express any program of fixed running time.
IJERA (International journal of Engineering Research and Applications) is International online, ... peer reviewed journal. For more detail or submit your article, please visit www.ijera.com
A Secure Cloud Transmission Protocol (SCTP) was proposed in preceding work to achieve strong authentication and a secure channel in the cloud computing paradigm, with its own techniques for attaining cloud security: a multilevel authentication technique with a multidimensional password generation system for strong authentication, a multilevel cryptography technique for a secure channel, and a usage-profile-based intruder detection and prevention system to resist intruder attacks. SCTP was designed, developed, and analyzed using protocol engineering phases, and the complete design of SCTP and its techniques is presented using Petri net production models. We present the designed SCTP Petri net models and their analysis, and discuss the SCTP design and its performance in achieving strong authentication, a secure channel, and intruder prevention. SCTP is designed for use in any cloud application: it can authorize, authenticate, secure the channel, and prevent intruders during cloud transactions, and it is designed to protect against the different attacks mentioned in the literature. This paper presents the SCTP performance analysis report, which compares it with existing techniques proposed to achieve authentication, authorization, security, and intruder prevention.
This document presents an improved secure cloud transmission protocol (SCTP) that was designed to achieve strong authentication, secure channels, and intruder detection in cloud computing. SCTP uses multi-level authentication with multidimensional password generation, multi-level cryptography, and usage profile-based intruder detection. SCTP was modeled using Petri net production models to analyze its design and performance. The analysis shows that SCTP outperforms existing techniques in authentication, authorization, security, and intruder prevention for cloud applications requiring high security. However, SCTP may introduce unnecessary complexity for simpler cloud applications.
IRJET- A Novel and Secure Approach to Control and Access Data in Cloud St...IRJET Journal
This document proposes a novel approach to securely control and access data stored in the cloud using Ciphertext-Policy Attribute-Based Encryption (CP-ABE). The approach aims to address abuse of access credentials by tracing malicious insiders and revoking their access. It presents two new CP-ABE frameworks that allow traceability of malicious cloud clients, identification of misbehaving authorities, and auditing without requiring extensive storage. The frameworks provide fine-grained access control and can revoke credentials of traced attackers.
Similar to CACMAN COMPARISON WITH MOCA USING PKI ON MANET
This presentation covers the basics of PCOS, its pathology and treatment, along with the Ayurvedic correlation of PCOS and the Ayurvedic line of treatment described in the classics.
This slide is special for master students (MIBS & MIFB) in UUM. Also useful for readers who are interested in the topic of contemporary Islamic banking.
LAND USE LAND COVER AND NDVI OF MIRZAPUR DISTRICT, UPRAHUL
This Dissertation explores the particular circumstances of Mirzapur, a region located in the
core of India. Mirzapur, with its varied terrains and abundant biodiversity, offers an optimal
environment for investigating the changes in vegetation cover dynamics. Our study utilizes
advanced technologies such as GIS (Geographic Information Systems) and Remote sensing to
analyze the transformations that have taken place over the course of a decade.
The complex relationship between human activities and the environment has been the focus
of extensive research and worry. As the global community grapples with swift urbanization,
population expansion, and economic progress, the effects on natural ecosystems are becoming
more evident. A crucial element of this impact is the alteration of vegetation cover, which plays a
significant role in maintaining the ecological equilibrium of our planet. Land serves as the foundation for all human activities and provides the necessary materials for
these activities. As the most crucial natural resource, its utilization by humans results in different
'Land uses,' which are determined by both human activities and the physical characteristics of the
land.
The utilization of land is impacted by human needs and environmental factors. In countries
like India, rapid population growth and the emphasis on extensive resource exploitation can lead
to significant land degradation, adversely affecting the region's land cover.
Therefore, human intervention has significantly influenced land use patterns over many
centuries, evolving its structure over time and space. In the present era, these changes have
accelerated due to factors such as agriculture and urbanization. Information regarding land use and
cover is essential for various planning and management tasks related to the Earth's surface,
providing crucial environmental data for scientific, resource management, policy purposes, and
diverse human activities.
Accurate understanding of land use and cover is imperative for the development planning
of any area. Consequently, a wide range of professionals, including earth system scientists, land
and water managers, and urban planners, are interested in obtaining data on land use and cover
changes, conversion trends, and other related patterns. The spatial dimensions of land use and
cover support policymakers and scientists in making well-informed decisions, as alterations in
these patterns indicate shifts in economic and social conditions. Monitoring such changes with the
help of Advanced technologies like Remote Sensing and Geographic Information Systems is
crucial for coordinated efforts across different administrative levels. Advanced technologies like
Remote Sensing and Geographic Information Systems
9
Changes in vegetation cover refer to variations in the distribution, composition, and overall
structure of plant communities across different temporal and spatial scales. These changes can
occur natural.
How to Manage Your Lost Opportunities in Odoo 17 CRMCeline George
Odoo 17 CRM allows us to track why we lose sales opportunities with "Lost Reasons." This helps analyze our sales process and identify areas for improvement. Here's how to configure lost reasons in Odoo 17 CRM
it describes the bony anatomy including the femoral head , acetabulum, labrum . also discusses the capsule , ligaments . muscle that act on the hip joint and the range of motion are outlined. factors affecting hip joint stability and weight transmission through the joint are summarized.
A review of the growth of the Israel Genealogy Research Association Database Collection for the last 12 months. Our collection is now passed the 3 million mark and still growing. See which archives have contributed the most. See the different types of records we have, and which years have had records added. You can also see what we have for the future.
JOURNAL OF INFORMATION, KNOWLEDGE AND RESEARCH IN COMPUTER ENGINEERING
ISSN: 0975-6760 | NOV 12 TO OCT 13 | VOLUME 02, ISSUE 02 | Page 380

CACMAN COMPARISON WITH MOCA USING PKI ON MANET
1. Mr. Nirav N. Kubavat, M.E. [Computer Engg] Student, Department of Computer Engineering, V.V.P. Engineering College, Rajkot, Gujarat
2. Prof. S. M. Maniar, Asst. Professor, Department of Computer Engineering, V.V.P. Engineering College, Rajkot, Gujarat
neeravkubavat@gmail.com, sweetymaniar@yahoo.com
ABSTRACT: MANET applications and services pose many interesting challenges due to their unique features. In particular, security receives a great deal of attention in every aspect of MANETs because of their inherent vulnerability to attack: threats exist in every layer of the MANET stack, and different solutions have been adopted for each security problem. Availability is another problem for MANETs, and adding more resources will not necessarily make the system more available. The Certificate Authority (CA) is one of the most important entities in a Public Key Infrastructure (PKI) and needs to be designed carefully when adapted to MANETs. The main goal of our work is to provide a framework that addresses the performance and security issues of CAs in MANETs. Additionally, we would like to increase the availability of CA services while lowering the packet overhead of the network, without increasing the network's vulnerability. In this paper, we present a framework suitable for exchanging PKI certificates in MANETs. By caching and exchanging certificates between clients collaboratively, we show that our system can meet the performance challenges of providing CA service without sacrificing system security.
Keywords— PKI, Key Management, Security, MANET, Ad Hoc Networks, Threshold Cryptography.
I: INTRODUCTION
MANETs raise many interesting challenges for applications and services due to their unique characteristics. The lack of fixed infrastructure, combined with tight constraints such as power consumption, transmission range, and possibly limited computational capability, is one example of such challenges. Security is as crucial as these other challenges and has received much attention from researchers in every aspect of MANETs, due to their inherent vulnerability to a wider range of security attacks compared to wired networks.
PKI is an important service that provides a security framework to the system using certificates and public-key encryption/decryption techniques. Typically, a PKI system consists of certificates, CAs, a revocation mechanism, and a mechanism to evaluate a chain of certificates to the target [5]. Certificates are used for encrypting or signing in many vital applications, such as authentication, exchange of routing information, encrypting or signing email, and much more. A CA is usually used to organize, store, and issue those certificates. Adopting PKI in MANETs is not an easy task, since PKI was mostly designed for centralized, wired, and well-connected networks; the introduction of MANETs makes the task of providing a reliable and secure service much more difficult.
Recently, researchers have identified these
constraints and tried to provide some solutions to
adapt CA for MANETs (e.g. [1],[6], and [7]).
These solutions rely on secret sharing mechanisms
[8][9][10] to increase security and availability, since
installing the CA service in just one node will
make it vulnerable and exact replication of the CA
will make the situation even worse[6]. Despite the
fact that secret sharing seems to be a natural fit for
MANET, it cannot be deployed without
consequences that will be discussed shortly.
The aim of this work is to minimize the burden of adapting CA services to MANETs by minimizing the packet overhead while maintaining high availability of the service. This goal is achieved by allowing clients to share some responsibilities with the CA servers by cooperatively caching a portion of the certificates generated by the CA servers. The characteristics of certificates make caching a reasonable solution to the availability problem. Our caching-based framework addresses the security and performance challenges of providing CA services in MANETs. In addition, it suggests techniques with minimal overhead that help enhance our main goal, availability, without compromising the security of the network. We will show the feasibility of our framework and compare it to related work that addresses the same problem.
II: RELATED WORK
Mobile CA (MOCA) [6] was introduced as the MANET counterpart of the Cornell Online Certificate Authority (COCA) [12] framework, taking MANETs' unique properties into consideration, since the latter was originally designed for wired networks and did not consider the connectivity of clients.
MOCA is a mobile node within an ad-hoc network
selected to provide distributed CA functionality.
MOCA nodes apply threshold cryptography to
share the responsibility and increase availability of
the ad-hoc network. When a client wants to obtain a
certificate, it sends Certificate Request (CREQ)
packets to at least t MOCA servers and waits for a
reply. Any MOCA that receives a CREQ will reply
by sending a Certification Reply (CREP) packet
containing its partial signature. Once the client
receives t valid CREPs, it can reconstruct the
whole certificate. CREQ and CREP have been
embedded in Route Request (RREQ) and Route
Reply (RREP) messages that are found in on-
demand ad-hoc routing protocols like AODV [13]
and DSR [14] to reduce the amount of overhead
packets. MOCA increases availability by letting
clients send β unicast CREQ, where β = t + α and α
is determined in each client by inspecting the
status of the network. Additionally, to avoid
flooding the network, MOCA clients inspect the
route table to see if any cached routes to β
MOCAs are available, making flooding a last resort
when the number of cached routes is less than β. On the other hand, the goal of Kong et al. [15] is to provide pervasive CA services by making all n nodes in the network share the CAs' functionality; however, we will not compare against this work, for the same reasons mentioned in [6].
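The t-of-n reconstruction underlying MOCA's partial signatures is Shamir secret sharing [10]. The following is a hedged toy sketch over a prime field, not MOCA's actual partial-signature scheme: the prime, share format, and function names are our assumptions. Any t of the n dealt shares recover the secret by Lagrange interpolation at zero.

```python
import random

P = 2**61 - 1  # a Mersenne prime, large enough for this toy example

def split(secret, t, n, p=P):
    """Deal n shares of `secret`; any t of them suffice to reconstruct it."""
    coeffs = [secret] + [random.randrange(p) for _ in range(t - 1)]
    poly = lambda x: sum(c * pow(x, k, p) for k, c in enumerate(coeffs)) % p
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares, p=P):
    """Recover f(0) by Lagrange interpolation from >= t points (x, f(x))."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % p       # numerator: product of (0 - xj)
                den = den * (xi - xj) % p   # denominator: product of (xi - xj)
        secret = (secret + yi * num * pow(den, -1, p)) % p
    return secret
```

In MOCA each server would hold one share of the CA's signing key and contribute a partial signature per CREP; the client's reconstruction step plays the role of `reconstruct` above.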
Metric                    | Flooding | Unicast
--------------------------|----------|--------
Packet Overhead           | High     | Low
Response Time             | Long     | Quick
Success Ratio             | High     | Medium
No. of CREPs Received     | Greater  | Medium
High CA Threshold Value   | Best     | Worst
Low CA Threshold Value    | Average  | Best
Medium CA Threshold Value | Average  | Average

Table 1: Comparison of Flooding and Unicast for CA
III: CACMAN
We showed in [7] two concerns about MOCA which we believe may reduce the usability of the framework. The first concern is that the number of MOCA nodes is relatively high, about 10 to 20% of the total number of participants; we believe that making n that large does not come without consequences (e.g. the overhead of CA synchronization and key refreshing). The second concern is that the ratio of control packets generated by MOCA, the sum of CREQs and CREPs, is high compared to flooding. In the best case, MOCA saved about 30% of packet overhead when β=5, and only about 5% when β=25 (with β=t and n=30) [6]. The main idea of CACMAN is to
make clients play a more active role in CA services
by giving them some responsibilities, namely,
caching valid certificates for their usage and giving
them to other clients when necessary. This will
reduce the burden on CAs and reduce the need of
adding more replicas to increase availability, since
doing that, with a fixed threshold, will make the system more exposed and its consistency difficult to maintain. In addition, CACMAN increases the availability and efficiency of the system without any additional CA servers by caching combined and partially signed certificates in each client's local memory. CACMAN still needs CA servers to generate new certificates and revoke them; however, when CACMAN is used, their number no longer plays a significant role in system availability.
The NS-2 simulator gives us more insight into the problem from the performance perspective. Here, we show the basic revised CACMAN model, which differs slightly from the one introduced in [7]. When a client wants to obtain another client's (or clients') certificates, to establish secure communication or to encrypt some messages, it performs the following steps:
1. It checks its own cache for a complete certificate, or identifies the missing parts if partially signed certificates are found.
2. If there is no sufficient number of partially signed certificates, it makes a local broadcast. However, if the source happens to have a short route to the subject in question, it should favor sending the CREQ to that subject instead of making a local broadcast.
3. Every client that receives a CREQ message inspects its cache for a possible hit. It sends a CREP back to the sender if a full or partial certificate for the subject is found. We have also tried another variation of this step, named 'CACMAN-X replies': if no full certificate is found, then X randomly selected shares are sent, the goal being to increase the chance of delivering more of the desired partials. In this paper, we tried X = 2, 4, 8, and All.
4. The request initiator waits for a specified time; if it receives sufficient responses, it reconstructs the certificate. Notice that the client could receive a complete certificate, which eliminates the reconstruction step but not the verification. If the request initiator times out because insufficient partials have been received, or the reconstruction has failed, then the client may try to contact the CA servers by flooding the network, or by sending unicast messages if it has sufficient routes to them. Otherwise, it reports a failure to the client or retries later.
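The exchange in steps 1-3 can be modeled in a small self-contained sketch. This is our own illustration, not the authors' implementation: shares are small integers standing in for partially signed certificates, caches are dicts, and all names are assumptions.

```python
import random

def cacman_request(requester_cache, neighbour_caches, subject, t, x_replies=None):
    """Return the set of distinct shares the requester holds after one round.

    Caches map subject -> set of share ids. If `x_replies` is set, each
    neighbour returns at most X randomly chosen shares (the 'CACMAN-X
    replies' variation); None means it sends every hit it has.
    """
    have = set(requester_cache.get(subject, ()))   # step 1: local cache lookup
    if len(have) >= t:
        return have                                # cache alone suffices
    for cache in neighbour_caches:                 # step 2: one-hop local broadcast
        hits = list(cache.get(subject, ()))        # step 3: neighbour cache hit
        if x_replies is not None and len(hits) > x_replies:
            hits = random.sample(hits, x_replies)  # CACMAN-X: send X random shares
        have.update(hits)                          # duplicates are counted once
    return have
```

In step 4 the requester would reconstruct the certificate once `len(have) >= t`, or fall back to contacting the CA servers on timeout.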
Metric                   | CACMAN   | MOCA
-------------------------|----------|----------
Packet Overhead          | Low      | High
Response Time            | Quick    | Long
Success Ratio            | High     | Medium
No. of CREPs Received    | High     | Medium
Flooding Ratio           | Very Low | Very High
Certificate Availability | Easy     | Difficult
Security                 | High     | Low
Caching Certificates     | Yes      | No

Table 2: Comparison of CACMAN and MOCA
IV: Simulation and Discussion
In our simulations, we have used the most
recent build of NS-2 to simulate the following
hypothetical scenario similar to the one used in
[6]: 150 mobile nodes scattered in a 1000 m2 field,
30 of which are CA servers2 to the remaining 120
clients’ nodes. All nodes are moving using
the CMU random waypoint model[16]. One
hundred nodes will make 10 CREQs during the
600 second simulation time. We have
generated nodes mobility scenarios for
maximum speeds of 0, 1, 5, 10, and 20 m/sec with
pause times of 0 and 10 seconds. Each
configuration is replicated at least three times. The
reason for choosing this configuration is to
compare our work with the MOCA framework. In
addition, we have used four cache sizes- 75, 150,
225, and 300 share slots - and four
thresholds for key construction - 5, 15, 20, and 25.
For the purpose of comparing our work with MOCA,
their figures are shown in Figure 1 and Figure 4(a),
(b).
Total Number of Mobile Nodes     | 150
Number of MOCAs                  | 30
Area of Network                  | 1000 m x 1000 m
Total Simulation Time            | 600 seconds
Number of Certification Requests | 10 requests each from 100 non-MOCAs
Node Pause Time                  | 0, 10 seconds
Node Max. Speed                  | 0, 1, 5, 10, 20 m/s
Cache Sizes                      | 75, 150, 225, 300 share slots

Table 3: Simulation Parameters
Figure 1(a) shows the number of received CREPs for the pure flooding certification protocol. It is effective; however, it injects a large number of control packets into the network. Figure 1(b) shows the effect of introducing the MOCA protocol. The peaks around the different betas are caused by MOCA when it uses cached route information to reach MOCA servers. As we can see, the occurrences are still shifted to the right, which indicates a high rate of success, but many redundant replies are also received.
Figure 1. (a) Flooding-based certification protocol; (b) MOCA certification protocol.
On the other hand, Figure 2 and Figure 3 show the
basic CACMAN protocol. Each figure shows the
effect of mobility and cache sizes when we fix the
threshold. The legends in these figures are
interpreted as 'number of nodes-pause time-max
speed (cache capacity) protocol'. In Figure 2, when
T=5, very few requests did not get sufficient replies,
and the majority received an overwhelming number
of replies. When we increased the threshold to
15, as in Figure 3, the number of requests that
did not get sufficient replies increased and the cache
size played a more important role.
Notice that when the cache capacity increases to 150 and 225, as in Figure 2 (b and c) respectively, the situation is much better, as the number of unsuccessful requests decreases. Due to lack of space, we consider only the case where T=15, with a max speed of 10 m/s and 0 pause time, as a typical case.
Figure 2. (a-c) Mobility effect with different cache sizes; (d) cache capacity effect (T=5).
Figure 3. (a-c) Mobility effect with different cache sizes; (d) cache capacity effect (T=15).
From the figures above, we can observe the following points about CACMAN:
1. The effect of the threshold is noticeable: the higher it is, the lower the likelihood of getting all partials on the first attempt.
2. The peaks around the thresholds are due to the following reasons:
   a. The requester received, or already had, a complete certificate (not a partial), so it is counted as if it had received T shares.
   b. The requester had, before sending the CREQ, some partials in its cache but was missing a few. When it sends a CREQ, many of the replies duplicate shares that are already cached, but the missing parts eventually arrive, and this shows up as a peak around the threshold, since we do not count duplicate partials.
3. The mobility effect is not that significant, since CACMAN uses local broadcasts; mobility has a significant effect only when routing information is involved in the protocol.
4. Cache size obviously has an impact, as shown in panel (d) of Figure 2 and Figure 3. However, going from a cache size of 75 to 150 is more significant and effective than going from 150 to 225, and this is consistent across the other thresholds and max speeds, which suggests 150 as the best tradeoff between cache size and hit ratio.
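The counting rules in points 2a and 2b above can be written down compactly. This sketch is our reading of how a request is scored in Figures 2-3; the field names are assumptions, not the authors' code.

```python
def effective_shares(request, t):
    """Score one request: a complete certificate counts as t shares (point 2a);
    otherwise cached and received partials are merged, with duplicate
    partials counted only once (point 2b)."""
    if request["got_full_certificate"]:
        return t
    return len(set(request["cached"]) | set(request["received"]))
```

Deduplicating before counting is what concentrates the histogram mass at the threshold: a requester that cached most shares beforehand still registers exactly T once the last missing partials arrive.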
Figure 4. Success Ratio with (a) Flooding (b) MOCA
Closest-Unicast (c) CACMAN basic.
Figure 4 shows the success ratio for various scenarios. We used the same technique as in [6] to plot the CACMAN success ratio. Observing this figure, flooding is indeed the most effective way of obtaining a high success ratio, but at the price of packet overhead. CACMAN and MOCA perform similarly to each other for thresholds 5 and 15. The cache size has a noticeable effect when T=15, and the X-replies optimization tightens the gap between cache sizes. Observing the time scale, CACMAN stabilizes much faster than flooding and MOCA, due to the fact that some partials are already available in the requester's local cache.
Besides, CACMAN requests never propagate more
than one hop.
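The success ratio plotted in Figure 4 can be expressed as a simple aggregate. The paper does not give a formal definition, so this is our hedged reading: a request succeeds if it obtains a full certificate or at least t distinct shares before timing out, and the field names are assumptions.

```python
def success_ratio(requests, t):
    """Fraction of requests that obtained a full certificate, or at least
    t distinct shares, before their timeout expired."""
    ok = sum(1 for r in requests
             if r["got_full"] or len(r["distinct_shares"]) >= t)
    return ok / len(requests)
```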
V: Conclusions
In this paper, we have presented CACMAN, a
framework for enhancing CA service in MANETs,
and shown how cooperative certificate caching
would play a pivotal role in decreasing the
overhead of offering certification service, relieving
CA servers, and still maintaining high availability.
In addition, we showed one effective stateless
way of accessing certificates using local
broadcasting.
We are interested in finding some
applications other than PKI that could benefit
from the introduction of client caching in a one-to-
many-to-one communication paradigm. Currently,
we are fine-tuning enhancements suggested in
section 5 in our simulation environment in order to
see their effect on the performance of
CACMAN. A future direction is to investigate
other types of networks, such as hybrid and
wired Internet peer-to-peer networks, and see
how effective it is to deploy CACMAN in these
environments.
References
[1] L. Zhou and Z. J. Haas, "Securing Ad-Hoc Networks", IEEE Network Magazine, Nov. 1999.
[2] L. Bononi et al., "A Differentiated Distributed Coordination Function MAC Protocol for Cluster-Based Wireless Ad-Hoc Networks", Proc. 1st ACM Int'l Workshop on Performance Evaluation of Wireless Ad-Hoc, Sensor, and Ubiquitous Networks, Venezia, Italy, Oct. 2004, pp. 77-86.
[3] M. C. Domingo and D. Remondo, "An Interaction Model between Ad-Hoc Networks and Fixed IP Networks for QoS Support", Proc. 7th ACM Int'l Symp. on Modeling, Analysis and Simulation of Wireless and Mobile Systems, Venice, Italy, Oct. 2004, pp. 188-194.
[4] W. H. O. Lau, M. Kumar, and S. Venkatesh, "Mobile Ad Hoc Networks: A Cooperative Cache Architecture in Support of Caching Multimedia Objects in MANETs", Proc. 5th ACM Int'l Workshop on Wireless Mobile Multimedia, Sept. 2002.
[5] C. Kaufman et al., Network Security: Private Communication in a Public World, Prentice Hall, 2nd ed., 2002.
[6] S. Yi and R. Kravets, "MOCA: Mobile Certificate Authority for Wireless Ad-Hoc Networks", 2nd Annual PKI Research Workshop (PKI03), Apr. 2003.
[7] L. Al-Sulaiman and H. Abdel-Wahab, "Cooperative Caching Techniques for Increasing the Availability of MANET Certificate Authority Services", 3rd ACS/IEEE Int'l Conf. on Computer Systems and Applications (AICCSA-05), Cairo, Egypt, Jan. 2005.
[8] Y. Desmedt, "Society and Group Oriented Cryptography: A New Concept", in C. Pomerance, ed., Advances in Cryptology, Crypto '87 Proceedings, LNCS no. 293, Santa Barbara, CA, Springer-Verlag, 1988, pp. 120-127.
[9] S. Jarecki, Proactive Secret Sharing and Public Key Cryptosystems, Master's Thesis, MIT, 1995.
[10] A. Shamir, "How to Share a Secret", Communications of the ACM, vol. 22, no. 11, Nov. 1979, pp. 612-613.
[11] S. McCanne and S. Floyd, The LBNL Network Simulator (NS-2); http://www.isi.edu/nsnam/ns/.
[12] L. Zhou, F. Schneider, and R. van Renesse, "COCA: A Secure Distributed On-line Certification Authority", ACM Trans. on Computer Systems, vol. 20, no. 4, Nov. 2002, pp. 329-368.
[13] C. E. Perkins and E. M. Royer, "Ad-Hoc On-Demand Distance Vector Routing", 2nd IEEE Workshop on Mobile Computing Systems and Applications, New Orleans, LA, Feb. 1999, pp. 90-100.
[14] J. Broch and D. B. Johnson, "The Dynamic Source Routing Protocol for Mobile Ad-Hoc Networks", IETF Internet Draft, Feb. 2003.
[15] J. Kong et al., "Providing Robust and Ubiquitous Security Support for Mobile Ad-Hoc Networks", Proc. ICNP '01, 2001, pp. 251-260.
[16] J. Broch et al., "A Performance Comparison of Multi-Hop Wireless Ad-Hoc Network Routing Protocols", Proc. 4th Annual ACM/IEEE Int'l Conf. on Mobile Computing and Networking, Dallas, TX, Oct. 1998, pp. 85-97.
[17] E. Pagani and G. P. Rossi, "A Framework for the Admission Control of QoS Multicast Traffic in Mobile Ad-Hoc Networks", Proc. 4th ACM Int'l Workshop on Wireless Mobile Multimedia, July 2001, pp. 2-11.