Several solutions exist for file storage, sharing, and synchronization. Many of them involve a
central server, or a collection of servers, that either store the files, or act as a gateway for them
to be shared. Some systems take a decentralized approach, wherein interconnected users form a
peer-to-peer (P2P) network, and partake in the sharing process: they share the files they
possess with others, and can obtain the files owned by other peers.
In this paper, we survey various file synchronization technologies, both cloud-based and P2P-based, and discuss their strengths and weaknesses.
Available techniques in Hadoop small file issue (IJECEIAES)
Hadoop has been a mainstream solution for big data storage and processing since its release in late 2006. Hadoop processes data in a master-slave manner, splitting a large job into many smaller tasks that are processed separately; this approach was adopted instead of pushing one large file through a costly supercomputer to extract useful information. Hadoop performs very well with large files, but big data stored as many small files can cause performance problems: slow processing, delayed data access, high latency, and even a complete cluster shutdown. In this paper we highlight one of Hadoop's limitations that affects data processing performance, known as the "big data in small files" problem, which occurs when a massive number of small files is pushed into a Hadoop cluster and can drive the cluster to shut down entirely. The paper also reviews native and proposed solutions to this problem, how they reduce its negative effects on a Hadoop cluster, and how they improve storage and access performance.
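A rough back-of-the-envelope sketch of why many small files strain the cluster: HDFS keeps one in-memory object on the NameNode per file and per block. The ~150-byte-per-object figure below is a commonly cited approximation, not an exact constant.

```python
# Each file and each block is held as an in-memory object on the NameNode;
# a commonly cited approximation is ~150 bytes per object.
OBJ_BYTES = 150
BLOCK_SIZE = 128 * 1024 * 1024  # default HDFS block size: 128 MB

def namenode_bytes(num_files, file_size):
    """Approximate NameNode heap needed to track `num_files` files of
    `file_size` bytes each: one object per file plus one per block."""
    blocks_per_file = max(1, -(-file_size // BLOCK_SIZE))  # ceiling division
    return num_files * OBJ_BYTES * (1 + blocks_per_file)

one_gib = 1024 ** 3
# 1 TiB stored as a single large file vs. a million 1 MiB files:
big = namenode_bytes(1, 1024 * one_gib)        # ~1.2 MB of heap
small = namenode_bytes(1024 * 1024, 1024 * 1024)  # ~300 MB of heap
print(big, small)
```

The same volume of data costs orders of magnitude more NameNode memory when laid out as small files, which is exactly the pressure the abstract describes.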
Load Rebalancing with Security for Hadoop File System in Cloud (IJERD Editor)
[1] A file system is used for the organization, storage, retrieval, naming, sharing, and protection of files. A distributed file system offers certain degrees of transparency to the user and the system, such as access transparency, [2] location transparency, failure transparency, heterogeneity, and replication transparency. [3] NFS (Network File System), RFS (Remote File Sharing), and the Andrew File System (AFS) are examples of distributed file systems. Distributed file systems are generally used for cloud computing applications based on [4] the MapReduce programming model. A MapReduce program consists of a Map() procedure that performs filtering and a Reduce() procedure that performs a summary operation. However, in a cloud computing environment failures sometimes occur, and nodes may be upgraded, replaced, or added to the system; a load imbalance problem therefore arises. To solve this problem, a load rebalancing algorithm is implemented in this paper so that the central node is not overloaded. The implementation is done on the Hadoop distributed file system. Since Apache Hadoop is used, security issues arise; to address them and increase security, the [20] Kerberos authentication protocol is implemented to handle multiple nodes. The paper presents a real-time implementation experiment on a cluster, with results.
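The rebalancing idea in the abstract can be sketched as a simple greedy scheme (a toy model, not the paper's actual algorithm): repeatedly move one unit of load from the most loaded node to the least loaded node until the spread falls within a tolerance.

```python
def rebalance(loads, tolerance=1):
    """Greedy load-rebalancing sketch. `loads` maps node name to load units;
    returns the balanced loads and the list of (source, target) moves."""
    loads = dict(loads)
    moves = []
    while True:
        hi = max(loads, key=loads.get)
        lo = min(loads, key=loads.get)
        if loads[hi] - loads[lo] <= tolerance:
            return loads, moves
        loads[hi] -= 1   # move one unit of load...
        loads[lo] += 1   # ...to the least loaded node
        moves.append((hi, lo))

balanced, moves = rebalance({"n1": 10, "n2": 2, "n3": 3})
print(balanced)  # every node ends within one unit of the others
```

Real HDFS rebalancers move whole blocks subject to replica placement constraints, but the invariant is the same: total load is preserved while the maximum spread shrinks.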
ANALYSIS OF ATTACK TECHNIQUES ON CLOUD BASED DATA DEDUPLICATION TECHNIQUES (neirew J)
ABSTRACT
Data in the cloud is increasing rapidly, and this huge amount of data is stored in data centers around the world. Data deduplication enables lossless compression by removing duplicate data, so these data centers can use their storage efficiently by eliminating redundant copies. Attacks on cloud computing infrastructure are not new, but attacks that exploit the deduplication feature are relatively recent and have gained urgency. Such attacks can happen in several ways and can leak sensitive information. Although deduplication enables efficient storage and bandwidth utilization, the feature also has drawbacks. In this paper, data deduplication is closely examined, and its behavior under various parameters is explained and analyzed.
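One classic attack this abstract alludes to is the cross-user existence side channel in client-side deduplication: if the service reveals that an upload was skipped because the content already exists, a probing client learns that some other user holds that exact file. A toy model (hypothetical API, not any specific provider's):

```python
import hashlib

class DedupStore:
    """Toy content-addressed store with cross-user deduplication."""
    def __init__(self):
        self.blobs = {}

    def upload(self, data: bytes) -> bool:
        """Store `data`; return True if it was already present.
        Leaking this boolean to clients is the side channel."""
        digest = hashlib.sha256(data).hexdigest()
        existed = digest in self.blobs
        self.blobs[digest] = data
        return existed

store = DedupStore()
store.upload(b"salary: 90000")         # victim uploads a low-entropy secret
# Attacker probes candidate contents and observes the dedup signal:
print(store.upload(b"salary: 80000"))  # False: nobody has this file
print(store.upload(b"salary: 90000"))  # True: confirms the victim's content
```

Deployed mitigations include server-side-only deduplication and randomized dedup thresholds, both of which hide the existence signal at some bandwidth cost.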
A self destruction system for dynamic group data sharing in cloudeSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Integrity Privacy to Public Auditing for Shared Data in Cloud Computing (IJERA Editor)
In cloud computing, many mechanisms have been proposed that allow not only the data owner but also a public verifier to efficiently check data integrity without downloading the entire data from the cloud, which is referred to as public auditing. In these mechanisms, data is divided into many small blocks, each independently signed by the owner, and a random combination of blocks, rather than the whole data, is retrieved during integrity checking. However, public auditing for such shared data, while preserving identity privacy, remains an open challenge. Here we consider only how to audit the integrity of shared data in the cloud with static groups: the group is pre-defined before the shared data is created in the cloud, and group membership does not change during data sharing. The original user decides who may share her data before outsourcing it to the cloud. Another interesting problem is how to audit the integrity of shared data with dynamic groups, where a new user can be added to the group and an existing member can be revoked during data sharing.
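The block-sampling idea can be sketched with a hash-based toy. Real public-auditing schemes use homomorphic authenticators so the verifier never touches the blocks themselves; this simplified version just spot-checks random blocks against digests the verifier kept at signing time.

```python
import hashlib, random

def sign_blocks(blocks):
    """Owner: per-block digest (a stand-in for a real per-block signature)."""
    return [hashlib.sha256(b).hexdigest() for b in blocks]

def audit(storage, tags, sample_size, rng=random):
    """Verifier: challenge a random subset of block indices and check
    each returned block against its tag."""
    indices = rng.sample(range(len(tags)), sample_size)
    return all(
        hashlib.sha256(storage[i]).hexdigest() == tags[i] for i in indices
    )

blocks = [f"block-{i}".encode() for i in range(100)]
tags = sign_blocks(blocks)      # kept by the verifier
cloud = list(blocks)            # data held by the cloud
print(audit(cloud, tags, 10))   # True: sampled blocks are intact
cloud[42] = b"corrupted"
print(audit(cloud, tags, 100))  # False: a full audit catches the corruption
```

Sampling trades certainty for bandwidth: a small random challenge detects large-scale corruption with high probability, while checking every tag is the exhaustive fallback.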
Cloud Data De-Duplication in Multiuser Environment: DeposM2 (ijtsrd)
Nowadays, cloud computing produces a huge amount of sensitive data, such as personal information, financial data, electronic health records, and social media data. This causes duplication of data, which hurts the storage capacity and performance of cloud systems. Data deduplication has been widely used to eliminate redundant storage overhead in cloud storage systems and improve the efficiency of IT resources. However, traditional techniques face a great challenge in big-data deduplication: striking a sensible trade-off between the conflicting goals of scalable deduplication throughput and a high duplicate elimination ratio. Deduplication reduces the space and bandwidth requirements of data storage services and is most effective when applied across multiple users, a common practice in cloud storage offerings. I study the privacy implications of cross-user deduplication. An interesting and challenging problem is thus how to deduplicate multimedia data in a multi-user environment with an efficient system. In this paper, I introduce a new primitive called DeposM2, which gives a partial positive answer to this challenge. I propose two phases, deduplication and proof of storage: the first allows deduplication of the data, and the latter provides proof of storage, i.e., grants permission to the respective user who owns the file.
Mr. Kaustubh Borate | Prof. Bharti Dhote, "Cloud Data De-Duplication in Multiuser Environment: DeposM2", International Journal of Trend in Scientific Research and Development (IJTSRD), ISSN 2456-6470, Volume 3, Issue 5, August 2019. URL: https://www.ijtsrd.com/papers/ijtsrd25270.pdf Paper URL: https://www.ijtsrd.com/engineering/computer-engineering/25270/cloud-data-de-duplication-in-multiuser-environment-deposm2/mr-kaustubh-borate
BFC: High-Performance Distributed Big-File Cloud Storage Based On Key-Value S... (dbpublications)
Nowadays, cloud-based storage services are growing rapidly and becoming an emerging trend in the data storage field. Designing an efficient storage engine for cloud-based systems raises many problems, with requirements such as big-file processing, lightweight metadata, low latency, parallel I/O, deduplication, distribution, and high scalability. Key-value stores have played an important role and shown many advantages in solving these problems. This paper presents Big File Cloud (BFC), with algorithms and an architecture that handle most of the problems of a big-file cloud storage system built on a key-value store. It proposes a low-complexity, fixed-size metadata design that supports fast, highly concurrent, distributed file I/O, several algorithms for resumable upload and download, and a simple data deduplication method for static data. The research applies the advantages of ZDB, an in-house key-value store optimized with auto-increment integer keys, to solve big-file storage problems efficiently. The results can be used to build scalable distributed cloud data storage that supports big files up to several terabytes in size.
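The fixed-size-metadata idea can be sketched as follows (a minimal model, not ZDB's actual design): if chunk IDs are consecutive auto-increment integers, a file's metadata never grows with file size, since only the first chunk ID and the chunk count are needed.

```python
CHUNK_SIZE = 4  # tiny for illustration; real systems use megabyte-scale chunks

class ChunkStore:
    """Toy key-value chunk store with auto-increment integer keys."""
    def __init__(self):
        self.kv = {}
        self.next_id = 0

    def put_file(self, data: bytes):
        """Split `data` into fixed-size chunks stored under consecutive
        keys; return fixed-size metadata: (first chunk id, chunk count)."""
        first = self.next_id
        for off in range(0, len(data), CHUNK_SIZE):
            self.kv[self.next_id] = data[off:off + CHUNK_SIZE]
            self.next_id += 1
        return first, self.next_id - first

    def get_file(self, first, count):
        """Reassemble a file from its metadata; chunks can be fetched in
        parallel in a real system since their keys are known up front."""
        return b"".join(self.kv[first + i] for i in range(count))

store = ChunkStore()
meta = store.put_file(b"hello big file world")
print(meta)                   # (0, 5): 20 bytes become five 4-byte chunks
print(store.get_file(*meta))  # b"hello big file world"
```

Because the metadata is two integers regardless of file size, resumable upload reduces to remembering how many chunks have been written so far.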
Secure Deduplication with Efficient and Reliable Dekey Management with the Pr... (paperpublications3)
Abstract: Deduplication improves storage and bandwidth efficiency but is incompatible with traditional encryption: in the traditional model, each user encrypts their own data with their own master key, so identical data copies belonging to different users yield different ciphertexts, making deduplication impossible. Each copy can be defined at different granularities: a whole file (file-level deduplication) or a data block (block-level deduplication). We apply deduplication to user data to save maintenance cost in the cloud. Beyond the normal encryption and decryption process, we propose a master-key concept together with a DeKey concept. For encryption and decryption we use the Triple Data Encryption Standard (3DES) algorithm, where the plaintext is encrypted three times so that the data is secure and reliable against attackers. We reduce the cost, time, and storage space required for uploading and downloading.
Keywords: deduplication; storage and bandwidth efficiency; traditional encryption.
Title: Secure Deduplication with Efficient and Reliable Dekey Management with the Proof of Ownership
Author: M. Shankari, V. Sheela, S. Rajesh
International Journal of Recent Research in Mathematics Computer Science and Information Technology
ISSN 2350-1022
Paper Publications
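The incompatibility described in the abstract above is commonly resolved with convergent encryption, where the key is derived from the content itself, so identical plaintexts encrypt to identical ciphertexts and remain deduplicable. A minimal sketch (a SHA-256-based keystream stands in for a real block cipher such as AES or the paper's 3DES; do not use this toy cipher in production):

```python
import hashlib

def _keystream(key: bytes, length: int) -> bytes:
    """Toy keystream: concatenated SHA-256(key || counter) blocks."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(out[:length])

def convergent_encrypt(data: bytes):
    key = hashlib.sha256(data).digest()  # key derived from the content itself
    ct = bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))
    return key, ct

def convergent_decrypt(key: bytes, ct: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, len(ct))))

# Two users encrypting the same file produce the same ciphertext, so the
# server can deduplicate without ever seeing the plaintext:
k1, c1 = convergent_encrypt(b"shared report")
k2, c2 = convergent_encrypt(b"shared report")
print(c1 == c2)                    # True: deduplication remains possible
print(convergent_decrypt(k1, c1))  # b"shared report"
```

The known trade-off is that convergent encryption is only as strong as the unpredictability of the content, which is why schemes like the DeKey approach layer additional key management on top.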
Secure Distributed Deduplication Systems with Improved Reliability (1crore projects)
Ijaems apr-2016-7: An Enhanced Multi-layered Cryptosystem Based Secure and Aut... (INFOGAIN PUBLICATION)
Data deduplication is one of the essential data compression techniques for eliminating duplicate copies of repeated data, and it has been widely used in cloud storage to reduce storage space and save bandwidth. To protect the privacy of sensitive data while supporting deduplication, the salt-encryption technique has been proposed to encrypt data before outsourcing. To protect data security more thoroughly, this paper makes the first attempt to formally address the problem of authorized data deduplication. Unlike traditional deduplication systems, the differential privileges of users are considered in the duplicate check in addition to the data itself. We also present several new deduplication constructions that support authorized duplicate checks in a hybrid cloud architecture. Security analysis demonstrates that the scheme is secure in terms of the definitions specified in the proposed security model. We further strengthen the system's security: specifically, we present a forward-looking scheme that supports stronger security by encrypting files with differential privilege keys. We show that the proposed authorized duplicate check scheme incurs minimal overhead compared to normal operations.
Key Management Scheme for Secure Group Communication in WSN with Multiple Gr... (csandit)
Security is one of the inherent challenges in the area of Wireless Sensor Network (WSN). At
present, majority of the security protocols involve massive iterations and complex steps of
encryptions thereby giving rise to degradation of quality of service. Many WSN applications are
based on secure group communication. In this paper, we have proposed a scheme for secure
group key management with simultaneous multiple groups. The scheme uses a key-based
approach for managing the groups, and we show that membership change events can be
handled with low storage, communication, and computation costs. The scheme also
authenticates messages communicated within and among the groups.
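A minimal sketch of the rekey-on-membership-change idea (a generic versioned group key, not the paper's actual multi-group scheme; key derivation here is a simple hash of an assumed master secret):

```python
import hashlib

class Group:
    """Toy group-key manager: the group key is rotated on every membership
    change so departed members cannot read subsequent traffic."""
    def __init__(self, master_secret: bytes):
        self.master = master_secret
        self.version = 0
        self.members = set()

    def _rekey(self):
        self.version += 1  # bumping the version changes the derived key

    def join(self, node):
        self.members.add(node)
        self._rekey()

    def leave(self, node):
        self.members.discard(node)
        self._rekey()

    def key(self) -> bytes:
        return hashlib.sha256(
            self.master + self.version.to_bytes(4, "big")
        ).digest()

g = Group(b"master-secret")
g.join("sensor-1"); g.join("sensor-2")
old = g.key()
g.leave("sensor-1")       # forward secrecy: the key changes on leave
print(g.key() != old)     # True
```

In a real WSN scheme the new key must also be distributed to remaining members without the leaver learning it, which is where the storage/communication cost trade-offs the abstract mentions arise.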
Exploring The Dynamic Integration of Heterogeneous Services (csandit)
The increasing need for services to handle a plethora of business needs within the enterprise landscape has led to a rise in the development of heterogeneous services across the digital world. In today's digital economy, services are the key components of communication and collaboration among enterprises, both internally and externally. Since the Internet has stimulated the use of services, different services have been developed for different purposes, and those services are heterogeneous because incompatible approaches were relied upon at both the conceptual and exploitation phases. The proliferation of heterogeneous services in the digital world therefore brings a range of challenges, particularly in the integration layer. Traditionally, integration is achieved using gateways, which require considerable configuration effort. Many approaches and frameworks have been developed by different researchers to overcome these challenges, but to date the challenge of integrating heterogeneous services with minimal user involvement remains. In this paper, we explore the challenges and characteristics of heterogeneous services with the aim of developing a seamless approach that will alleviate some of these challenges in the near future. It is of utmost importance to understand the challenges and characteristics of heterogeneous services before developing a mechanism that could eliminate them.
CORRELATION OF EIGENVECTOR CENTRALITY TO OTHER CENTRALITY MEASURES: RANDOM, S... (csandit)
In this paper, we thoroughly investigate the correlation of eigenvector centrality with five other centrality measures: degree centrality, betweenness centrality, clustering coefficient centrality, closeness centrality, and farness centrality, across various types of network (random, small-world, and real-world). For each network, we compute these six centrality measures and determine the correlation coefficients between them. Our analysis suggests that degree centrality and eigenvector centrality are highly correlated, regardless of the type of network. Furthermore, eigenvector centrality also correlates highly with betweenness centrality on random and real-world networks; however, this is inconsistent on small-world networks, probably owing to their power-law distribution. Finally, eigenvector centrality is shown to be distinct from clustering coefficient centrality, closeness centrality, and farness centrality in all tested cases. These findings could lead to further correlation analysis of multiple centrality measures in the near future.
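The degree/eigenvector correlation the abstract reports is easy to reproduce on a toy graph: compute eigenvector centrality by power iteration, degree centrality by row sums, and the Pearson coefficient between the two vectors (a minimal pure-Python sketch, not the paper's experimental setup).

```python
def eigenvector_centrality(adj, iters=200):
    """Power iteration on a 0/1 adjacency matrix given as nested lists."""
    n = len(adj)
    x = [1.0] * n
    for _ in range(iters):
        y = [sum(adj[i][j] * x[j] for j in range(n)) for i in range(n)]
        norm = sum(v * v for v in y) ** 0.5
        x = [v / norm for v in y]
    return x

def degree_centrality(adj):
    return [sum(row) for row in adj]

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

# Small undirected graph: a 5-node path with a chord (edges 0-1,1-2,2-3,3-4,1-3)
n = 5
adj = [[0] * n for _ in range(n)]
for u, v in [(0, 1), (1, 2), (2, 3), (3, 4), (1, 3)]:
    adj[u][v] = adj[v][u] = 1

r = pearson(degree_centrality(adj), eigenvector_centrality(adj))
print(round(r, 3))  # close to 1: degree and eigenvector centrality track each other
```

On this graph the high-degree nodes are exactly the ones most connected to other well-connected nodes, which is why the two measures agree so closely; the paper's contribution is checking how far that agreement extends across network types.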
WIRELESS SENSORS INTEGRATION INTO INTERNET OF THINGS AND THE SECURITY PRIMITIVES (csandit)
The common vision of smart systems today is by and large associated with one single concept, the Internet of Things (IoT), where the whole physical infrastructure is linked with intelligent monitoring and communication technologies through the use of wireless sensors. In such an intelligent, vibrant system, sensors are connected to send useful information and control instructions via distributed sensor networks. Wireless sensors offer easy deployment and better device flexibility compared with a wired setup. With the rapid technological development of sensors, wireless sensor networks (WSNs) will become a key technology for the IoT and an invaluable resource for realizing its vision. It is also important to consider whether the sensors of a WSN should be completely integrated into the IoT or not. New security challenges arise when heterogeneous sensors are integrated into the IoT, and security needs to be considered from a global perspective, not just at a local scale. This paper gives an overview of sensor integration into the IoT, some major security challenges, and a number of security primitives that can be applied to protect sensor data over the Internet.
Basic Evaluation of Antennas Used in Microwave Imaging for Breast Cancer Dete... (csandit)
Microwave imaging is one of the most promising techniques for the diagnosis and screening of breast cancer, and a medical imaging approach currently under development. It is non-ionizing, non-invasive, sensitive to tumors, specific to cancers, and low-cost. Microwave measurements can be carried out either in the frequency domain or in the time domain. In order to develop a clinically viable medical imaging system, it is important to understand the characteristics of the microwave antenna. In this paper we investigate some antenna characteristics and discuss limitations of existing and proposed systems.
EVALUATION AND STUDY OF SOFTWARE DEGRADATION IN THE EVOLUTION OF SIX VERSIONS... (csandit)
When a software system evolves, new requirements may be added, existing functionalities modified, or structural changes introduced. During such evolution, disorder may be introduced, complexity increased, or unintended consequences produced, with ripple effects across the system. JHotDraw (JHD), a well-tested and widely used open-source Java-based graphics framework developed with best software engineering practices, was selected as the test suite. Six versions were profiled and data collected dynamically, from which two metrics were derived, namely entropy and the software maturity index. These metrics were used to investigate degradation as the software transitions from one version to another. The study observed that entropy tends to decrease as the software evolves. It was also found that a software product attains its lowest decrease in entropy at the turning point where its highest maturity index is attained, implying a possible correlation between the point of lowest decrease in entropy and the software maturity index.
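Both metrics are directly computable. The software maturity index (as defined in IEEE Std 982.1) is SMI = (M_T - (F_a + F_c + F_d)) / M_T, where M_T is the number of modules in the current release and F_a, F_c, F_d count modules added, changed, and deleted since the previous one; Shannon entropy is computed over a distribution (the release figures below are hypothetical):

```python
from math import log2

def maturity_index(modules_total, added, changed, deleted):
    """IEEE 982.1 software maturity index: closer to 1.0 means more stable."""
    return (modules_total - (added + changed + deleted)) / modules_total

def shannon_entropy(counts):
    """Entropy in bits of the empirical distribution given by `counts`."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * log2(p) for p in probs)

# Hypothetical release: 200 modules, of which 5 added, 10 changed, 2 deleted
print(maturity_index(200, 5, 10, 2))      # 0.915
# Entropy of a uniform vs. a skewed distribution over four outcomes:
print(shannon_entropy([25, 25, 25, 25]))  # 2.0 bits (maximum for 4 outcomes)
print(shannon_entropy([97, 1, 1, 1]))     # much lower: mass is concentrated
```

The study's observation then amounts to tracking these two numbers across the six JHotDraw releases and noting where the entropy decrease bottoms out relative to the SMI peak.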
Load Rebalancing with Security for Hadoop File System in CloudIJERD Editor
[1]A file system is used for the organization, storage,[1]retrieval, naming, sharing, and protection of
files. Distributed file system has certain degrees of transparency to the user and the system such as access
transparency,[2] location transparency, failure transparency, heterogeneity, replication transparency etc.
[1][3]NFS (Network File System), RFS (Remote File Sharing), Andrew File System (AFS) are examples of
Distributed file system. Distributed file systems are generally used for cloud computing applications based on
[4] the MapReduce programming model. A MapReduce program consist of a Map () procedure that performs
filtering and a Reduce () procedure that performs a summary operation. However, in a cloud computing
environment, sometimes failure is occurs and nodes may be upgraded, replaced, and added in the system.
Therefore load imbalanced problem arises. To solve this problem, load rebalancing algorithm is implemented in
this paper so that central node should not overloaded. The implementation is done in hadoop distributed file
system. As apache hadoop is used, security issues are arises. To solve these security issues and to increase
security, [20] Kerberos authentication protocol is implemented to handle multiple nodes. This paper shows real
time implementation experiment on cluster with result.
ANALYSIS OF ATTACK TECHNIQUES ON CLOUD BASED DATA DEDUPLICATION TECHNIQUESneirew J
ABSTRACT
Data in the cloud is increasing rapidly. This huge amount of data is stored in various data centers around the world. Data deduplication allows lossless compression by removing the duplicate data. So, these data centers are able to utilize the storage efficiently by removing the redundant data. Attacks in the cloud computing infrastructure are not new, but attacks based on the deduplication feature in the cloud computing is relatively new and has made its urge nowadays. Attacks on deduplication features in the cloud environment can happen in several ways and can give away sensitive information. Though, deduplication feature facilitates efficient storage usage and bandwidth utilization, there are some drawbacks of this feature. In this paper, data deduplication features are closely examined. The behavior of data deduplication depending on its various parameters are explained and analyzed in this paper.
A self destruction system for dynamic group data sharing in cloudeSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Integrity Privacy to Public Auditing for Shared Data in Cloud ComputingIJERA Editor
In cloud computing, many mechanisms have been proposed to allow not only a data owner itself but also a public verifier to efficiently perform integrity checking without downloading the entire data from the cloud, which is referred to as public auditing . In these mechanisms, data is divided into many small blocks, where each block is independently signed by the owner; and a random combination of all the blocks instead of the whole data is retrieved during integrity checking .However, public auditing for such shared data— while preserving identity privacy — remains to be an open challenge. Here, we only consider how to audit the integrity of shared data in the cloud with static groups. It means the group is pre-defined before shared data is created in the cloud and the membership of users in the group is not changed during data sharing. The original user is responsible for deciding who is able to share her data before outsourcing data to the cloud. Another interesting problem is how to audit the integrity of shared data in the cloud with dynamic groups — a new user can be added into the group and an existing group member can be revoked during data sharing.
Cloud Data De Duplication in Multiuser Environment DeposM2ijtsrd
Nowadays, cloud computing produce a huge amount of sensitive data, such as personal Information, financial data, and electronic health records, social media data. And that causes duplication of data and that suffers to storage and performance of cloud system. Data De Duplication has been widely used to eliminate redundant storage overhead in cloud storage system to improve IT resources efficiency. However, traditional techniques face a great challenge in big data De Duplication to strike a sensible tradeoff between the conflicting goals of scalable De Duplication throughput and high duplicate elimination ratio. De Duplication reduces the space and bandwidth requirements of data storage services, and is most effective when applied across multiple users, a common practice by cloud storage offerings. I study the privacy implications of cross user De Duplication. Thus, an interesting challenging problem is how to deduplicate multimedia data with a multi user environment and propose an efficient system to overcome these types of problems. In this paper, I introduce a new primitive called Depos M2 which gives a partial positive answer for these challenging problem. I propose two phases De Duplication and proof of storage, where the first one allows De Duplication of data and letter one allows proof of storage that means give permission to respective user i.e. owner of that file. Mr. Kaustubh Borate | Prof. Bharti Dhote "Cloud Data De-Duplication in Multiuser Environment: DeposM2" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-5 , August 2019, URL: https://www.ijtsrd.com/papers/ijtsrd25270.pdfPaper URL: https://www.ijtsrd.com/engineering/computer-engineering/25270/cloud-data-de-duplication-in-multiuser-environment-deposm2/mr-kaustubh-borate
BFC: High-Performance Distributed Big-File Cloud Storage Based On Key-Value S...dbpublications
Nowadays, cloud-based storage services are rapidly growing and becoming an emerging trend in data storage field. There are many problems when designing an efficient storage engine for cloud-based systems with some requirements such as big-file processing, lightweight meta-data, low latency, parallel I/O, Deduplication, distributed, high scalability. Key-value stores played an important role and showed many advantages when solving those problems. This paper presents about Big File Cloud (BFC) with its algorithms and architecture to handle most of problems in a big-file cloud storage system based on key value store. It is done by proposing low-complicated, fixed-size meta-data design, which supports fast and highly-concurrent, distributed file I/O, several algorithms for resumable upload, download and simple data Deduplication method for static data. This research applied the advantages of ZDB - an in-house key value store which was optimized with auto-increment integer keys for solving big-file storage problems efficiently. The results can be used for building scalable distributed data cloud storage that support big-file with size up to several terabytes.
Secure Deduplication with Efficient and Reliable Dekey Management with the Pr...paperpublications3
Abstract: De-Duplication improves Storage and bandwidth efficiency is incompatible with traditional encryption. In traditional model encryption requires different users to encrypt their own data with their own master key, thus identical data copies of different users will lead to different cipher texts, making de-duplication impossible. Each such copy can be defined based on different granularities: it may refer to either a whole file (i.e., file level deduplication), or data block (i.e., block-level deduplication). To applying deduplication to user data to save maintenance cost in cloud. Apart from normal encryption and decryption process we have proposed Master key concept with DeKey concept. For Encryption and Decryption we have used Triple Data Encryption Standard Algorithm where the plain text is encrypted triple times with the key so that the data is secure and reliable from hackers. We reduced the cost and time in uploading and downloading with storage space.Keywords: De-Duplication improves Storage; bandwidth efficiency is incompatible with traditional encryption.
Title: Secure Deduplication with Efficient and Reliable Dekey Management with the Proof of Ownership
Author: M. Shankari, V. Sheela, S. Rajesh
International Journal of Recent Research in Mathematics Computer Science and Information Technology
ISSN 2350-1022
Paper Publications
Ijaems Apr-2016-7: An Enhanced Multi-layered Cryptosystem Based Secure and Aut... (INFOGAIN PUBLICATION)
Data deduplication is one of the essential data compression techniques for eliminating duplicate copies of repeating data, and it has been widely used in cloud storage to reduce storage space and save bandwidth. To protect the privacy of sensitive data while supporting deduplication, the salt encryption technique has been proposed to encrypt the data before outsourcing. To protect data security in a better way, this paper makes the first attempt to formally address the problem of authorized data deduplication. Unlike traditional deduplication systems, the derivative privileges of users are considered in the duplicate check besides the data itself. We also present several new deduplication constructions that support the authorized duplicate check in a hybrid cloud architecture. Security analysis demonstrates that the scheme is secure in terms of the definitions specified in the proposed security model. We further enhance the security of our system: specifically, we present a forward-looking scheme that provides stronger security by encrypting files with differential privilege keys. We show that our proposed authorized duplicate check scheme incurs minimal overhead compared to normal operations.
Key Management Scheme for Secure Group Communication in WSN with Multiple Gr... (csandit)
Security is one of the inherent challenges in the area of Wireless Sensor Networks (WSN). At
present, the majority of security protocols involve massive iterations and complex encryption
steps, thereby degrading quality of service. Many WSN applications are based on secure group
communication. In this paper, we propose a scheme for secure group key management with
simultaneous multiple groups. The scheme uses a key-based approach for managing the groups, and
we show that membership change events can be handled with low storage, communication, and
computation cost. The scheme also offers authentication for the messages communicated within
and among the groups.
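As a toy illustration of key-based group rekeying (a hypothetical sketch, not the paper's protocol), the snippet below derives each group key from a master secret and an epoch counter that is bumped on every membership change, so a member who left at an earlier epoch cannot compute later keys from the material it holds.

```python
import hashlib

class GroupKeyManager:
    """Toy epoch-based group rekeying (hypothetical scheme): the group key is
    derived from a master secret and an epoch counter incremented on every
    join/leave, so departed members cannot derive future keys."""

    def __init__(self, master_secret):
        self.master = master_secret
        self.epoch = 0

    def current_key(self):
        # Key = H(master_secret || epoch); deterministic for current members.
        return hashlib.sha256(self.master + self.epoch.to_bytes(4, "big")).digest()

    def on_membership_change(self):
        self.epoch += 1  # rekey; the new key is pushed to current members only
```

In a real WSN scheme, the per-epoch key would be distributed to the remaining members over pairwise secure channels rather than recomputed from a shared master secret.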
Exploring the Dynamic Integration of Heterogeneous Services (csandit)
The increasing need for services to handle a plethora of business needs within the enterprise
landscape has led to the development of heterogeneous services across the digital world. In
today's digital economy, services are the key components for communication and collaboration
among enterprises, both internally and externally. Since the Internet has stimulated the use of
services, different services have been developed for different purposes, and these services are
heterogeneous because incompatible approaches were relied upon at both the conceptual and
exploitation phases. The proliferation of heterogeneous services in the digital world therefore
brings a range of challenges, most notably in the integration layer. Traditionally, integration
is achieved using gateways, which require considerable configuration effort. Many approaches
and frameworks have been developed to overcome these challenges, but to date the challenge of
integrating heterogeneous services with minimal user involvement remains. In this paper, we
explore the challenges and characteristics of heterogeneous services with the aim of developing
a seamless approach that will alleviate some of these challenges in the near future. It is of
utmost importance to understand these challenges and characteristics before developing a
mechanism that could eliminate them.
CORRELATION OF EIGENVECTOR CENTRALITY TO OTHER CENTRALITY MEASURES: RANDOM, S... (csandit)
In this paper, we thoroughly investigate the correlation of eigenvector centrality with five
centrality measures, namely degree centrality, betweenness centrality, clustering coefficient
centrality, closeness centrality, and farness centrality, across various types of network
(random, small-world, and real-world). For each network, we compute the six centrality
measures, from which the correlation coefficients are determined. Our analysis suggests that
degree centrality and eigenvector centrality are highly correlated, regardless of the type of
network. Furthermore, eigenvector centrality also correlates highly with betweenness on random
and real-world networks; however, the correlation is inconsistent on small-world networks,
probably owing to their power-law distribution. Finally, eigenvector centrality proves distinct
from clustering coefficient centrality, closeness centrality, and farness centrality in all
tested cases. These findings could lead to further correlation analysis of multiple centrality
measures in the near future.
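The degree/eigenvector correlation reported above can be reproduced in miniature. The sketch below runs power iteration on a small hand-made graph (iterating with A+I rather than A so the iteration does not oscillate on bipartite graphs) and computes the Pearson correlation with degree centrality; the example graph and iteration count are illustrative assumptions, not data from the paper.

```python
def eigenvector_centrality(adj, iters=200):
    """Power iteration for eigenvector centrality on an adjacency list.
    Using (A + I) leaves the principal eigenvector unchanged while
    avoiding oscillation on bipartite graphs."""
    n = len(adj)
    x = [1.0] * n
    for _ in range(iters):
        y = [x[i] + sum(x[j] for j in adj[i]) for i in range(n)]
        top = max(y)
        x = [v / top for v in y]   # normalise by the largest entry
    return x

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    sa = sum((u - ma) ** 2 for u in a) ** 0.5
    sb = sum((v - mb) ** 2 for v in b) ** 0.5
    return cov / (sa * sb)

# A small star-plus-tail graph (illustrative): node 0 is a hub.
adj = [[1, 2, 3, 4], [0], [0], [0], [0, 5], [4]]
degree = [len(nb) for nb in adj]
r = pearson(degree, eigenvector_centrality(adj))
```

Even on this tiny graph the two measures correlate strongly, mirroring the paper's observation that degree and eigenvector centrality move together.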
WIRELESS SENSORS INTEGRATION INTO INTERNET OF THINGS AND THE SECURITY PRIMITIVES (csandit)
The common vision of smart systems today is by and large associated with one single concept,
the Internet of Things (IoT), in which the whole physical infrastructure is linked with
intelligent monitoring and communication technologies through the use of wireless sensors. In
such an intelligent, vibrant system, sensors are connected to send useful information and
control instructions via distributed sensor networks. Wireless sensors offer easy deployment
and better flexibility of devices compared to a wired setup. With the rapid technological
development of sensors, wireless sensor networks (WSNs) will become the key technology for the
IoT and an invaluable resource for realizing its vision. It is also important to consider
whether the sensors of a WSN should be completely integrated into the IoT or not. New security
challenges arise when heterogeneous sensors are integrated into the IoT, and security needs to
be considered from a global perspective, not just at a local scale. This paper gives an
overview of sensor integration into the IoT, some major security challenges, and a number of
security primitives that can be employed to protect sensor data over the Internet.
Basic Evaluation of Antennas Used in Microwave Imaging for Breast Cancer Dete... (csandit)
Microwave imaging is one of the most promising techniques for the diagnosis and screening of
breast cancer, and it is currently under development in the medical field. It is non-ionizing,
non-invasive, sensitive to tumors, specific to cancers, and low-cost. Microwave measurements
can be carried out either in the frequency domain or in the time domain. In order to develop a
clinically viable medical imaging system, it is important to understand the characteristics of
the microwave antenna. In this paper, we investigate some antenna characteristics and discuss
the limitations of existing and proposed systems.
EVALUATION AND STUDY OF SOFTWARE DEGRADATION IN THE EVOLUTION OF SIX VERSIONS... (csandit)
When a software system evolves, new requirements may be added, existing functionalities
modified, or structural changes introduced. During such evolution, disorder may be introduced,
complexity increased, or unintended consequences produced, with ripple effects across the
system. JHotDraw (JHD), a well-tested and widely used open-source Java-based graphics
framework developed with best software engineering practice, was selected as a test suite. Six
versions were profiled and data collected dynamically, from which two metrics were derived,
namely entropy and the software maturity index. These metrics were used to investigate
degradation as the software transitions from one version to another. The study observed that
entropy tends to decrease as the software evolves. It was also found that a software product
attains its lowest decrease in entropy at the turning point where its highest maturity index is
attained, implying a possible correlation between the point of lowest decrease in entropy and
the software maturity index.
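The two metrics named above can be stated concretely. Below is a sketch of one common form of the software maturity index (following IEEE Std 982.1) and of a Shannon entropy over the distribution of changes across modules; the exact definitions used in the study may differ, so treat the formulas as illustrative.

```python
import math

def software_maturity_index(total_modules, added, changed, deleted):
    """SMI = (M_T - (F_a + F_c + F_d)) / M_T: the fraction of modules left
    untouched since the previous version. It approaches 1 as the product
    stabilises across releases."""
    return (total_modules - (added + changed + deleted)) / total_modules

def shannon_entropy(change_counts):
    """Entropy (in bits) of how changes are spread across modules; a lower
    value indicates changes are concentrated in fewer places (more order)."""
    total = sum(change_counts)
    return -sum((c / total) * math.log2(c / total)
                for c in change_counts if c > 0)
```

For example, a release of 120 modules with 5 added, 10 changed, and 3 deleted has SMI = 102/120 = 0.85, and changes split evenly over two modules carry exactly 1 bit of entropy.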
CREATING DATA OUTPUTS FROM MULTI AGENT TRAFFIC MICRO SIMULATION TO ASSIMILATI... (csandit)
The intensive development of traffic engineering and of technologies integrated into vehicles,
roads, and their surroundings brings opportunities for real-time transport mobility modeling.
Based on such a model, it is possible to establish a predictive layer capable of predicting
short- and long-term traffic flow behavior. A real-time model of traffic mobility can be
created from generated data. However, data may have geographical, temporal, or other
constraints, or failures. It is therefore appropriate to develop tools that artificially create
missing data, which can then be assimilated with real data. This paper presents a mechanism
describing strategies for generating artificial data using microsimulations. It describes a
traffic microsimulation based on our multi-agent framework, over which a system for generating
traffic data is built. The system generates data with a structure corresponding to data
acquired in the real world.
Robust Visual Tracking Based on Sparse PCA-L1 (csandit)
Recently, visual tracking based on sparse principal component analysis has drawn much research
attention. As is well known, principal component analysis (PCA) is widely used in data
processing and dimensionality reduction, but PCA is difficult to interpret in practical
applications because every principal component is a linear combination of all variables. In
this paper, a novel visual tracking method based on sparse principal component analysis and L1
tracking is introduced, which we name SPCA-L1 tracking. We first introduce the trivial
templates of the L1 tracking method, which are used to describe noise, into the PCA appearance
model. We then use a lasso model to obtain sparse coefficients, and update the eigenbasis and
mean incrementally to make the method robust to different kinds of changes in the target.
Numerous experiments, in which the targets undergo large changes in pose, scale, and
illumination, demonstrate the effectiveness and robustness of the proposed method.
A LITERATURE REVIEW ON SEMANTIC WEB – UNDERSTANDING THE PIONEERS’ PERSPECTIVE (csandit)
There are various definitions, views, and explanations of the Semantic Web, its usage, and its underlying architecture. However, the various flavours of explanation seem to have strayed from the real purpose of the Semantic Web. In this paper, we review the literature on the Semantic Web based on the original views of its pioneers, including Sir Tim Berners-Lee, Dean Allemang, Ora Lassila, and James Hendler. Understanding the vision of the pioneers of any technology is a cornerstone of its development. We break the Semantic Web down into two approaches, which allows us to reason about why the Semantic Web is not mainstream.
Explore the Effects of Emoticons on Twitter Sentiment Analysis (csandit)
In recent years, Twitter Sentiment Analysis (TSA) has become a hot research topic. The goal of
this task is to analyse the sentiment polarity of tweets. Many machine learning methods have
been developed specifically for TSA, including fully supervised methods, distantly supervised
methods, and combinations of the two. Given that tweets are limited to 140 characters,
emoticons have an important effect on TSA. In this paper, we compare three emoticon
pre-processing methods: emoticon deletion (emoDel), emoticon 2-valued translation (emo2label),
and emoticon explanation (emo2explanation). We then propose a method based on an
emoticon-weight lexicon and conduct experiments with a Naive Bayes classifier to validate the
crucial role emoticons play in guiding the emotional tendency of a tweet. Experiments on real
data sets demonstrate that emoticons are vital to TSA.
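The three pre-processing strategies are simple string transforms, sketched below with a hypothetical three-entry lexicon (the paper's emoticon-weight lexicon is larger and carries weights, not just labels).

```python
# Hypothetical mini-lexicon; the paper's emoticon-weight lexicon is larger.
POLARITY = {":)": "positive", ":D": "positive", ":(": "negative"}
EXPLANATION = {":)": "happy", ":D": "laughing", ":(": "sad"}

def emo_del(tweet):
    """emoDel: delete emoticons entirely."""
    for emo in POLARITY:
        tweet = tweet.replace(emo, "")
    return " ".join(tweet.split())

def emo2label(tweet):
    """emo2label: replace each emoticon with a 2-valued polarity token."""
    for emo, label in POLARITY.items():
        tweet = tweet.replace(emo, "__" + label + "__")
    return " ".join(tweet.split())

def emo2explanation(tweet):
    """emo2explanation: spell each emoticon out as an emotion word."""
    for emo, word in EXPLANATION.items():
        tweet = tweet.replace(emo, word)
    return " ".join(tweet.split())
```

A downstream classifier such as Naive Bayes then sees either no emoticon signal (emoDel), a coarse polarity token (emo2label), or an ordinary sentiment-bearing word (emo2explanation).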
A FLOATING POINT DIVISION UNIT BASED ON TAYLOR-SERIES EXPANSION ALGORITHM AND... (csandit)
Floating-point division, even though an infrequent operation in the traditional sense, is
indispensable in a range of non-traditional applications such as k-means clustering and QR
decomposition, to name a few. In such applications, hardware support for floating-point
division boosts the performance of the entire system. In this paper, we present a novel
architecture for a floating-point division unit based on the Taylor-series expansion algorithm.
We show that the Iterative Logarithmic Multiplier is well suited for use in this architecture,
and we propose an implementation of the powering unit that can calculate an odd power and an
even power of a number simultaneously while incurring little hardware overhead compared to the
Iterative Logarithmic Multiplier.
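The core idea behind Taylor-series division can be sketched in software: compute a reciprocal from a truncated series after range reduction, then divide by multiplying. This is an illustrative sketch of the general technique, not the paper's hardware architecture; the series centre (1.5) and term count are assumptions chosen so the series converges quickly.

```python
import math

def reciprocal(b, terms=30):
    """Approximate 1/b with a truncated Taylor (geometric) series. b is
    range-reduced to m in [1, 2); the series is centred at 1.5 so that
    1/m = (1/1.5) * sum_k (-x)^k with x = (m - 1.5)/1.5 and |x| <= 1/3."""
    if b == 0:
        raise ZeroDivisionError("reciprocal of zero")
    sign = -1.0 if b < 0 else 1.0
    m, e = math.frexp(abs(b))        # abs(b) = m * 2**e with m in [0.5, 1)
    m, e = m * 2.0, e - 1            # shift so m is in [1, 2)
    x = (m - 1.5) / 1.5
    acc = 0.0
    for _ in range(terms):           # Horner evaluation of the partial sum
        acc = 1.0 + (-x) * acc
    return sign * math.ldexp(acc / 1.5, -e)

def divide(a, b):
    """Division implemented as multiplication by the reciprocal."""
    return a * reciprocal(b)
```

Because |x| never exceeds 1/3 after range reduction, each extra term shrinks the error by a factor of about 3, which is why hardware units can get away with evaluating only a few powers of x.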
FUTURE OF PEER-TO-PEER TECHNOLOGY WITH THE RISE OF CLOUD COMPUTING (ijp2p)
Peer-to-peer (P2P) networking emerged as a disruptive business model, displacing server-based
networks in a short span of time. P2P technologies are on the verge of becoming all-purpose
tools for developing social networking applications. Over the past seventeen years, research on
P2P computing and systems has received an enormous amount of attention in academia and
industry, and P2P rose to commercially triumphant systems on the Internet. It represents the
best incarnation of the end-to-end argument, the frequently disputed design philosophy that
guided the design of the Internet. The puzzling question, then, is why research on P2P
computing is now fading from the spotlight, suffering a fall as dramatic as its rise to
popularity. This paper takes a quick look at past results in peer-to-peer computing, focusing
on what led to its rise, what contributed to its commercial success, and what has led to the
lack of interest. The paper concludes by introducing cloud computing as a new paradigm for
peer-to-peer computing.
A Study of Index Poisoning in Peer-to-Peer Systems (IJCI JOURNAL)
P2P file sharing systems are the most popular form of file sharing to date. Their decentralized
architecture attains fast file transfers, but their peer anonymity and lack of authentication
have made them a gold mine for malicious attacks. One of the leading sources of disruption in
P2P file sharing systems is the index poisoning attack, which seeks to corrupt, with false
data, the indexes used to reference files available for download. To protect users from these
attacks, it is important to find solutions that eliminate or mitigate the effects of index
poisoning. This paper analyzes index poisoning attacks, their uses, and the solutions proposed
to defend against them.
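One family of defenses makes the index self-certifying: files are keyed by a hash of their content, so bogus bytes advertised under a real key are detected on download and the offending provider can be penalised. The sketch below is a toy illustration of that idea (class and behaviour are hypothetical, not a deployed P2P protocol).

```python
import hashlib

class VerifiedIndex:
    """Toy self-certifying index: files are keyed by the SHA-256 of their
    content, so a poisoned entry (wrong bytes served under a key) is
    detected on download and its provider is blacklisted."""

    def __init__(self):
        self.entries = {}      # content hash -> set of provider ids
        self.blacklist = set()

    def advertise(self, content_hash, provider):
        if provider not in self.blacklist:
            self.entries.setdefault(content_hash, set()).add(provider)

    def verify_download(self, content_hash, provider, data):
        if hashlib.sha256(data).hexdigest() == content_hash:
            return True
        # Mismatch: the provider served poisoned data for this key.
        self.blacklist.add(provider)
        self.entries.get(content_hash, set()).discard(provider)
        return False
```

Note this only defends the content/key binding; it does not stop an attacker from flooding the index with many bogus keys, which is why reputation and rate-limiting defenses are usually discussed alongside it.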
Peer to Peer Network with its Architecture, Types, and Examples!! (DigitalThinkerHelp)
Here we cover everything about peer-to-peer networks, including their architecture and types, along with several examples of peer-to-peer networks.
Analysis of threats and security issues evaluation in mobile P2P networks (IJECEIAES)
Technically, a mobile P2P network can be considered a distributed system (like a community) in which nodes (users) can share all or some of their software and hardware resources (application stores, processing time, storage, network bandwidth) with other nodes over the Internet, and these resources are directly accessible by the nodes in the system without the need for a central coordination node. In the proposed network architecture, all nodes are symmetric in their functions. In this work, the security issues of mobile P2P network architectures (web threats, attacks, and encryption) are discussed in depth; we then propose different approaches, analyse and evaluate these security issues, and submit solutions to the problems posed by threats and other attacks, which will become serious issues as networks grow, especially given the mobility attribute of current P2P networks.
SECURITY PROPERTIES IN AN OPEN PEER-TO-PEER NETWORK (IJNSA Journal)
This paper addresses new requirements for the confidentiality, integrity, and availability properties of peer-to-peer domains of resources. The enforcement of security properties in an open peer-to-peer network remains an open problem, as the literature has mainly contributed to the availability of resources and the anonymity of users. The paper proposes a novel architecture that eases the administration of a peer-to-peer network. It considers a network of safe peer-to-peer clients, in the sense that a common client software is shared by all participants to cope with the sharing of various resources associated with different security requirements. However, our proposal deals with possible malicious peers that attempt to compromise the requested security properties. Although the safety of an open peer-to-peer network cannot be formally guaranteed, since an end user has privileges on the target host, our solution provides several advanced security enforcements. First, it enables the requested security properties of the various shared resources to be formally defined. Second, it evaluates the trust and reputation of the requesting peer by sending challenges that test the fairness of its peer-to-peer security policy. Moreover, it proposes an advanced Mandatory Access Control that enforces the required peer-to-peer security properties through an automatic projection of the requested properties onto SELinux policies; thus, the SELinux system of the requesting peer is automatically configured with respect to the required peer-to-peer security properties. This prevents a malicious peer from using ordinary applications, such as a video player, to access confidential files, such as a video requiring fee payment. Since the malicious peer could try to abuse the system, SELinux challenges and traces are also used to evaluate the fairness of the requester.
The paper ends with different research perspectives, such as a dedicated MAC system for the peer-to-peer client and honeypots for testing the security of the proposed peer-to-peer infrastructure.
Adaptive Sliding Piece Selection Window for BitTorrent Systems (Waqas Tariq)
Peer-to-peer BitTorrent (P2P BT) systems are used for video-on-demand (VoD) services. Such systems can face a scalability problem that leaves media servers unable to respond to users' requests on time. Current sliding-window methods face problems such as waiting for a window's pieces to be fully downloaded before sliding to the next pieces, and determining the window size, which affects video streaming performance. In this paper, a modification of BT systems is developed to select video pieces based on a sliding-window method. The developed system uses two sliding windows, High and Low, running simultaneously. Each window collects video pieces based on the user's available bandwidth, the video bit rate, and a parameter that determines the media player's buffered seconds. System performance is measured and evaluated against other piece-selection sliding-window methods, and results show that our method outperforms the benchmarked methods.
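The two-window idea can be sketched as a piece-selection policy: a High (urgent) window just ahead of the playback position and a Low (prefetch) window beyond it, with missing pieces requested from High first. The class below is an illustrative sketch with fixed window sizes, not the paper's bandwidth/bit-rate/buffer formulas.

```python
class DualWindowSelector:
    """Sketch of dual-sliding-window piece selection for streaming: request
    missing pieces from the High (urgent) window first, then from the Low
    (prefetch) window. Window sizes here are illustrative parameters."""

    def __init__(self, total_pieces, high_size=4, low_size=8):
        self.total = total_pieces
        self.high_size = high_size
        self.low_size = low_size
        self.have = set()          # pieces already downloaded

    def next_piece(self, play_pos):
        high = range(play_pos, min(play_pos + self.high_size, self.total))
        low = range(high.stop, min(high.stop + self.low_size, self.total))
        for window in (high, low):
            for piece in window:
                if piece not in self.have:
                    return piece
        return None                # both windows complete: slide forward
```

Unlike a single window, playback can continue as soon as the High window is full, while the Low window keeps the pipeline busy prefetching ahead.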
On client’s interactive behaviour to design peer selection policies for bitto... (IJCNCJournal)
Peer-to-peer swarming protocols have proven to be very efficient for content replication over
the Internet. This fact has motivated proposals to adapt these protocols to meet the
requirements of on-demand streaming systems. The vast majority of these proposals focus on
modifying the piece and peer selection policies of the original protocols; nonetheless, more
attention has often been given to the piece selection policy than to the peer selection policy.
Within this context, this article proposes a simple algorithm to serve as the basis for peer
selection policies of BitTorrent-like protocols in interactive scenarios. To this end, we
analyze clients' interactive behaviour when accessing real multimedia systems. This analysis
consists of examining workloads of real content providers and assessing three important
metrics, namely temporal dispersion, spatial dispersion, and object position popularity. These
metrics are then used as the main guidelines for writing the algorithm. To the best of our
knowledge, this is the first time that clients' interactive behaviour has been specifically
considered to derive an algorithm for peer selection policies. The article concludes with key
challenges and possible future work in this research field.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
This is a brief presentation on BitTorrent technology.
(Caution: avoid using the site mentioned in the slides for downloading torrent files; it may not be safe.)
(Thanks to Soumya and my other colleagues for their help.)
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Computer Science & Information Technology (CS & IT)
2. BACKBONE TECHNOLOGIES
Prior to diving deeper into the survey, it is important to have some knowledge of the underlying
technologies of the reviewed services. While the technologies used by cloud-based storage
systems are quite straightforward, P2P infrastructures are more complex in nature. This is why
we briefly describe important P2P protocols in this section.
It should be noted that P2P can have slightly different meanings in different contexts. The
definition used in this paper follows [3]. In a P2P system, each peer provides the service it is
intended to by sharing its resources (e.g., storage and processing power). Peers communicate
directly, without the need for an intermediate node.
A pure P2P system is fully decentralized, but partially centralized P2P systems also exist. The
best-known example of such a system is the BitTorrent protocol, wherein a central server, called a
tracker, keeps track of the peers currently downloading each file. Other peers can then request the
list of these peers from the tracker and contact them directly.
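This tracker role can be illustrated with a short, self-contained Python sketch (the class and method names are hypothetical, not part of any real BitTorrent implementation): the tracker merely maps each file's identifier to the set of peers currently downloading it.

```python
# Toy sketch of a BitTorrent-style tracker: it only maps each file's
# identifier (its "info hash") to the peers currently swarming on it.
class Tracker:
    def __init__(self):
        self.swarms = {}  # info_hash -> set of (host, port) peer addresses

    def announce(self, info_hash, peer):
        """A peer announces itself and receives the other known peers."""
        swarm = self.swarms.setdefault(info_hash, set())
        others = list(swarm - {peer})
        swarm.add(peer)
        return others

tracker = Tracker()
tracker.announce("abc123", ("10.0.0.1", 6881))  # first peer gets an empty list
peers = tracker.announce("abc123", ("10.0.0.2", 6881))
# peers now contains ("10.0.0.1", 6881); the two peers contact each other
# directly to exchange file data -- the tracker never touches the file.
```

Note that the file itself never flows through the tracker; it only performs peer discovery, which is what makes BitTorrent only partially centralized.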
For a truly decentralized P2P network to exist, nodes first need to find other nodes in the
network (peer discovery). In a local network, a simple scan could reveal other nodes that
participate, or are interested in participating, in a local P2P network. Over a wider network (such
as the Internet), however, this is a non-trivial task, as it would be infeasible for a node to scan the
entire network to find the nodes that share its interests.
There are a few protocols that allow peers to discover each other in a P2P network. In the
following subsection, we review Pastry [4], a P2P discovery and routing protocol; we then review
BitTorrent, the de facto P2P standard.
2.1. Pastry
In [4], Rowstron and Druschel presented Pastry, an object location and routing protocol for
large-scale P2P systems. Pastry performs application-level node lookup and routing over a large
network connected via the Internet: when a node receives a message along with a key, it routes
the message through live nodes to the node whose nodeID is numerically closest to the key.
Each node in Pastry keeps track of its immediate neighbors, and notifies other nodes of any
changes in the network, such as when a new node joins or an existing node leaves.
Pastry is completely decentralized, and aims to reduce the routing steps that messages have to
take to reach the destination. The expected number of routing steps in Pastry is O(log N), where
N is the number of Pastry nodes in the network.
A nodeID is assigned randomly, and ranges from 0 to 2^128 − 1, allowing nodes to be “diverse in
geography, ownership, jurisdiction, etc.” A node is said to be “close” to another node if its
nodeID is numerically close to the key it receives along with the message. The message is
routed to one of the closest such nodes in Pastry, which is usually a node near the originator.
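Pastry's core routing rule, delivering to the live node whose nodeID is numerically closest to the key, can be sketched as follows. This is an illustrative toy over a small set of known nodes; real Pastry reaches that node in O(log N) hops via prefix-based routing tables, which we omit here.

```python
# Illustrative sketch of Pastry's basic rule: deliver a message to the
# live node whose nodeID is numerically closest to the message's key.
ID_SPACE = 2 ** 128  # nodeIDs range from 0 to 2^128 - 1

def numerically_closest(live_nodes, key):
    """Return the nodeID closest to `key` in the circular ID space."""
    def distance(node_id):
        d = abs(node_id - key)
        return min(d, ID_SPACE - d)  # distance wraps around the ring
    return min(live_nodes, key=distance)

nodes = [10, 40, 90, 200]  # tiny IDs chosen for readability
assert numerically_closest(nodes, 55) == 40
```

A real Pastry node does not know every live node; it applies this closeness rule only over the candidates in its routing table and leaf set, forwarding the message one hop closer at each step.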
An example of an application of Pastry is PAST [5], a large-scale P2P file storage utility
developed by the same authors. More on PAST is detailed in the next section.
2.2. BitTorrent
According to the official specification [6], BitTorrent is a P2P file-sharing protocol used to
transfer files of any size across the web; according to [7], it was created by Bram Cohen to
replace standard FTP. It uses a server (called a tracker) that tracks the files and aids the
clients in downloading and combining the chunks (pieces, according to [6]) of the file into the
original file. There are, however, “trackerless” implementations of the protocol, which create a
truly decentralized environment for BitTorrent-based P2P file transfers.
Unlike a typical P2P network, BitTorrent has each client upload pieces while downloading from
other peers, promoting fairness, better availability of files, and a boost in performance.
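The piece mechanism underlying this behavior can be sketched in Python. The helper names below are our own; what is grounded in the specification [6] is that the metainfo carries one SHA-1 digest per piece, so a downloader can verify each piece independently, no matter which peer supplied it:

```python
import hashlib

PIECE_SIZE = 4  # bytes, for illustration; real torrents use e.g. 256 KiB pieces

def split_into_pieces(data, piece_size=PIECE_SIZE):
    return [data[i:i + piece_size] for i in range(0, len(data), piece_size)]

def piece_hashes(pieces):
    # BitTorrent's metainfo stores one SHA-1 digest per piece, so a
    # downloader can verify every piece independently, from any peer.
    return [hashlib.sha1(p).digest() for p in pieces]

data = b"hello, bittorrent!"
pieces = split_into_pieces(data)
hashes = piece_hashes(pieces)

# A downloader that fetched `pieces` (possibly from different peers, in
# any order) checks each one against the published hash, then joins them:
assert all(hashlib.sha1(p).digest() == h for p, h in zip(pieces, hashes))
assert b"".join(pieces) == data
```

Because verification is per piece, a client can safely download different pieces from different, mutually untrusting peers, which is precisely what enables the simultaneous upload/download exchange described above.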
3. FILE SYNCHRONIZATION SYSTEMS
This section presents the most notable file synchronization systems. We distinguish two major
categories: cloud-based file synchronization systems, and P2P-based file synchronization
systems.
3.1. Cloud-based File Synchronization Systems
A cloud-based synchronization system (also called a cloud-based storage service) stores users’
files on a central server, owned and governed by a certain entity (e.g., an enterprise or a small
company). Users upload their files to this server from one device and download them on another
(or on the same device, should the user lose the original file). Users can also share their files
with others, and depending on the service provided, a cloud-based synchronization service can be
extended into a collaboration platform.
These services are provided across many different platforms, using web as well as native
application development technologies as their front-end. Some of them provide desktop
applications that act as drives connected to the PC, to provide a seamless interaction with the
actual cloud drive. These services usually employ a freemium model: a fixed amount of initial
storage is offered for free with a limited feature set, while users can upgrade to a higher plan
with more storage and additional features. A good comparison of some of the most popular cloud
storage and synchronization services can be found in [8]. This model makes cloud services much
more accessible and convenient for users.
3.1.1. Google Drive
Google Drive [2] is a file storage and synchronization service by Google. At the time of writing
this paper, new users to the service get 15 GB of storage for free, with various monthly
subscription plans available for more storage [9].
Users can not only store and synchronize their files using Google Drive, they can also view,
modify, delete, and in some instances collaborate on them with other users, using either the web
interface or the native applications available on major platforms. Google Drive supports a
plethora of file formats for a user to store, synchronize, and work with.
3.1.2. OneDrive
OneDrive [10] by Microsoft is a file storage and synchronization service with similar features to
Google Drive, and is powered by Microsoft Azure [11], Microsoft’s cloud computing platform.
As of January 2016, OneDrive has reduced its free storage allowance for new users from 15 GB
to 5 GB; users who had obtained 15 GB previously retain it. Like Google Drive, OneDrive allows
users to upgrade their storage using one of various monthly subscription plans [12].
Along with file storage and synchronization, OneDrive allows users to view, update and delete
the files, and collaborate on them using Office Online - a free online Microsoft Office utility.
3.1.3. iCloud Drive
A cloud storage and synchronization service similar to Google Drive and OneDrive, iCloud
Drive [13] by Apple offers comparable features to users. In terms of file storage capacity, iCloud
Drive offers 5 GB of free space to new users, like Microsoft’s OneDrive, with upgrade plans
available [14].
According to [15], iCloud has utilized both Amazon Web Services (AWS) by Amazon [16] and
Microsoft Azure [11] since 2011, when iCloud first launched. However, numerous reports state
that Apple is turning to Google’s Google Cloud Platform [17] to provide some of iCloud’s
services [18] [19] [20].
3.1.4. Dropbox
Dropbox [1] is one of the most popular file storage and synchronization services, created not by
a large entity such as those mentioned above, but by a startup company of the same name.
Dropbox initially offers 2 GB of storage space to new users, with options to upgrade to 1 TB
with a monthly subscription (or unlimited storage for Business users) [21].
3.2. P2P-based File Synchronization Systems
A P2P-based synchronization system, unlike a cloud-based one, is a decentralized system
wherein each peer in the network acts as both a server and a client to synchronize files between a
user’s authorized devices. In such a system, files are broken down into encrypted pieces, and
each peer uploads a certain number of pieces to, and downloads pieces from, other nodes. This
ensures that the files are almost always available for synchronization, and that no single peer
holds a complete file, thus enforcing the privacy and security of users’ data. Furthermore, the
load is divided among the connected peers rather than borne by a single server, which improves
the performance of the synchronization process.
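This chunk-encrypt-scatter idea can be sketched as follows. The sketch is an illustrative toy with hypothetical names: the XOR "cipher" below merely stands in for the real encryption such systems use and is not cryptographically secure.

```python
import hashlib
from itertools import cycle

def chunk(data, size):
    return [data[i:i + size] for i in range(0, len(data), size)]

def toy_encrypt(piece, key):
    # Stand-in for a real cipher: XOR the piece with a keystream derived
    # from the key alone (so the same call decrypts). Illustrative only;
    # NOT cryptographically secure.
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(piece):  # extend keystream for longer pieces
        stream += hashlib.sha256(stream).digest()
    return bytes(b ^ s for b, s in zip(piece, stream))

def scatter(pieces, peers):
    # Round-robin placement: no single peer receives every piece.
    placement = {p: [] for p in peers}
    for piece, peer in zip(pieces, cycle(peers)):
        placement[peer].append(piece)
    return placement

data = b"a private document to synchronize"
pieces = [toy_encrypt(p, b"user-key") for p in chunk(data, 8)]
placement = scatter(pieces, ["peerA", "peerB", "peerC"])
```

Each peer thus stores only encrypted fragments it cannot read on its own, while the owner, who holds the key, can fetch the pieces back from whichever peers are online and reassemble the file.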
Like centralized cloud synchronization services, P2P service providers offer a similar business
model: a free but limited plan, with additional storage available for purchase. Unlike centralized
cloud storage and synchronization services, however, it is much more efficient and convenient to
assemble a private P2P cloud with potentially unlimited storage, since the total capacity is the
sum of the storage shared by each node: many nodes amount to a lot of storage.
Below are some of the examples of such a system.
3.2.1. PAST
PAST is an application of Pastry, developed by Pastry’s own authors. PAST extends Pastry’s
capabilities to form a peer-to-peer file storage system that hashes a file’s name, together with its
owner’s name, to calculate the file’s fileID. The fileID is used as the Pastry key in PAST.
3.2.2. Symform
Symform by Quantum [22] is a popular P2P-based file synchronization service, in which the
participating nodes form a cloud in a decentralized network: each node contributes its resources
(storage space) while receiving a certain amount of space from other nodes in return.
In Symform, files are broken down into blocks, encrypted, and spread across the network. This
way, the files are always available for synchronization, privacy is maintained, security is
enforced, and synchronization performance is enhanced across the network.
3.2.3. Resilio Connect
Resilio Connect (formerly Sync, by BitTorrent, Inc.) [23] creates a P2P cloud using BitTorrent
among a user’s devices, rather than including external nodes into the network. This makes the
cloud even more secure, but reduces the reliability of the synchronization service, as offline
nodes cannot transmit or receive files.
4. DISCUSSION
Table 1 compares the existing technologies and services we mentioned in the previous section.
As can be seen from the table, P2P-based file synchronization systems tend to offer more value
to consumers than cloud-based services in terms of storage capacity.
P2P-based systems offer potentially unlimited storage, as each node in the network acts as a
server as well as a client. Furthermore, since the pieces of files are replicated on multiple nodes,
even if a node is (or a set of nodes containing those pieces are) offline, downloaders can obtain
those pieces from the online nodes, thus making the files more readily available for
synchronization, and the network more reliable. All nodes in the network need to go offline at the
same time for the network to be completely down. Moreover, since the pieces are encrypted, and
scattered across the network, security and privacy are ensured in such systems.
Resilio Connect is the only P2P-based synchronization system reviewed here that is powered by
BitTorrent, and it inherits almost all the benefits of the other P2P-based systems. Although the
table shows that Resilio Connect may not match the performance and file availability of the other
systems, a BitTorrent-powered synchronization system can, in fact, be developed with these
advantages.
In what follows, we discuss why we expect Resilio Connect, and more generally BitTorrent-
based systems, to be the go-to technology for file synchronization systems.
Table 1. Comparison of existing file synchronization technologies and services.
4.1. BitTorrent Advantages
The main reasons we focus on BitTorrent in this paper, to argue for the superiority of
BitTorrent-powered, P2P-based file sharing and synchronization systems, are the following:
4.1.1. Popularity
According to statistics released by BitTorrent, Inc. [24], the protocol has 45 million daily active
users, and a staggering 170 million monthly active users.
BitTorrent is very popular among the younger population, with 63% of the users aged 34 and
below [24]. Furthermore, most of these users are “educated and tech-savvy” males, according to
BitTorrent.
It should be noted that, although they come from the official website, these statistics are
incomplete: collecting statistics on BitTorrent is difficult, given that it is used in decentralized,
and often private, networking environments.
4.1.2. Availability
Since BitTorrent is a P2P network, the complete file is almost always available to be
downloaded, as long as a single peer is online in the network (assuming it contains the whole
file). Furthermore, since the files are divided into pieces, individual pieces can be downloaded
from the online nodes. Missing pieces can be downloaded from nodes once they come online.
Comparing this to a client/server architecture, wherein a single server holds the file to be
downloaded, one can see how much more reliable a P2P network, and the BitTorrent protocol in
particular, can be.
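This availability argument can be quantified with a simple model: if each of r peers holding a given piece is independently online with probability q, the piece is available with probability 1 − (1 − q)^r. The figures below are illustrative assumptions, not measurements:

```python
def piece_availability(q, r):
    """Probability that at least one of r replica holders is online,
    assuming each is independently online with probability q."""
    return 1 - (1 - q) ** r

# A single server online 90% of the time vs. a piece replicated on five
# peers that are each online only 50% of the time:
single_server = piece_availability(0.9, 1)  # 0.9
p2p_replicas = piece_availability(0.5, 5)   # 1 - 0.5**5 = 0.96875
assert p2p_replicas > single_server
```

Even with individually unreliable peers, modest replication pushes piece availability past that of a single well-provisioned server, which is the intuition behind the reliability claim above.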
4.1.3. Performance
Several research works have focused on the capabilities of P2P networks, many of which report
the performance gains obtained when downloading files using BitTorrent.
Xia and Muppala [25] surveyed BitTorrent performance, including the load on centralized
servers when BitTorrent is used alongside them. The survey showed how BitTorrent reduces the
load on a server and increases download performance, presenting various tests and analysis
results on the performance of the BitTorrent protocol.
As noted above, along with the performance gains, BitTorrent, being a P2P protocol, also reduces
the server load by making each node in the network act like a server. Moreover, the network
adjusts accordingly to new nodes joining it, or nodes going offline, thus making the network
more scalable.
4.1.4. Scalability
In a P2P system, each client is a potential server; that is, increasing demand translates into
increasing supply. This results in the unique scalability that characterizes P2P systems. This is
unlike a typical client/server architecture, in which a server has to handle any change in the
number of connected clients: an increase in clients increases the server load, whereas a decrease
leaves provisioned capacity idle, making the system less efficient.
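This contrast can be made concrete with a back-of-the-envelope bandwidth model. The model is idealized (it ignores protocol overhead and asymmetric links) and all figures are assumptions chosen for illustration:

```python
def per_client_rate_server(server_upload, n_clients):
    # A fixed server upload capacity is shared among all clients.
    return server_upload / n_clients

def per_client_rate_p2p(peer_upload, n_clients):
    # Every joining client also uploads, so aggregate capacity grows
    # with n; in this idealized model per-client throughput is constant.
    return (peer_upload * n_clients) / n_clients

# A 100 Mbps server vs. peers uploading 5 Mbps each:
assert per_client_rate_server(100, 10) == 10.0
assert per_client_rate_server(100, 100) == 1.0   # degrades as clients join
assert per_client_rate_p2p(5, 10) == per_client_rate_p2p(5, 100) == 5.0
```

The central server's per-client throughput shrinks linearly with demand, while the P2P swarm's holds steady, which is exactly the "demand brings supply" property described above.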
4.2. BitTorrent Limitations
BitTorrent may inherit the advantages of a P2P network, but it comes with its own limitations.
The most prominent limitation of the protocol is its security: there are a number of well-known
security weaknesses in BitTorrent [26], [27], in the areas of authentication, authorization, and
trust and reputation.
We reviewed the available P2P file synchronization technologies that have implemented security
in their systems. One of the best examples of such a service is Symform, which encrypts file
chunks before replicating them on the network [28]. These systems provide confidentiality and
data integrity by encrypting the file chunks, and authenticate users with a username and
password combination prior to sharing or downloading files.
5. CONCLUSIONS
We reviewed various file sharing and synchronization technologies and services in this paper. We
also compared and discussed these technologies and services, and presented our arguments on
why we believe that P2P-based, or more specifically, BitTorrent powered file synchronization
systems are superior to traditional cloud-based file synchronization systems, and should be the
go-to technologies for reliable and secure file sharing and synchronization services. Future work
should focus on enabling online collaboration over P2P-based synchronization systems.
REFERENCES
[1] Dropbox, “Dropbox,” http://www.dropbox.com, [Online; accessed 14-June-2016].
[2] Google, “Google drive,” https://drive.google.com, [Online; accessed 14-June-2016].
[3] G. Camarillo, “Peer-to-peer (p2p) architecture: definition, taxonomies, examples, and applicability,”
2009.
[4] A. Rowstron and P. Druschel, “Pastry: Scalable, decentralized object location, and routing for large-
scale peer-to-peer systems,” in Middleware 2001. Springer, 2001, pp. 329–350.
[5] P. Druschel and A. Rowstron, “Past: A large-scale, persistent peer-to-peer storage utility,” in Hot
Topics in Operating Systems, 2001. Proceedings of the Eighth Workshop on. IEEE, 2001, pp. 75–80.
[6] B. Cohen, “The BitTorrent protocol specification version 11031,” http://www.bittorrent.org/beps/bep
0003.html, 2013, [Online; accessed 14-June-2016].
[7] J. Fonseca, B. Reza, and L. Fjeldsted, “Bittorrent protocol – btp/1.0,”
http://jonas.nitro.dk/bittorrent/bittorrent-rfc.html, 2005, [Online; accessed 14-June-2016].
[8] S. Mitroff, “Onedrive, dropbox, google drive and box: Which cloud storage service is right for you?”
http://www.cnet.com/how-to/onedrivedropbox-google-drive-and-box-which-cloud-storage-service-is-
right-foryou/, 2016, [Online; accessed 14-June-2016].
[9] Google, “Google drive storage plans and pricing,”
https://support.google.com/drive/answer/2375123?hl=en, [Online; accessed 14-June-2016].
[10] Microsoft, “Onedrive,” http://onedrive.live.com, [Online; accessed 14-June-2016].
[11] ——, “Microsoft azure: Cloud computing platform & services,” https://azure.microsoft.com/en-us/,
[Online; accessed 14-June-2016].
[12] ——, “Microsoft onedrive plans,” https://onedrive.live.com/about/enUS/plans/, [Online; accessed 14-
June-2016].
[13] Apple, “icloud,” http://www.icloud.com, [Online; accessed 14-June-2016].
[14] ——, “icloud storage plans and pricing,” https://support.apple.com/enae/HT201238, [Online;
accessed 14-June-2016].
[15] ——, “ios security guide,” https://www.apple.com/business/docs/iOS Security Guide.pdf, [Online;
accessed 14-June-2016].
[16] Amazon, “Amazon web services - cloud computing services,” https://aws.amazon.com/, [Online;
accessed 14-June-2016].
[17] Google, “Google cloud computing, hosting services & apis,” https://cloud.google.com/, [Online;
accessed 14-June-2016].
[18] MacRumors, “Apple inks deal to use google cloud platform for some icloud services,”
http://www.macrumors.com/2016/03/16/apple-icloudgoogle-cloud-platform/, [Online; accessed 14-
June-2016].
[19] CRN, “Cloud makes for strange bedfellows: Apple signs on with google, cuts spending with aws,”
http://www.crn.com/news/cloud/300080062/cloud-makes-for-strangebedfellows-apple-signs-on-with-
google-cuts-spending-with-aws.htm, [Online; accessed 14-June-2016].
[20] B.Insider, “Google nabs apple as a cloud customer,”
http://www.businessinsider.com/google-nabs-apple-as-a-cloudcustomer-2016-3, [Online; accessed
14-June-2016].
[21] Dropbox, “Dropbox plans comparison,” https://www.dropbox.com/business/plans-comparison,
[Online; accessed 14-June-2016].
[22] Symform, “Symform: Free online backup service,” https://www.symform.com/, [Online; accessed
14-June-2016].
[23] Resilio, “Bittorrent sync,” http://www.getsync.com, [Online; accessed 14-June-2016].
[24] BitTorrent, “Bittorrent - advertise with us,” http://www.bittorrent.com/lang/en/advertise, [Online;
accessed 14-June-2016].
[25] R. L. Xia and J. K. Muppala, “A survey of bittorrent performance,” Communications Surveys &
Tutorials, IEEE, vol. 12, no. 2, pp. 140–158, 2010.
[26] R. Guha and D. Purandare, “Security issues in bittorrent like p2p streaming systems,” SIMULATION
SERIES, vol. 38, no. 4, p. 423, 2006.
[27] M. Barcellos, “Security issues and perspectives in p2p systems: from gnutella and bittorrent,”
http://webhost.laas.fr/TSF/IFIPWG/Workshops&Meetings/53/workshop/ 8.Barcellos.pdf, 2008,
[Online; accessed 14-June-2016].
[28] Symform, “The most secure cloud storage — symform,”
http://www.symform.com/how-it-works/security, [Online; accessed 14-June-2016].
AUTHORS
Zulqarnain Mehdi (Zul) is currently pursuing his MSc degree in IT (Software Systems)
from Heriot-Watt University, Dubai. He is currently employed as a Software Engineer in
a Dubai-based company.
Zul’s research interests include cloud storage systems, file sharing, peer-to-peer, and
BitTorrent.
Hani RAGAB received the MSc degree from the University of Technology of Compiègne
(UTC), France, in 2003, and the PhD degree from the same university in 2007. He is
currently a lecturer at Heriot-Watt University, United Kingdom.
His research interests include malware analysis, access control systems, peer-to-peer, and
digital forensics.