The advent of Big Data has presented new challenges in data security. There is an increasing need for research into technologies that can handle vast volumes of data and secure them efficiently; current technologies for securing data are slow when applied at that scale. This paper discusses the security aspects of Big Data.
Big Data Security and Privacy - Presentation to AFCEA Cyber Symposium 2014 (kevintsmith)
In our era of “Big Data”, organizations are collecting, analyzing, and making decisions based on analysis of massive amounts of data sets from various sources, and security in this process is becoming increasingly more important. With regulations like HIPAA and other privacy protection laws, securing access and determining releasability of data sets is critical. Organizations using Big Data Analytics solutions face challenges, as most of today’s solutions were not designed with security in mind. This presentation focuses on challenges, use cases, and practical real-world solutions related to securing and preserving privacy in Big Data Analytics solutions, addressing authorization, differential privacy, and more.
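Differential privacy, one of the topics the presentation addresses, is commonly realized through the Laplace mechanism: add noise scaled to a query's sensitivity divided by the privacy budget epsilon. A minimal stdlib-only sketch (function names are illustrative, not from the presentation):

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw one sample from a Laplace(0, scale) distribution via inverse CDF."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count with epsilon-differential privacy.

    Adding or removing one record changes a count by at most `sensitivity`,
    so Laplace noise with scale sensitivity/epsilon suffices.
    """
    return true_count + laplace_noise(sensitivity / epsilon)

# Smaller epsilon -> stronger privacy guarantee -> noisier released answer.
random.seed(42)
print(private_count(1000, epsilon=0.1))
```

The released value is the true count plus calibrated noise, so aggregate analytics stay useful while any single individual's presence in the data is masked.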
Expanded Top Ten Big Data Security and Privacy Challenges (Tom Kirby)
There is some really great stuff coming out of the CSA working & research groups these days. I found this particular research paper from the big data working group to be extremely relevant and useful
Practical risk management for the multi cloud (Ulf Mattsson)
This session takes a practical approach to IT risk management, covering multi-cloud environments, the Verizon Data Breach Investigations Report (DBIR), and how enterprises are losing ground in the fight against persistent cyber-attacks. We simply cannot catch the bad guys until it is too late, and the picture is not improving: Verizon reports conclude that less than 14% of breaches are detected by internal monitoring tools.
We will review the JP Morgan Chase data breach, where hackers were inside the bank's network for months undetected. Network configuration errors are inevitable even at the largest banks, as Capital One showed when a hacker gained access to 100 million credit card applications and accounts.
Viewers will also learn about:
- Macro and micro trends in cloud security
- Risks from Quantum Computing and when we should move to alternate forms of encryption
- “Kill Chains” from Lockheed Martin in relation to APT and DDoS attacks
- Risk Management methods from ISACA and other organizations
Speaker: Ulf Mattsson, Head of Innovation, TokenEx
The past, present, and future of big data security (Ulf Mattsson)
One of the biggest remaining concerns regarding Hadoop, perhaps second only to ROI, is security.
While Apache Hadoop and the craze around Big Data seem to have exploded into the market, there are still many more questions than answers about this new environment.
Hadoop is an environment with limited structure, high ingestion volume, massive scalability and redundancy, designed for access to a vast pool of multi-structured data. What’s been missing is new security tools to match.
Read more in this article by Ulf Mattsson, Protegrity CTO, originally published by Help Net Security’s (IN)SECURE Magazine.
An extensive research survey on data integrity and deduplication towards priv... (IJECEIAES)
Owing to the highly distributed nature of cloud storage systems, incorporating a high degree of security for vulnerable data is a challenging task. Apart from various security concerns, data privacy remains one of the unsolved problems in this regard. The prime reason is that existing data privacy approaches do not offer data integrity and secure data deduplication at the same time, both of which are essential for strong resistance against all forms of dynamic threats over cloud and internet systems. Data integrity and data deduplication are thus associated phenomena that influence data privacy. This manuscript therefore discusses the explicit research contributions toward data integrity, data privacy, and data deduplication, highlights the potential open research issues, and discusses possible future directions for addressing the existing problems.
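Both concerns the survey pairs, integrity and deduplication, commonly rest on cryptographic content hashing: identical blocks hash to the same address (so duplicates are stored once), and re-hashing on read detects corruption. A minimal stdlib-only sketch (the store class is illustrative, not from the survey):

```python
import hashlib

class ContentStore:
    """Toy content-addressed store: identical blocks are stored once,
    and each block's address doubles as an integrity check."""

    def __init__(self):
        self._blocks = {}  # digest -> bytes

    def put(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        self._blocks.setdefault(digest, data)  # dedup: keep the first copy only
        return digest

    def get(self, digest: str) -> bytes:
        data = self._blocks[digest]
        # Integrity: recompute the hash and compare with the address.
        if hashlib.sha256(data).hexdigest() != digest:
            raise ValueError("block corrupted")
        return data

store = ContentStore()
a = store.put(b"report.pdf contents")
b = store.put(b"report.pdf contents")   # duplicate upload
print(a == b, len(store._blocks))       # True 1
```

The tension the survey highlights is visible even here: deduplication requires the server to recognize identical plaintexts, which is exactly what strong per-user encryption prevents.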
Shifting Risks and IT Complexities Create Demands for New Enterprise Security... (Booz Allen Hamilton)
Holistic Cyber Risk Management Programs in the Financial Industry Must "Predict and Prevent" in Today's Complex Threat Environment, says new White Paper.
Privacy and Security by Design Spotlight Presentation at the HIMSS Privacy and Security Forum, December 5, 2016. Presented by Jeff R. Livingstone, PhD, Vice President and Global Lead, Life Sciences & Healthcare, Unisys Corporation.
Big data security and privacy issues in the (IJNSA Journal)
Many organizations demand efficient solutions to store and analyze huge amounts of information. Cloud computing, as an enabler, provides scalable resources and significant economic benefits in the form of reduced operational costs. This paradigm raises a broad range of security and privacy issues that must be taken into consideration. Multi-tenancy, loss of control, and trust are key challenges in cloud computing environments. This paper reviews the existing technologies and a wide array of both earlier and state-of-the-art projects on cloud security and privacy. We categorize the existing research according to the layers of the cloud reference architecture: orchestration, resource control, physical resource, and cloud service management. We also review recent developments for enhancing the security of Apache Hadoop, one of the most widely deployed big data infrastructures, and outline frontier research on privacy-preserving data-intensive applications in cloud computing, such as privacy threat modeling and privacy-enhancing solutions.
Data Virtualization for Accelerated Digital Transformation in Banking and Fin... (Denodo)
Watch full webinar here: https://bit.ly/37jIyzf
Presented at FST Media Future of Financial Services, Sydney (Australia)
Watch this session on-demand to understand how data virtualization helps finance companies to:
- Modernise their data infrastructure by providing a virtual approach to accessing, managing, and delivering data
- Implement a centralised governance framework and roll out security measures across the enterprise data infrastructure
- Integrate data from multiple sources to create a 360-degree view into customers’ changing needs and behaviours
Future data security ‘will come from several sources’ (John Davis)
The process of digitisation will become more all-encompassing, but will create new data security needs that can only be met by multiple suppliers, a report has said. - See more at: http://www.storetec.net/news-blog/future-data-security-will-come-from-several-sources
IT Solutions for 3 Common Small Business Problems (Brooke Bordelon)
Many time-consuming IT problems can be side-stepped by establishing a solid network from the get-go rather than playing catch-up with problems as they arise. Find out how with these IT solutions.
Partner Keynote: How Logical Data Fabric Knits Together Data Visualization wi... (Denodo)
Watch full webinar here: https://bit.ly/3aALFEC
Data Visualization and Data Virtualization are complementary technologies. But how do they come together under a common data fabric? This presentation will discuss how organizations are advancing their data fabric capabilities leveraging innovations in these two technologies in areas of self-service, data catalog, cloud, and AI/ML.
Data Lake-based Approaches to Regulatory-Driven Technology Challenges (Booz Allen Hamilton)
Booz Allen Hamilton has found that a data lake-based approach to CA3 requirements is scalable, extensible, and improves the range and sophistication of analyses that can be supported while providing higher levels of data control and security.
Data Security by AES (Advanced Encryption Standard) (YogeshIJTSRD)
Nowadays, with the rapid development of multimedia technologies, research on safety and security is becoming more important. Multimedia data are generated and transmitted through communication channels and wireless media, and the efficiency of encryption based on existing algorithms is not yet satisfactory. Researchers are therefore trying to modify existing algorithms, or even develop new ones, that increase security with little encryption time. In this paper, we present a technique that modifies the AES algorithm to give more security with little additional encryption time, encrypting with a 128-bit key. Theoretical analysis of the proposed algorithm against existing ones demonstrates the novelty of the work. We propose a technique to randomize the key and hide the key data inside an encrypted digital image, using basic concepts of cryptography and digital watermarking; the hidden key itself is also encrypted. We further propose a technique to reposition pixels in order to break the correlation between them. The proposed scheme thus offers a more secure and cost-effective mechanism for encryption. Next on the AES criteria list is good performance: widespread market adoption requires reasonably good performance on a variety of platforms, from easy-to-crack smart cards to the largest servers, including the speed of the encryption and decryption processes as well as the key schedule. Prateek Goyal | Ms. Shalini Bhadola | Ms. Kirti Bhatia, "Data Security by AES (Advanced Encryption Standard)", published in International Journal of Trend in Scientific Research and Development (IJTSRD), ISSN 2456-6470, Volume 5, Issue 5, August 2021. URL: https://www.ijtsrd.com/papers/ijtsrd45073.pdf Paper URL: https://www.ijtsrd.com/computer-science/computer-security/45073/data-security-by-aes-advanced-encryption-standard/prateek-goyal
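The pixel-repositioning step the abstract describes, breaking the correlation between neighbouring pixels, can be sketched as a keyed permutation. This is a stdlib-only illustration of the general idea, not the authors' actual scheme:

```python
import random

def permute_pixels(pixels, key, reverse=False):
    """Keyed permutation of a flat pixel list, breaking spatial correlation.

    The same key regenerates the same shuffle order, so the receiver can
    invert the permutation exactly.
    """
    order = list(range(len(pixels)))
    random.Random(key).shuffle(order)   # deterministic for a given key
    out = [0] * len(pixels)
    if reverse:
        for dst, src in enumerate(order):
            out[src] = pixels[dst]       # undo: send each pixel home
    else:
        for dst, src in enumerate(order):
            out[dst] = pixels[src]       # scramble positions
    return out

image = [10, 20, 30, 40, 50, 60]
scrambled = permute_pixels(image, key="secret")
restored = permute_pixels(scrambled, key="secret", reverse=True)
print(restored == image)  # True
```

In a real scheme the permutation would be driven by a cryptographic keystream rather than Python's PRNG, and combined with substitution so values as well as positions change.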
Originally presented at PRIMMA mobile privacy workshop, Imperial College London, 23 Sep 2010. Updated version given at Security and Privacy in Implantable Medical Devices workshop, EPFL, 1 April 2011, and a German Academy of Engineering conference in Berlin on 26 March 2012. Compact version given at Urban Prototyping conference, Imperial College London, 9 April 2013. Updated with ENISA privacy engineering report for 3rd Latin American Data Protection conference in Medellin, 28-29 May 2015.
Book about data security, organized into six sections:
I. Introduction and Vision
II. Data Confidentiality and Integrity
III. Users and Authorization
IV. Applications
V. Platforms
VI. Summary
with 18 chapters, appendices A through E, and a glossary, covering topics such as Privacy by Design; Privacy, Risks, and Threats; Access Control and Zero Trust Architecture; Trusted Execution Environments; Reversible and Non-Reversible Protection; Computing on Encrypted Data and Secure Multi-party Computing; Machine Learning and Analytics; Quantum Computing; Blockchain; Internet of Things; Hybrid Cloud, CASB, and SASE; International Unicode; Applications and APIs; Discovery and Search; Standards and Regulations; Governance, Guidance, and Frameworks; Best Practices, Roadmap, and Vision; and Trends, Innovation, and Evolution.
A Comparative Study of NoSQL System Vulnerabilities with Big Data (IJMIT Journal)
With the emerging third wave in the development of the Internet, the past year has witnessed huge data exposure, with cyber-attacks increasing to four times the previous year's record. In this digital era, businesses are making use of NoSQL technologies for managing such Big Data. However, NoSQL database systems come with inherent security issues, which pose a major challenge to many organisations worldwide, and there is a paucity of research comprehensively exposing the security threats and vulnerabilities of NoSQL technologies. This paper presents an in-depth study of NoSQL security issues through a detailed comparative study of the security vulnerabilities identified in NoSQL database systems. A set of key security features offered by four commonly used NoSQL database systems, namely Redis, Cassandra, MongoDB, and Neo4j, is analysed with the aim of identifying their strengths and weaknesses. The vulnerabilities associated with built-in security, encryption, authentication/authorization, and auditing that impact Big Data management are compared among these popular NoSQL database systems, and the risk levels are identified. In addition, illustrations of possible injection attacks experimented with on these NoSQL systems are provided. Finally, a high-level framework with considerations for security measures in Big Data deployments is proposed for NoSQL databases. The discussion forms a significant technical contribution for learners, application developers, and Big Data deployers, paving the way for better awareness and management of NoSQL systems in an organization.
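A typical NoSQL injection of the kind such studies illustrate: in MongoDB-style query languages, deserializing untrusted JSON straight into a filter lets an attacker smuggle in operators such as `$ne`. The sketch below uses a toy in-memory matcher rather than a real database, so it only mimics the operator handling:

```python
import json

def matches(doc: dict, query: dict) -> bool:
    """Mimic MongoDB-style matching for plain values and the $ne operator."""
    for field, cond in query.items():
        if isinstance(cond, dict) and "$ne" in cond:
            if doc.get(field) == cond["$ne"]:
                return False
        elif doc.get(field) != cond:
            return False
    return True

users = [{"user": "alice", "password": "s3cret"}]

# Vulnerable: the request body is deserialized straight into the filter.
attacker_body = '{"user": "alice", "password": {"$ne": ""}}'
query = json.loads(attacker_body)
print(any(matches(u, query) for u in users))  # True: login bypassed

# Safer: force credentials to plain strings before building the filter.
safe_query = {"user": str(query["user"]), "password": str(query["password"])}
print(any(matches(u, safe_query) for u in users))  # False
```

The `$ne: ""` condition matches any non-empty password, so the attacker authenticates without knowing it; coercing inputs to scalars (or validating their types) closes this particular hole.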
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
Encryption Data Measurement and Data Security of Hybrid AES and RSA Algorithm (ijtsrd)
Transferring data securely has become vital for the important data of everyday life. The system is a secure data-transfer scheme combining two techniques, the Advanced Encryption Standard (AES) and RSA. The data file is encrypted using AES with a randomly generated key. This key is itself encrypted by RSA using the receiver's public key. The encrypted data file and encrypted key file are combined to form an output file to send to the receiver. At the receiver's side, the file is separated into two parts, the encrypted data file and the encrypted key file. The encrypted key file is decrypted by RSA using the receiver's private key. Once the AES key is recovered, the encrypted data file is decrypted by AES using that key. The system includes a file-transfer process based on Java socket technology. Khet Khet Khaing Oo | Yan Naung Soe, "Encryption Data Measurement and Data Security of Hybrid AES and RSA Algorithm", published in International Journal of Trend in Scientific Research and Development (IJTSRD), ISSN 2456-6470, Volume 3, Issue 6, October 2019. URL: https://www.ijtsrd.com/papers/ijtsrd29243.pdf Paper URL: https://www.ijtsrd.com/computer-science/computer-security/29243/encryption-data-measurement-and-data-security-of-hybrid-aes-and-rsa-algorithm/khet-khet-khaing-oo
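The envelope structure described above, a random symmetric session key for the data wrapped with the receiver's RSA public key, can be sketched with the standard library alone. Here a SHA-256 keystream stands in for AES, and textbook RSA with tiny primes stands in for real RSA; neither stand-in is secure, the point is only the protocol shape:

```python
import hashlib
import os

def stream_encrypt(key: bytes, data: bytes) -> bytes:
    """Toy stand-in for AES: XOR with a SHA-256-derived keystream.
    Applying it twice with the same key decrypts."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(d ^ k for d, k in zip(data, stream))

# Toy textbook RSA with tiny primes (p=61, q=53): n=3233, e=17, d=2753.
N, E, D = 3233, 17, 2753
rsa_enc = lambda m: pow(m, E, N)   # receiver's public key operation
rsa_dec = lambda c: pow(c, D, N)   # receiver's private key operation

# Sender: random session key; encrypted data + encrypted key = envelope.
session_key = os.urandom(16)
ciphertext = stream_encrypt(session_key, b"the secret file")
wrapped_key = [rsa_enc(b) for b in session_key]  # byte-at-a-time, toy only

# Receiver: unwrap the session key, then decrypt the data with it.
recovered_key = bytes(rsa_dec(c) for c in wrapped_key)
print(stream_encrypt(recovered_key, ciphertext))  # b'the secret file'
```

The asymmetry of cost is the design rationale: the slow public-key operation touches only the short key, while the fast symmetric cipher handles the bulk data.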
Multi-dimensional cubic symmetric block cipher algorithm for encrypting big data (journalBEEI)
Advances in internet and social media technology, communication companies, health-care records, and cloud computing applications have made the data around us increase dramatically and continuously, every minute. This ever-renewing big data involves sensitive information such as passwords, PINs, credential numbers, and secret identifiers, which must be maintained with highly secret procedures. The present paper proposes a secret multi-dimensional symmetric cipher with six dimensions, a cubic algorithm. The proposed algorithm uses a substitution-permutation network (SPN) structure and supports a high data-processing rate in six directions. It includes six symmetric round transformations for encrypting the plaintext, where each dimension represents an independent algorithm for big data manipulation. To handle large volumes of data, the proposed cipher encrypts 128-bit data blocks in parallel for each dimension, comprising six algorithms working simultaneously, each on 128 bits, according to various irreducible polynomials of order eight. The round transformation includes four main encryption stages, each with a cubic form of six dimensions.
Different data block size using to evaluate the performance between different... (IJCNC Journal)
Computer networks, whether wired or wireless, are becoming more popular, and with them the importance of security. Different security algorithms and techniques are used to avoid the aforementioned attacks. One such technique is cryptography, which renders data unreadable during transfer so that an eavesdropper has no chance to reclaim the information. Presently, most users transfer data over various media types and the internet, but data sent this way can be intercepted and retrieved. The practical response is to provide security that is reviewed on a time-to-time basis, a topic of ongoing discussion in the security community. This paper compares the run time of three encryption algorithms, DES, AES, and Blowfish; the comparison covers different modes of operation and data block sizes. The results show that the Blowfish algorithm, followed by AES, takes less time to run than DES.
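The measurement method above, timing each cipher over different data block sizes, can be reproduced with a small harness. Since the standard library ships no DES/AES/Blowfish implementations, hash functions stand in for the ciphers below; with a crypto library installed, the real encrypt functions would be dropped into `candidates`:

```python
import hashlib
import timeit

# Stand-in "ciphers" (stdlib has no DES/AES/Blowfish; substitute real
# encrypt functions here to reproduce the paper's comparison).
candidates = {
    "sha256-stand-in": lambda data: hashlib.sha256(data).digest(),
    "md5-stand-in": lambda data: hashlib.md5(data).digest(),
}

def benchmark(block_sizes=(1024, 64 * 1024), repeats=200):
    """Time each candidate over several data block sizes."""
    results = {}
    for name, fn in candidates.items():
        for size in block_sizes:
            data = b"\x00" * size
            elapsed = timeit.timeit(lambda: fn(data), number=repeats)
            results[(name, size)] = elapsed
    return results

for (name, size), t in benchmark().items():
    print(f"{name:16s} {size:7d} bytes: {t:.4f}s for 200 runs")
```

Varying the block size, as the paper does, exposes whether an algorithm's cost is dominated by per-block setup (visible at small sizes) or by raw throughput (visible at large sizes).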
Prevention of Cheating Message based on Block Cipher using Digital Envelope (iosrjce)
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
Overview on Symmetric Key Encryption Algorithms (IJERA Editor)
In today’s digital communication era, sharing of information is increasing significantly, and the information being transmitted is vulnerable to various passive and active attacks. Information security is therefore one of the most challenging aspects of communication. Cryptography is one of the main categories of computer security; it converts information from its normal form into an unreadable form using encryption and decryption techniques. The two main characteristics that identify and differentiate one encryption algorithm from another are its ability to secure the protected data against attacks, and its speed and efficiency in doing so. There are basically two techniques of cryptography: symmetric and asymmetric. This paper presents a detailed study of symmetric encryption techniques.
Text Mining in Digital Libraries using OKAPI BM25 Model (Editor IJCATR)
The emergence of the internet has made vast amounts of information available and easily accessible online. As a result, most libraries have digitized their content in order to remain relevant to their users and to keep pace with the advancement of the internet. However, these digital libraries have been criticized for using inefficient information retrieval models that do not perform relevance ranking on the retrieved results. This paper proposes the use of the Okapi BM25 model in text mining as a means of improving the relevance ranking of digital libraries. Okapi BM25 was selected because it is a probability-based relevance ranking algorithm. A case study was conducted, and the model design was based on information retrieval processes. The performance of the Boolean, vector space, and Okapi BM25 models was compared for data retrieval; relevant ranked documents were retrieved and displayed on the OPAC framework search page. The results revealed that Okapi BM25 outperformed both the Boolean model and the vector space model. This paper therefore proposes using the Okapi BM25 model to reward terms according to their relative frequencies in a document, so as to improve the performance of text mining in digital libraries.
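The Okapi BM25 score the paper adopts weights a term's frequency against document length: score(q, d) = sum over query terms t of IDF(t) * tf * (k1 + 1) / (tf + k1 * (1 - b + b * |d| / avgdl)), with IDF(t) = ln(1 + (N - df + 0.5) / (df + 0.5)). A compact sketch with the customary defaults k1 = 1.5 and b = 0.75:

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each document against the query with Okapi BM25."""
    tokenized = [doc.lower().split() for doc in docs]
    avgdl = sum(len(d) for d in tokenized) / len(tokenized)
    N = len(tokenized)
    scores = []
    for terms in tokenized:
        tf = Counter(terms)
        score = 0.0
        for t in query.lower().split():
            df = sum(1 for d in tokenized if t in d)
            # BM25's IDF; the +1 keeps it positive even for common terms.
            idf = math.log(1 + (N - df + 0.5) / (df + 0.5))
            denom = tf[t] + k1 * (1 - b + b * len(terms) / avgdl)
            score += idf * tf[t] * (k1 + 1) / denom
        scores.append(score)
    return scores

docs = [
    "digital libraries rank retrieved documents",
    "boolean retrieval returns unranked documents",
    "cats and dogs",
]
print(bm25_scores("digital libraries", docs))
```

Unlike a Boolean model, which only answers match/no-match, these scores give the graded relevance ranking the paper argues digital libraries need; the length normalization term (controlled by b) keeps long documents from dominating purely by repeating terms.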
Green Computing, eco trends, climate change, e-waste and eco-friendlyEditor IJCATR
This study focused on the practice of using computing resources more efficiently while maintaining or increasing overall performance. Sustainable IT services require the integration of green computing practices such as power management, virtualization, improving cooling technology, recycling, electronic waste disposal, and optimization of the IT infrastructure to meet sustainability requirements. Studies have shown that costs of power utilized by IT departments can approach 50% of the overall energy costs for an organization. While there is an expectation that green IT should lower costs and the firm’s impact on the environment, there has been far less attention directed at understanding the strategic benefits of sustainable IT services in terms of the creation of customer value, business value and societal value. This paper provides a review of the literature on sustainable IT, key areas of focus, and identifies a core set of principles to guide sustainable IT service design.
Policies for Green Computing and E-Waste in NigeriaEditor IJCATR
Computers today are an integral part of individuals’ lives all around the world, but unfortunately these devices are toxic to the environment given the materials used, their limited battery life and technological obsolescence. Individuals are concerned about the hazardous materials ever present in computers, even if the importance of various attributes differs, and that a more environment -friendly attitude can be obtained through exposure to educational materials. In this paper, we aim to delineate the problem of e-waste in Nigeria and highlight a series of measures and the advantage they herald for our country and propose a series of action steps to develop in these areas further. It is possible for Nigeria to have an immediate economic stimulus and job creation while moving quickly to abide by the requirements of climate change legislation and energy efficiency directives. The costs of implementing energy efficiency and renewable energy measures are minimal as they are not cash expenditures but rather investments paid back by future, continuous energy savings.
Performance Evaluation of VANETs for Evaluating Node Stability in Dynamic Sce...Editor IJCATR
Vehicular ad hoc networks (VANETs) are a favorable area of exploration which empowers the interconnection amid the movable vehicles and between transportable units (vehicles) and road side units (RSU). In Vehicular Ad Hoc Networks (VANETs), mobile vehicles can be organized into assemblage to promote interconnection links. The assemblage arrangement according to dimensions and geographical extend has serious influence on attribute of interaction .Vehicular ad hoc networks (VANETs) are subclass of mobile Ad-hoc network involving more complex mobility patterns. Because of mobility the topology changes very frequently. This raises a number of technical challenges including the stability of the network .There is a need for assemblage configuration leading to more stable realistic network. The paper provides investigation of various simulation scenarios in which cluster using k-means algorithm are generated and their numbers are varied to find the more stable configuration in real scenario of road.
Optimum Location of DG Units Considering Operation ConditionsEditor IJCATR
The optimal sizing and placement of Distributed Generation units (DG) are becoming very attractive to researchers these days. In this paper a two stage approach has been used for allocation and sizing of DGs in distribution system with time varying load model. The strategic placement of DGs can help in reducing energy losses and improving voltage profile. The proposed work discusses time varying loads that can be useful for selecting the location and optimizing DG operation. The method has the potential to be used for integrating the available DGs by identifying the best locations in a power system. The proposed method has been demonstrated on 9-bus test system.
Analysis of Comparison of Fuzzy Knn, C4.5 Algorithm, and Naïve Bayes Classifi...Editor IJCATR
Early detection of diabetes mellitus (DM) can prevent or inhibit complication. There are several laboratory test that must be done to detect DM. The result of this laboratory test then converted into data training. Data training used in this study generated from UCI Pima Database with 6 attributes that were used to classify positive or negative diabetes. There are various classification methods that are commonly used, and in this study three of them were compared, which were fuzzy KNN, C4.5 algorithm and Naïve Bayes Classifier (NBC) with one identical case. The objective of this study was to create software to classify DM using tested methods and compared the three methods based on accuracy, precision, and recall. The results showed that the best method was Fuzzy KNN with average and maximum accuracy reached 96% and 98%, respectively. In second place, NBC method had respective average and maximum accuracy of 87.5% and 90%. Lastly, C4.5 algorithm had average and maximum accuracy of 79.5% and 86%, respectively.
Web Scraping for Estimating new Record from Source SiteEditor IJCATR
Study in the Competitive field of Intelligent, and studies in the field of Web Scraping, have a symbiotic relationship mutualism. In the information age today, the website serves as a main source. The research focus is on how to get data from websites and how to slow down the intensity of the download. The problem that arises is the website sources are autonomous so that vulnerable changes the structure of the content at any time. The next problem is the system intrusion detection snort installed on the server to detect bot crawler. So the researchers propose the use of the methods of Mining Data Records and the method of Exponential Smoothing so that adaptive to changes in the structure of the content and do a browse or fetch automatically follow the pattern of the occurrences of the news. The results of the tests, with the threshold 0.3 for MDR and similarity threshold score 0.65 for STM, using recall and precision values produce f-measure average 92.6%. While the results of the tests of the exponential estimation smoothing using ? = 0.5 produces MAE 18.2 datarecord duplicate. It slowed down to 3.6 datarecord from 21.8 datarecord results schedule download/fetch fix in an average time of occurrence news.
Evaluating Semantic Similarity between Biomedical Concepts/Classes through S...Editor IJCATR
Most of the existing semantic similarity measures that use ontology structure as their primary source can measure semantic similarity between concepts/classes using single ontology. The ontology-based semantic similarity techniques such as structure-based semantic similarity techniques (Path Length Measure, Wu and Palmer’s Measure, and Leacock and Chodorow’s measure), information content-based similarity techniques (Resnik’s measure, Lin’s measure), and biomedical domain ontology techniques (Al-Mubaid and Nguyen’s measure (SimDist)) were evaluated relative to human experts’ ratings, and compared on sets of concepts using the ICD-10 “V1.0” terminology within the UMLS. The experimental results validate the efficiency of the SemDist technique in single ontology, and demonstrate that SemDist semantic similarity techniques, compared with the existing techniques, gives the best overall results of correlation with experts’ ratings.
Semantic Similarity Measures between Terms in the Biomedical Domain within f...Editor IJCATR
The techniques and tests are tools used to define how measure the goodness of ontology or its resources. The similarity between biomedical classes/concepts is an important task for the biomedical information extraction and knowledge discovery. However, most of the semantic similarity techniques can be adopted to be used in the biomedical domain (UMLS). Many experiments have been conducted to check the applicability of these measures. In this paper, we investigate to measure semantic similarity between two terms within single ontology or multiple ontologies in ICD-10 “V1.0” as primary source, and compare my results to human experts score by correlation coefficient.
A Strategy for Improving the Performance of Small Files in Openstack Swift Editor IJCATR
This is an effective way to improve the storage access performance of small files in Openstack Swift by adding an aggregate storage module. Because Swift will lead to too much disk operation when querying metadata, the transfer performance of plenty of small files is low. In this paper, we propose an aggregated storage strategy (ASS), and implement it in Swift. ASS comprises two parts which include merge storage and index storage. At the first stage, ASS arranges the write request queue in chronological order, and then stores objects in volumes. These volumes are large files that are stored in Swift actually. During the short encounter time, the object-to-volume mapping information is stored in Key-Value store at the second stage. The experimental results show that the ASS can effectively improve Swift's small file transfer performance.
Integrated System for Vehicle Clearance and RegistrationEditor IJCATR
Efficient management and control of government's cash resources rely on government banking arrangements. Nigeria, like many low income countries, employed fragmented systems in handling government receipts and payments. Later in 2016, Nigeria implemented a unified structure as recommended by the IMF, where all government funds are collected in one account would reduce borrowing costs, extend credit and improve government's fiscal policy among other benefits to government. This situation motivated us to embark on this research to design and implement an integrated system for vehicle clearance and registration. This system complies with the new Treasury Single Account policy to enable proper interaction and collaboration among five different level agencies (NCS, FRSC, SBIR, VIO and NPF) saddled with vehicular administration and activities in Nigeria. Since the system is web based, Object Oriented Hypermedia Design Methodology (OOHDM) is used. Tools such as Php, JavaScript, css, html, AJAX and other web development technologies were used. The result is a web based system that gives proper information about a vehicle starting from the exact date of importation to registration and renewal of licensing. Vehicle owner information, custom duty information, plate number registration details, etc. will also be efficiently retrieved from the system by any of the agencies without contacting the other agency at any point in time. Also number plate will no longer be the only means of vehicle identification as it is presently the case in Nigeria, because the unified system will automatically generate and assigned a Unique Vehicle Identification Pin Number (UVIPN) on payment of duty in the system to the vehicle and the UVIPN will be linked to the various agencies in the management information system.
Assessment of the Efficiency of Customer Order Management System: A Case Stu...Editor IJCATR
The Supermarket Management System deals with the automation of buying and selling of good and services. It includes both sales and purchase of items. The project Supermarket Management System is to be developed with the objective of making the system reliable, easier, fast, and more informative.
Energy-Aware Routing in Wireless Sensor Network Using Modified Bi-Directional A*Editor IJCATR
Energy is a key component in the Wireless Sensor Network (WSN)[1]. The system will not be able to run according to its function without the availability of adequate power units. One of the characteristics of wireless sensor network is Limitation energy[2]. A lot of research has been done to develop strategies to overcome this problem. One of them is clustering technique. The popular clustering technique is Low Energy Adaptive Clustering Hierarchy (LEACH)[3]. In LEACH, clustering techniques are used to determine Cluster Head (CH), which will then be assigned to forward packets to Base Station (BS). In this research, we propose other clustering techniques, which utilize the Social Network Analysis approach theory of Betweeness Centrality (BC) which will then be implemented in the Setup phase. While in the Steady-State phase, one of the heuristic searching algorithms, Modified Bi-Directional A* (MBDA *) is implemented. The experiment was performed deploy 100 nodes statically in the 100x100 area, with one Base Station at coordinates (50,50). To find out the reliability of the system, the experiment to do in 5000 rounds. The performance of the designed routing protocol strategy will be tested based on network lifetime, throughput, and residual energy. The results show that BC-MBDA * is better than LEACH. This is influenced by the ways of working LEACH in determining the CH that is dynamic, which is always changing in every data transmission process. This will result in the use of energy, because they always doing any computation to determine CH in every transmission process. In contrast to BC-MBDA *, CH is statically determined, so it can decrease energy usage.
Security in Software Defined Networks (SDN): Challenges and Research Opportun...Editor IJCATR
In networks, the rapidly changing traffic patterns of search engines, Internet of Things (IoT) devices, Big Data and data centers has thrown up new challenges for legacy; existing networks; and prompted the need for a more intelligent and innovative way to dynamically manage traffic and allocate limited network resources. Software Defined Network (SDN) which decouples the control plane from the data plane through network vitalizations aims to address these challenges. This paper has explored the SDN architecture and its implementation with the OpenFlow protocol. It has also assessed some of its benefits over traditional network architectures, security concerns and how it can be addressed in future research and related works in emerging economies such as Nigeria.
Measure the Similarity of Complaint Document Using Cosine Similarity Based on...Editor IJCATR
Report handling on "LAPOR!" (Laporan, Aspirasi dan Pengaduan Online Rakyat) system depending on the system administrator who manually reads every incoming report [3]. Read manually can lead to errors in handling complaints [4] if the data flow is huge and grows rapidly, it needs at least three days to prepare a confirmation and it sensitive to inconsistencies [3]. In this study, the authors propose a model that can measure the identities of the Query (Incoming) with Document (Archive). The authors employed Class-Based Indexing term weighting scheme, and Cosine Similarities to analyse document similarities. CoSimTFIDF, CoSimTFICF and CoSimTFIDFICF values used in classification as feature for K-Nearest Neighbour (K-NN) classifier. The optimum result evaluation is pre-processing employ 75% of training data ratio and 25% of test data with CoSimTFIDF feature. It deliver a high accuracy 84%. The k = 5 value obtain high accuracy 84.12%
Hangul Recognition Using Support Vector MachineEditor IJCATR
The recognition of Hangul Image is more difficult compared with that of Latin. It could be recognized from the structural arrangement. Hangul is arranged from two dimensions while Latin is only from the left to the right. The current research creates a system to convert Hangul image into Latin text in order to use it as a learning material on reading Hangul. In general, image recognition system is divided into three steps. The first step is preprocessing, which includes binarization, segmentation through connected component-labeling method, and thinning with Zhang Suen to decrease some pattern information. The second is receiving the feature from every single image, whose identification process is done through chain code method. The third is recognizing the process using Support Vector Machine (SVM) with some kernels. It works through letter image and Hangul word recognition. It consists of 34 letters, each of which has 15 different patterns. The whole patterns are 510, divided into 3 data scenarios. The highest result achieved is 94,7% using SVM kernel polynomial and radial basis function. The level of recognition result is influenced by many trained data. Whilst the recognition process of Hangul word applies to the type 2 Hangul word with 6 different patterns. The difference of these patterns appears from the change of the font type. The chosen fonts for data training are such as Batang, Dotum, Gaeul, Gulim, Malgun Gothic. Arial Unicode MS is used to test the data. The lowest accuracy is achieved through the use of SVM kernel radial basis function, which is 69%. The same result, 72 %, is given by the SVM kernel linear and polynomial.
Application of 3D Printing in EducationEditor IJCATR
This paper provides a review of literature concerning the application of 3D printing in the education system. The review identifies that 3D Printing is being applied across the Educational levels [1] as well as in Libraries, Laboratories, and Distance education systems. The review also finds that 3D Printing is being used to teach both students and trainers about 3D Printing and to develop 3D Printing skills.
Survey on Energy-Efficient Routing Algorithms for Underwater Wireless Sensor ...Editor IJCATR
In underwater environment, for retrieval of information the routing mechanism is used. In routing mechanism there are three to four types of nodes are used, one is sink node which is deployed on the water surface and can collect the information, courier/super/AUV or dolphin powerful nodes are deployed in the middle of the water for forwarding the packets, ordinary nodes are also forwarder nodes which can be deployed from bottom to surface of the water and source nodes are deployed at the seabed which can extract the valuable information from the bottom of the sea. In underwater environment the battery power of the nodes is limited and that power can be enhanced through better selection of the routing algorithm. This paper focuses the energy-efficient routing algorithms for their routing mechanisms to prolong the battery power of the nodes. This paper also focuses the performance analysis of the energy-efficient algorithms under which we can examine the better performance of the route selection mechanism which can prolong the battery power of the node
Comparative analysis on Void Node Removal Routing algorithms for Underwater W...Editor IJCATR
The designing of routing algorithms faces many challenges in underwater environment like: propagation delay, acoustic channel behaviour, limited bandwidth, high bit error rate, limited battery power, underwater pressure, node mobility, localization 3D deployment, and underwater obstacles (voids). This paper focuses the underwater voids which affects the overall performance of the entire network. The majority of the researchers have used the better approaches for removal of voids through alternate path selection mechanism but still research needs improvement. This paper also focuses the architecture and its operation through merits and demerits of the existing algorithms. This research article further focuses the analytical method of the performance analysis of existing algorithms through which we found the better approach for removal of voids
Decay Property for Solutions to Plate Type Equations with Variable CoefficientsEditor IJCATR
In this paper we consider the initial value problem for a plate type equation with variable coefficients and memory in
1 n R n ), which is of regularity-loss property. By using spectrally resolution, we study the pointwise estimates in the spectral
space of the fundamental solution to the corresponding linear problem. Appealing to this pointwise estimates, we obtain the global
existence and the decay estimates of solutions to the semilinear problem by employing the fixed point theorem
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Welocme to ViralQR, your best QR code generator.ViralQR
Welcome to ViralQR, your best QR code generator available on the market!
At ViralQR, we design static and dynamic QR codes. Our mission is to make business operations easier and customer engagement more powerful through the use of QR technology. Be it a small-scale business or a huge enterprise, our easy-to-use platform provides multiple choices that can be tailored according to your company's branding and marketing strategies.
Our Vision
We are here to make the process of creating QR codes easy and smooth, thus enhancing customer interaction and making business more fluid. We very strongly believe in the ability of QR codes to change the world for businesses in their interaction with customers and are set on making that technology accessible and usable far and wide.
Our Achievements
Ever since its inception, we have successfully served many clients by offering QR codes in their marketing, service delivery, and collection of feedback across various industries. Our platform has been recognized for its ease of use and amazing features, which helped a business to make QR codes.
Our Services
At ViralQR, here is a comprehensive suite of services that caters to your very needs:
Static QR Codes: Create free static QR codes. These QR codes are able to store significant information such as URLs, vCards, plain text, emails and SMS, Wi-Fi credentials, and Bitcoin addresses.
Dynamic QR codes: These also have all the advanced features but are subscription-based. They can directly link to PDF files, images, micro-landing pages, social accounts, review forms, business pages, and applications. In addition, they can be branded with CTAs, frames, patterns, colors, and logos to enhance your branding.
Pricing and Packages
Additionally, there is a 14-day free offer to ViralQR, which is an exceptional opportunity for new users to take a feel of this platform. One can easily subscribe from there and experience the full dynamic of using QR codes. The subscription plans are not only meant for business; they are priced very flexibly so that literally every business could afford to benefit from our service.
Why choose us?
ViralQR will provide services for marketing, advertising, catering, retail, and the like. The QR codes can be posted on fliers, packaging, merchandise, and banners, as well as to substitute for cash and cards in a restaurant or coffee shop. With QR codes integrated into your business, improve customer engagement and streamline operations.
Comprehensive Analytics
Subscribers of ViralQR receive detailed analytics and tracking tools in light of having a view of the core values of QR code performance. Our analytics dashboard shows aggregate views and unique views, as well as detailed information about each impression, including time, device, browser, and estimated location by city and country.
So, thank you for choosing ViralQR; we have an offer of nothing but the best in terms of QR code services to meet business diversity!
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
International Journal of Computer Applications Technology and Research
Volume 5, Issue 3, 159-164, 2016, ISSN: 2319-8656
www.ijcat.com
Solve Big Data Security Issues
Abhinandan Banik, IBM India Pvt. Ltd.
Samir Kumar Bandyopadhyay, Department of Computer Science and Engineering, University of Calcutta, Kolkata, India
Abstract: The advent of Big Data has presented new challenges in terms of data security. There is an increasing need for research into technologies that can handle vast volumes of data and secure them efficiently, since current technologies for securing data are slow when applied to huge amounts of data. This paper discusses the security aspect of Big Data.
Keywords: Big Data; Challenges; Security and Privacy
1. INTRODUCTION
Big data is a term for massive data sets with a large, varied and complex structure that are difficult to store, analyze and visualize for further processing or results. The concept of big data has been endemic within computer science since the earliest days of computing. "Big Data" originally meant the volume of data that could not be processed by traditional database methods and tools. Each time a new storage medium was invented, the amount of accessible data exploded because it could be easily accessed.
Big Data is characterized by three aspects: (a) the data are
numerous, (b) the data cannot be categorized into regular
relational databases, and (c) data are generated, captured,
and processed very quickly. Big Data is promising for
business application and is rapidly increasing as a segment
of the IT industry. The variety, velocity and volume of big data amplify security-management challenges beyond those addressed in traditional security management. Big data security challenges arise because of incremental differences, not fundamental ones. The differences between big data environments and traditional data environments include:
- the data collected, aggregated, and analyzed for big data analysis;
- the infrastructure used to store and house big data;
- the technologies applied to analyze structured and unstructured big data.
Big data repositories will likely include information deposited by various sources across the enterprise. This variety of data makes secure access management a challenge: each data source will likely have its own access restrictions and security policies, making it difficult to balance appropriate security for all data sources with the need to aggregate and extract meaning from the data.
Big data is an evolving term that describes any
voluminous amount of structured, semi-structured and
unstructured data that has the potential to be mined for
information. The 3Vs that define Big Data are Variety,
Velocity and Volume.
1) Volume: There has been exponential growth in the volume of data being dealt with. Data comes not just as text, but also as videos, music and large image files. Enterprises now store data in terms of terabytes and even petabytes. With this growth, the architecture and applications built to handle the data need to be re-evaluated.
2) Velocity: Data is streaming in at unprecedented speed and
must be dealt with in a timely manner. RFID tags, sensors
and smart metering are driving the need to deal with torrents
of data in near-real time. Reacting quickly enough to deal
with data velocity is a challenge for most organizations.
3) Variety: Today, data comes in all types of formats: structured, numeric data in traditional databases; information created by line-of-business applications; and unstructured text documents, email, video, audio, stock ticker data and financial transactions. Ways of governing, merging and managing these diverse forms of data are needed.
As computer applications were developed to handle financial and personal data, security became a necessity. The data on computers is an extremely important aspect of processing and transmitting secure data in various applications. Security is the process of preventing and detecting unauthorized use of a computer or network.
Prevention measures help to stop unauthorized users from accessing any part of the computer system; detection helps to determine whether or not someone attempted to break into the system. The goal of cryptography is to make it possible for two people to exchange a message in such a way that other people cannot understand it. There is no end to the number of ways this can be done, but here we are concerned with methods of altering the text in such a way that the recipient can undo the alteration and recover the original text.
2. REVIEW WORKS
Big data refers to large sets of unstructured data, often beyond terabytes and petabytes in size. Big Data [1] is exclusively digital. Data analysis becomes more complicated because of the increased amount of unstructured or semi-structured data. Predictions, analysis, requirements etc. are the main things to be derived from unstructured big data. Big data is characterized by the three Vs, namely Volume, Velocity and Variety. Big data is basically processed by powerful computers, but the scalability limits of such computers constrain the processing.
For the encryption/decryption process, two types of algorithms are considered in modern practice: symmetric key cryptography and asymmetric key cryptography.
Symmetric key cryptography:
Symmetric-key algorithms are those algorithms that use the
same key for both encryption and decryption. Examples of
symmetric key algorithms are Data Encryption Standard
(DES) and Advanced Encryption Standard (AES).
Asymmetric key cryptography:
Asymmetric-key algorithms are those algorithms that use
different keys for encryption and decryption. Examples of
asymmetric-key algorithm are Rivest- Shamir-Adleman
(RSA) and Elliptic curve cryptography (ECC).
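The defining property of a symmetric scheme is that the same key both encrypts and decrypts. A minimal Python sketch of that property, using a toy repeating-key XOR cipher (purely illustrative; this is not DES, AES, or the scheme proposed later in this paper, and XOR with a short repeating key is not secure):

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher: XOR each byte with a repeating key.
    # Because XOR is its own inverse, applying the function twice
    # with the same key recovers the original plaintext.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

msg = b"big data"
ct = xor_cipher(msg, b"k3y")       # encrypt
assert xor_cipher(ct, b"k3y") == msg  # decrypt with the SAME key
```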
Big data deals with storing, processing and retrieving data. Many technologies are used for these purposes, such as memory management, transaction management, virtualization and networking; hence the security issues of these technologies also apply to big data. Big data has many security issues, and these can be addressed using approaches such as data encryption.
The Data Encryption Standard (DES) is a block encryption algorithm: when 64-bit blocks of plain text go in, 64-bit blocks of cipher text come out. It is a symmetric algorithm, meaning the same key is used for encryption and decryption. It uses a 64-bit key, of which 56 bits make up the true key and 8 bits are used for parity. When the DES algorithm is applied to data, it divides the message into blocks and operates on them one at a time. A block is made up of 64 bits and is divided in half, and the halves are put through 16 rounds of transposition and substitution functions. The order and type of the transposition and substitution functions depend on the value of the key that is input to the algorithm. The result is a 64-bit block of cipher text. There are several modes of operation for block ciphers; each mode specifies how the block cipher will operate. One mode may work better in one type of environment for specific functionality, whereas another mode may suit a different environment with totally different requirements.
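The 56/8 split of the DES key described above can be sketched directly: every 8th bit of the 64-bit key is a parity bit and does not key the cipher. A small illustrative helper (bit strings used for clarity; a real implementation would work on bytes):

```python
def des_effective_key(key_bits: str) -> str:
    # Drop every 8th bit (the parity bit, positions 8, 16, ..., 64
    # when counting from 1) of a 64-bit DES key, leaving the
    # 56 bits that actually key the cipher.
    assert len(key_bits) == 64
    return "".join(b for i, b in enumerate(key_bits, start=1) if i % 8 != 0)

print(len(des_effective_key("0" * 64)))  # 56
```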
The Advanced Encryption Standard (AES) [6] is also a symmetric block cipher that can encrypt and decrypt information. The algorithm can use cryptographic keys of 128, 192 and 256 bits to encrypt and decrypt data in blocks of 128 bits. The input and output of the AES algorithm each consist of sequences of 128 bits (digits with values of 0 or 1). These sequences are referred to as blocks, and the number of bits they contain as their length. The bits within such sequences are numbered starting at zero and ending at one less than the sequence length (block length or key length). The number i attached to a bit is known as its index and lies in the range 0 to 127, 0 to 191 or 0 to 255, depending on the block length and key length.
In this paper, we concentrate on developing an innovative cryptosystem for information security that is more advantageous than existing symmetric key standards such as DES and AES in the context of security.
3. PROPOSED METHOD
In a symmetric cryptosystem model the same key is used for encryption and decryption. In the proposed model the main key has two parts: the first part is the substitution key, or sub-key, and the last part is the shuffling key, or suf-key. The last 10 bits of the main key represent the suf-key, and the rest represents the sub-key. There is a restriction on choosing the sub-key: its length must be 8 bits or a multiple of 8, i.e. 16, 24, 32 and so on. The main key of the proposed model is therefore always sub-key + 10 bits, i.e. 18, 26, 34, 42 bits and so on. The proposed model is thus based on a substitution-expansion-shuffling technique.
In this method, the sub-key is first determined from the main key. On the basis of this sub-key the first step, the method of substitution, is performed. Take an 8-bit sub-key, e.g. 00111010. First the key values used to perform the substitution are calculated. For an 8-bit key these values are determined by taking the decimal value of three consecutive bits at a time: the bits at positions 0, 1 and 2 are 001, whose decimal value is 1, giving the first key value; the bits at positions 1, 2 and 3 are 011, giving 3 as the second value; and the same technique is applied up to the last bit position (position 7). What happens when we try to take three bits from position 6, where only 2 bits are left, or from position 7, where only 1 bit is left? In such cases the remaining bits are taken from the beginning of the key, e.g. for position 6 the three bits are 100 and for position 7 they are 000. For an 8-bit sub-key this yields eight key values, each between 0 and 7; the key values of our sub-key are 1, 3, 7, 6, 5, 2, 4 and 0.
As soon as the key values are obtained, a block of eight rows and 1+N columns is created (N is the number of characters in the file). The first column holds the key values of the sub-key and the remaining columns hold the characters of the file. Each character is converted to its corresponding 8-bit binary stream (matching the 8-bit sub-key) and placed into the block serially from top to bottom. After the whole block is filled, its rows are rearranged in ascending order of the key values from top to bottom; the bits of the whole plain text are thereby substituted, yielding the substituted binary bit stream. Because the key values must be determined in this way, the number of sub-key combinations that can be generated for a particular sub-key length is 2^(length of sub-key / 2), i.e. up to 2^4 combinations for an 8-bit sub-key, 2^8 combinations for a 16-bit sub-key, and so on. The key values vary with the sub-key: for a 16-bit key they lie between 0 and 15, for a 24-bit key between 0 and 23, and so on. The characters are converted to binary streams of the same width as the sub-key, i.e. a 16-bit key converts each character to a 16-bit binary stream, a 24-bit key to a 24-bit stream, and so on.
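The wrap-around key-value derivation described above can be sketched in a few lines of Python (an illustrative reading of the paper's description, reproducing its worked example):

```python
def key_values(sub_key: str) -> list[int]:
    # Derive one key value per bit position by reading three
    # consecutive bits (wrapping around at the end of the key)
    # as a binary number.
    n = len(sub_key)
    return [
        int(sub_key[i % n] + sub_key[(i + 1) % n] + sub_key[(i + 2) % n], 2)
        for i in range(n)
    ]

print(key_values("00111010"))  # [1, 3, 7, 6, 5, 2, 4, 0]
```

Running it on the paper's example sub-key 00111010 reproduces the stated key values 1, 3, 7, 6, 5, 2, 4, 0, including the wrap-around cases at positions 6 and 7 (100 and 000).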
The next step after the method of substitution is the method of expansion. As the name suggests, an expansion is performed on the substituted binary bit stream, and it again needs the key values of the sub-key. The key values of our sub-key are 1, 3, 7, 6, 5, 2, 4 and 0. At the beginning the first key value, 1, is taken: from the substituted binary bit stream we take the 1st bit (because the key value is 1) and perform only one expansion (as the value is 1). In the case of an 8-bit sub-key we expand up to 8 bits: since the 1st bit is 1, another 7 bits, 0101010, are appended, alternating 0 and 1 because the last bit is 1. We then move to the next key value, 3, and therefore perform up to three expansions, using the key values from the start (the 1st to the 3rd key value, i.e. 1, 3 and 7). This technique is followed until the substituted binary stream is empty, at which point expansion stops and the next step begins. Expanding up to 8 bits makes the expanded bit stream divisible by 8, which helps convert the binary bit stream to the corresponding cipher text at the end. For sub-keys other than 8 bits a somewhat different technique is followed for expansion; the basic idea is the same, with only one change. For those sub-keys, if a key value is less than half the length of the current sub-key, the bit is expanded to the length of the previous sub-key, and if it is greater than half the length of the current sub-key, to the length of the current sub-key. E.g., for a 16-bit sub-key, a key value of 6 expands to 8 bits, but a key value of 10 expands to 16 bits. It can happen that the key value is 5 but only two bits are left in the substituted binary bit stream for expansion. This is not a major issue during expansion, but it leads to a major problem at decryption, because redundant values are obtained; the redundant information must therefore be kept in order to recover the actual message. This is stored as redundant-bit information at the end of the expanded binary bit stream; in the case of an 8-bit sub-key it can be kept by adding eight extra bits at the end.
With 8 bits (2^8) a number between 0 and 255 can be represented, so the extra 8 bits can hold the redundant-bit information for sub-keys up to 256 bits, since the key value lies between 0 and 255. The information cannot be kept this way, however, if the sub-key size is greater than 256. To solve that problem, 8 bits are used when the sub-key length is less than or equal to 2^8; when the sub-key is greater than 2^8, 16 bits are used, until the sub-key length exceeds 2^16, and so on. That is, 8 bits or a multiple of 8 bits is used to keep the redundant-bit information, depending on the length of the sub-key. The width of the redundant-bit information is thus determined by 2^(8n) (n = 1, 2, 3, ...), where sub-key > 2^(8(n-1)) and sub-key <= 2^(8n).
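The width rule above (8n bits, where 2^(8(n-1)) < sub-key length <= 2^(8n)) can be sketched as a small helper; the function name is ours, chosen for illustration:

```python
def redundancy_field_bits(sub_key_len: int) -> int:
    # Width of the redundant-bit field: 8*n bits, where n is the
    # smallest integer with sub_key_len <= 2**(8*n).
    n = 1
    while sub_key_len > 2 ** (8 * n):
        n += 1
    return 8 * n

print(redundancy_field_bits(8))    # 8  (8-bit sub-key fits in one byte)
print(redundancy_field_bits(256))  # 8  (boundary case, still one byte)
print(redundancy_field_bits(512))  # 16 (sub-key > 2**8 needs two bytes)
```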
Once the binary expansion of the substituted binary bit stream completes and the redundant-bit information is added, we obtain the actual expanded binary bit stream. The next technique, the method of shuffling, then starts, taking the expanded binary bit stream as input. This method has two parameters: which type of shuffling case is to be performed, and how many shuffling rounds will occur. The 10-bit shuffling key determines both; in the proposed model the shuffling cases are restricted to four types and the shuffling rounds to 255. The four cases are as follows: Case 1, Unchanged; Case 2, Swap; Case 3, Inverse; and Case 4, Swap-Inverse.
Case 1. Unchanged: the expanded binary bit stream is taken as the final cipher binary bit stream and converted to its corresponding cipher text.
Case 2. Swap: the expanded binary bit stream is taken as the initial cipher binary bit stream and bit swapping is performed on the basis of the key values. The key values of our sub-key were 1, 3, 7, 6, 5, 2, 4 and 0. Let the expanded bit length be 24 and suppose 3 shuffling rounds are performed. At the beginning the first key value, 1, is taken. In the 1st round the expanded binary bit stream is divided into two parts, the left half containing 1 bit (as the key value is 1) and the right half containing the remaining 23 bits, and the two parts are swapped. This forms a new intermediate cipher binary bit stream, which is taken as input for the next round. In the 2nd round the next key value, 3, is taken; the stream is again divided into two parts, the left half containing 3 bits and the right half the remaining 21 bits, and they are swapped. In the last round the key value is 7; the left half contains 7 bits and the right half the remaining 17 bits, and they are swapped. After the 3rd round the final cipher binary bit stream is obtained.
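Swapping the first k bits with the remainder is equivalent to rotating the stream left by k, so the Swap case can be sketched compactly (an illustrative reading; function name ours):

```python
def swap_rounds(bits: str, key_values, rounds: int) -> str:
    # Each round splits the stream into a left part of k bits
    # (k = next key value) and the remaining right part, then
    # swaps them -- i.e. rotates the stream left by k.
    for k in key_values[:rounds]:
        bits = bits[k:] + bits[:k]
    return bits

print(swap_rounds("10110011", [1, 3, 7], 2))  # 00111011
```

Because rotations compose, three rounds with key values 1, 3 and 7 on a 24-bit stream amount to a single left rotation by 11, which also makes the operation trivially invertible during decryption.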
Case 3. Inverse: this case is similar to the previous one, except that the expanded binary bit stream is inverted rather than swapped. Under the previous conditions, in the 1st round the expanded binary bit stream is divided into two parts as before, the left half containing 1 bit (as the key value is 1) and the right half containing the remaining 23 bits; the 23 bits of the right half are then inverted to form the intermediate cipher binary bit stream. In the 2nd round the stream is divided into a left half of 3 bits (as the key value is 3) and a right half of 21 bits, and those 21 bits are inverted. At the end the stream is divided into a left half of 7 bits (as the key value is 7) and a right half of 17 bits, and those 17 bits are inverted. After the 3rd round the final cipher binary bit stream is obtained.
Case 4. Swap-Inverse: this case is simply the combination of the previous two methods, Case 2 and Case 3. In each round the swap is performed first and then the inverse operation, forming the intermediate cipher binary bit stream.
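The Inverse case keeps the left k bits and flips the rest; a short sketch in the same illustrative style (function name ours), which reproduces the round structure described for Case 3:

```python
def inverse_rounds(bits: str, key_values, rounds: int) -> str:
    # Each round keeps the first k bits (k = next key value)
    # unchanged and inverts (0 <-> 1) the remaining bits.
    flip = {"0": "1", "1": "0"}
    for k in key_values[:rounds]:
        bits = bits[:k] + "".join(flip[b] for b in bits[k:])
    return bits

print(inverse_rounds("10110011", [1, 3], 2))  # 11010011
```

Case 4 (Swap-Inverse) would simply apply the swap step and then this inversion step within each round.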
The algorithms for the encryption and decryption processes are presented next.
Algorithm 1: Encryption
Input: input file to be encrypted; K = key
Output: encrypted file
Begin
Step 1. Divide the key into three parts: substitution key, shuffling case and shuffling rounds.
Step 2. Take all the characters of the plain text of the file into a string and generate the corresponding binary bit stream on the basis of the size of the substitution key.
Step 3. Substitute the input bit stream using the substitution key and calculate the key values of that substitution key.
Step 4. Expand the substituted binary bit stream on the basis of the key values and add the redundant-bit information at the end of that bit stream.
Step 5. Shuffle the expanded binary bit stream using the shuffling case, shuffling rounds and key-value information and generate the final cipher bit stream. Repeat Step 5 until the expanded bit stream is exhausted.
Step 6. Convert the final cipher bit stream to the corresponding cipher text and write it into the encrypted file.
End
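Step 1 of both algorithms splits the main key into its three parts. A minimal sketch of that split, assuming (the paper does not state the exact layout) that within the 10-bit suf-key 2 bits select one of the four shuffling cases and the remaining 8 bits give the round count (0-255), which is consistent with the limits stated earlier:

```python
def split_key(main_key: str):
    # Last 10 bits of the main key form the shuffling key (suf-key);
    # everything before them is the substitution key (sub-key).
    # ASSUMPTION: 2 bits -> shuffling case (0-3), 8 bits -> rounds (0-255).
    sub_key, suf_key = main_key[:-10], main_key[-10:]
    case = int(suf_key[:2], 2)
    rounds = int(suf_key[2:], 2)
    return sub_key, case, rounds

# 18-bit main key: 8-bit sub-key + 10-bit suf-key
print(split_key("00111010" + "01" + "00000011"))  # ('00111010', 1, 3)
```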
Algorithm 2: Decryption
Input: input file to be decrypted; K = key
Output: decrypted file
Begin
Step 1. Divide the key into three parts: substitution key, shuffling case and shuffling rounds; also determine the key values of the substitution key.
Step 2. Take all the characters of the cipher text of the file into a string and generate the corresponding binary bit stream on the basis of the size of the substitution key.
Step 3. Shuffle the initial cipher bit stream using the shuffling case, shuffling rounds and key-value information and generate the expanded plain-text bit stream. Repeat Step 3 until the cipher bit stream is exhausted.
Step 4. Keep the information of the redundant bits in an array and compress the expanded plain-text bit stream into the initial plain-text bit stream.
Step 5. Reconstitute the initial plain-text bit stream using the substitution key and generate the corresponding original plain-text binary bit stream.
Step 6. Convert the original plain-text binary bit stream to the corresponding plain text and write it into the decrypted file.
End
The length of the expanded bit stream depends on the sub-key, so the expanded binary bit stream varies with the sub-key, and it is also possible for two different sub-keys to produce expanded binary bit streams of the same size. This approach confuses unauthorized users about the size of the key. Another important technique in the model is the use of shuffling cases and shuffling rounds: as stated, four different cases and up to 255 rounds are used, determined solely by the key. The major advantage of this mechanism is that many different cipher texts can be produced from the same expanded binary bit stream by varying the cases or the rounds.
4. CONCLUSIONS
The paper has reviewed big data security issues along with the basic properties of Big Data, and has shown successful encryption and decryption of the given text files. The latter part of the paper presented the encryption and decryption algorithms to address the security issues of Big Data; these encryption techniques make data more secure. The size of the text file was varied to account for the effect of volume in big data.
5. REFERENCES
[1] S. Sen, C. Shaw, R. Chowdhuri, N. Ganguly, and P. Chaudhuri, "Cellular Automata Based Cryptosystem (CAC)", Fourth International Conference on Information and Communication Security (ICICS02), Dec. 2002, pp. 303-314.
[2] F. Bao, "Cryptanalysis of a Partially Known Cellular Automata Cryptosystem", vol. 53, no. 11, Nov. 2004.
[3] E. Biham and A. Shamir, Differential Cryptanalysis of the Data Encryption Standard, Springer Verlag, 1993.
[4] D. Coppersmith, "The Data Encryption Standard (DES) and Its Strength Against Attacks", IBM Journal of Research and Development, May 1994, pp. 243-250.
[5] K. Naik and D. S. L. Wei, "Software Implementation Strategies for Power-Conscious Systems", Mobile Networks and Applications, vol. 6, pp. 291-305, 2001.
[6] A. Farina, "Simultaneous Measurement of Impulse Response and Distortion with a Swept-Sine Technique", presented at the AES 108th Convention, Paris, France, Feb. 19-22, 2002.
[7] N. Tippenhauer, C. Pöpper, K. Rasmussen, and S. Capkun, "On the Requirements for Successful GPS Spoofing Attacks", in Proceedings of the 18th ACM Conference on Computer and Communications Security, pp. 75-86, Chicago, IL, 2011.
[8] P. Gilbert, J. Jung, K. Lee, H. Qin, D. Sharkey, A. Sheth, and L. P. Cox, "YouProve: Authenticity and Fidelity in Mobile Sensing", ACM SenSys 2011, Seattle, WA, Nov. 2011.
[9] B. Levine, C. Shields, and N. Margolin, "A Survey of Solutions to the Sybil Attack", Tech Report 2006-052, University of Massachusetts Amherst, Amherst, MA, Oct. 2006.
[10] B. Agreiter, M. Hafner, and R. Breu, "A Fair Non-repudiation Service in a Web Service Peer-to-Peer Environment", Computer Standards & Interfaces, vol. 30, no. 6, pp. 372-378, Aug. 2008.
[11] A. Anagnostopoulos, M. T. Goodrich, and R. Tamassia, "Persistent Authenticated Dictionaries and Their Applications", in Proceedings of the 4th International Conference on Information Security, pp. 379-393, Oct. 2001.