2. Essay writing is among the many challenges students must face before they finally graduate. Essays are also among the most frequently assigned homework tasks, so students must sharpen their essay-writing skills in order to get the best grades. Mastering the art of essay writing, however, is not difficult. Students can take lessons from some of the top essay writing service providers such as Assignments Aid. Below is a short description of the general essay writing steps and an essay's standard structure.
Steps for essay writing
Select a reliable topic. Topic selection is critical, and students are normally expected to choose topics they understand well. In the example below, the topic selected is “Big Data Privacy”, chosen based on the student's interest.
Undertake in-depth research. Research could involve reading other material available on the topic or consulting industry experts on current trends. In the example provided, extensive research was done from existing literature, as shown by the references.
Take some time to think critically about your topic. To come up with relevant content, a student needs to think outside the box about what to write. To improve your creativity, work from an open space, think as you sip some tea with cake, and generally stay in a relaxed environment.
Structure and organize your thoughts. You may take short notes on your ideas or draw diagrams to represent your line of thought. This will make it easy to come up with original content.
Get to the grind. Once you have everything in perspective, roll up your sleeves and begin writing. Start with the introduction, move to the body, then finish your work with a conclusion.
Proofread and polish. Finishing the writing is not everything. It is critical for a student to then proofread the essay and correct the content where needed.
How far along are you in this process? Need any assistance to come up with an even better essay? Find us today at Assignments Aid and place an order.
3. Essay writing structure
An essay mainly consists of three parts: the introduction, the main body and the conclusion. We shall see examples of all of these in the essay below.
Introduction
When writing the introduction, always keep in mind that this section is meant to capture the reader's attention by providing a summary of what to expect in the whole essay. Therefore, consider being funny and mysterious in your introduction. The reader has to yearn to read the content in the body of your essay.
Essay body
The body of an essay contains its main content. Within the body, every paragraph should have a topic sentence, examples, argument/evidence, a reliable structure and, if possible, a sub-conclusion. Below is an image showing an analysis of the main body of an essay.
GET CHEAP ESSAY WRITING SERVICES FROM EXPERIENCED EXPERTS AT ASSIGNMENTS AID.
Website: http://assignmentsaid.com/
Email: support@assignmentsaid.com
4. The conclusion
When writing a conclusion, there is a need to remind the audience of the purpose of the essay and the impact it may have. Below are some of the DOs when writing a conclusion.
5. Big Data Privacy
6. Contents
Introduction ................ 7
Big data infrastructure ................ 8
Data Privacy ................ 9
Privacy of data in the generation stage ................ 9
Restriction of access ................ 9
Falsification of data ................ 9
Privacy in the data storage stage ................ 9
Methods of privacy preservation for cloud-based data ................ 10
Encryption of attributes ................ 10
Integrity verification for storage of big data ................ 10
Preserving privacy in the processing of data ................ 11
PPDP (privacy-preserving data publishing) ................ 11
Extracting knowledge from the data ................ 11
Conclusion ................ 12
References ................ 12
7. Introduction
Big data refers to collections of unstructured, structured and semi-structured datasets whose volume, complexity and growth rate make them hard to capture, manage, process and analyze using normal database tools and technologies. The data comes in many forms: images, video, text, log files from web pages, tweets, blogs, location information, sensor data and various others. Big data does not just mean a huge volume of data; it has various other features that distinguish it from merely large or massive data. There are different definitions of the term 'big data'. According to the International Data Corporation (IDC), big data technologies describe a new generation of technologies and architectures designed to extract value economically from very large volumes of structured and unstructured data by enabling high-speed capture and analysis (Gantz and Reinsel, 2011).
According to a McKinsey report, big data consists of datasets whose size is beyond the ability of typical database software tools to capture, store, manage and analyze. Big data can also be defined in terms of the 3Vs: Velocity, Variety and Volume. Volume mainly stands for the data size. Velocity represents the speed at which data is generated and delivered in real time. Variety reflects that data comes from many different sources and may be unstructured, semi-structured or structured. Conventional data supported by a DBMS, by contrast, is structured only, so its uses are limited and it is hard to apply conventional tools to big data. The table below shows some different types of big data and their sources.
Data type         Source                                               Formats
Structured        Business apps, e.g. finance, bioinformatics, retail  RDBMS, data warehouse
Semi-structured   Web apps, e.g. log files and emails                  CSV, XML, HTML
Unstructured      Images, audio, blogs, tweets                         User-generated text
The big data definitions given above offer a set of tools for comparing emerging big data trends with conventional analytics. Big data is generally measured in petabytes while conventional data is normally in gigabytes, so big data volume is far larger than conventional data. Currently, one of the biggest challenges of big data is security, since critical data can be badly misused if it gets into the wrong hands. In sectors such as healthcare, big data is applied by many different applications. Large amounts of customer data are normally stored in a data warehouse. The warehouses are in most cases distributed, making it difficult to have complete control of the data, which ultimately complicates data protection.
Although big data can be used to understand the world better and to innovate across many aspects of human endeavor, the huge amount of data has increased the potential for breaches of privacy. For instance, Google and Amazon can learn people's shopping preferences and browsing habits. Social network sites such as Facebook store all the information concerning a person's life and social relationships. Video sharing websites such as YouTube recommend videos based on people's search history. With all the power that comes with big data, the collection, storage and reuse of personal data for commercial profit have placed a threat on people's privacy and security. In 2006, AOL released twenty million search queries from 650,000 users, removing the AOL id and IP address for research purposes. It took researchers only a few days, however, to re-identify the users whose data was being used. User privacy can be breached under different conditions:
When personal information is blended with external datasets, new facts about the users can be inferred. Those facts may be secret and not meant to be revealed to others.
Personal information is sometimes gathered and used to create business value. For instance, people's shopping habits may reveal a lot of personal information.
Sensitive data may be stored and processed in a location that is not reliably secured, and data leakage could occur during the storage and processing stages.
Big data infrastructure
To handle the various dimensions of big data with regard to velocity, volume and variety, there is a need to design efficient and reliable systems for processing large amounts of data arriving at high velocity from many sources. Big data passes through several stages in its life cycle: generation, storage and processing. Data today is distributed, and new technologies are being created for the storage and processing of huge data repositories. For instance, cloud computing technologies such as Hadoop and MapReduce are being explored for the processing and storage of big data. The data life cycle has been subdivided into the stages mentioned below:
1. Data generation. This is where data is generated/created from different sources. New technologies have contributed to the explosion in the amount of data generated by humans.
2. Data storage. This is the stage where the large datasets are stored and managed.
3. Data processing. This is the process of data collection, transmission, processing and extraction of useful information.
9. The Main Body
Data Privacy
Privacy of data in the generation stage
Data generation can be classified into active generation and passive generation. Active generation means the owner of the data willingly provides the data to a 3rd party, while passive generation refers to situations where the data is generated by the owner's online activity, e.g. browsing. In many situations, the data owner may not even be aware that the data is being collected by a 3rd party. The main challenge for the data owner is how to protect the data from any 3rd party who might want to gather it. The data owner wants to hide his or her personal information as much as possible and is mainly worried about the control he or she has over that information. Privacy violations at the generation stage can be minimized by either falsifying the data or restricting access to it (Xu et al., 2014).
Restriction of access
If the data owner believes the data could reveal sensitive information that is not supposed to be shared, he or she can simply refuse to give it out. To that end, the data owner has to adopt effective access control techniques so that the data is prevented from being stolen by a 3rd party. Where the data owner provides data passively, measures can be taken to ensure privacy, e.g. anti-tracking extensions, encryption tools and advert/script blockers. Using such tools, a person can reliably limit access to sensitive data. For ease of use, many of the above-mentioned tools have been designed as browser extensions.
Falsification of data
In some situations it is not possible to prevent access to sensitive data. In that case, the data can be distorted using certain tools before it is fetched by a 3rd party. If the data is distorted, the true information cannot easily be recovered. The data owner can use the techniques below to falsify data:
A tool such as a sockpuppet can be used to hide an individual's online identity through deception. A person's online activities are hidden by creating a false identity and pretending to be someone else.
Specific security tools can be used to hide an individual's identity, e.g. MaskMe, which makes it possible for users to create aliases of personal information such as an email address or credit card number.
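Aliasing of the kind MaskMe offers can be sketched in a few lines. The sketch below is purely illustrative and assumes nothing about the real service's API: the class name, methods and alias domain are invented for the example. The idea is that a vault hands out random throwaway aliases that map back to the real address, which never leaves the vault.

```python
import secrets

class AliasVault:
    """Toy MaskMe-style alias generator (illustrative only): hands out
    throwaway aliases that map back to the real address, which never
    leaves the vault."""

    def __init__(self):
        self._aliases = {}  # alias -> real address

    def alias_for(self, real_email: str, domain: str = "alias.example") -> str:
        # Random local part, so the alias reveals nothing about the owner.
        alias = f"{secrets.token_hex(6)}@{domain}"
        self._aliases[alias] = real_email
        return alias

    def resolve(self, alias: str) -> str:
        # Only the vault (the masking service) can map back to the owner.
        return self._aliases[alias]

vault = AliasVault()
alias = vault.alias_for("jane.doe@example.com")
assert vault.resolve(alias) == "jane.doe@example.com"
assert "jane" not in alias  # the alias leaks nothing about the real address
```

A third party that harvests the alias learns nothing about the real identity; the mapping lives only with the masking service.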
Privacy in the data storage stage
Storing large volumes of data is not a huge problem thanks to technological advances in data storage, such as the recent boom in cloud computing. One of the major challenges, however, is keeping the stored data secure. If a big data storage system is breached, it can cause a lot of harm, since people's personal information can be disclosed. There is therefore a need to ensure that stored data is protected against such eventualities. In today's information systems, data centers play a critical role in performing complex computations and holding huge amounts of data. In a distributed environment, an application may need datasets from multiple data centers, which raises the problem of privacy protection.
Traditional security methods of data protection can be separated into four categories: file-level data security schemes, database-level data security schemes, media-level security schemes and application-level encryption schemes (Hongbing et al., 2015). Traditional mechanisms for protecting data security and privacy on existing storage architectures have been a very active research area, although they may not be directly applicable to big data analytics platforms (Singla and Singh, 2013). In response to the 3Vs nature of big data analytics, the storage infrastructure should be scalable and able to be configured dynamically to accommodate diverse applications. One promising technology to address these requirements is storage virtualization, enabled by the emerging cloud computing paradigm.
Methods of privacy preservation for cloud-based data
When data is stored in the cloud, data security usually has three dimensions: confidentiality, availability and integrity. Confidentiality and integrity are directly related to data privacy: if data integrity or confidentiality is breached, it has a direct impact on users' privacy. A basic requirement for a big data storage system is the protection of individual privacy, and some mechanisms exist to fulfill it. For instance, a sender can encrypt data using public key encryption in such a manner that only the valid recipient can decrypt it. Techniques to preserve user privacy for data stored in the cloud include the following:
Encryption of attributes
Attribute-Based Encryption (ABE) is an encryption method which guarantees end-to-end big data privacy within a cloud storage system. In this method, access policies are defined by the data owner, and encryption is done based on those policies. The data can only be decrypted by users whose attributes satisfy the access policies defined by the data owner. In the case of big data, the owner may need to change the access policies often, as the data may have to be shared with different individuals or entities.
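The policy-gated behavior of ABE can be illustrated with a toy sketch. Note the strong caveats: real ABE schemes rely on pairing-based cryptography, whereas the code below only mimics the access-control idea with a hash-derived XOR keystream (completely insecure), and all function names are invented for the example.

```python
import hashlib

def _keystream(key: bytes, n: int) -> bytes:
    # Toy keystream from SHA-256 (illustration only -- NOT real ABE).
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def abe_encrypt(plaintext: bytes, policy: frozenset) -> dict:
    # The data owner defines the access policy; the key is bound to it.
    key = hashlib.sha256(",".join(sorted(policy)).encode()).digest()
    stream = _keystream(key, len(plaintext))
    return {"policy": policy,
            "ciphertext": bytes(a ^ b for a, b in zip(plaintext, stream))}

def abe_decrypt(blob: dict, user_attributes: set) -> bytes:
    # Decryption only proceeds when the user's attributes satisfy the policy.
    if not blob["policy"] <= user_attributes:
        raise PermissionError("attributes do not satisfy the access policy")
    key = hashlib.sha256(",".join(sorted(blob["policy"])).encode()).digest()
    stream = _keystream(key, len(blob["ciphertext"]))
    return bytes(a ^ b for a, b in zip(blob["ciphertext"], stream))

blob = abe_encrypt(b"patient record", frozenset({"doctor", "cardiology"}))
assert abe_decrypt(blob, {"doctor", "cardiology", "on-call"}) == b"patient record"
```

A user holding only the attribute `nurse` would hit the `PermissionError` branch, mirroring how, in a real ABE scheme, a key whose attributes fail the policy simply cannot decrypt.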
Encryption based on identity
Identity-Based Encryption (IBE) is an alternative to public key encryption, proposed to simplify key management in a certificate-based public key infrastructure by using human identities such as an email address or IP address as public keys. Identity-Based Encryption schemes have also been suggested to preserve the anonymity of the sender and the receiver.
Homomorphic encryption
The public cloud is often more prone to privacy breaches due to virtualization and multi-tenancy. Cloud users may share the same physical space, and in that kind of situation there is a high chance of data leakage. One way to protect data in the cloud is to encrypt it, store it in encrypted form, and allow the cloud to perform computations on the encrypted data. Fully homomorphic encryption is the kind of encryption which enables functions to be computed on data that has been encrypted (Gentry, 2009). Given only an encrypted message, an individual can obtain an encryption of a function of that message through direct computation on the ciphertext.
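A minimal way to see the homomorphic property in action is textbook RSA, which is multiplicatively homomorphic: multiplying two ciphertexts yields an encryption of the product of the plaintexts. This is an insecure toy (unpadded RSA with tiny primes, only one operation), not the fully homomorphic construction of Gentry (2009), which supports arbitrary functions.

```python
# Textbook RSA is multiplicatively homomorphic:
#   Enc(m1) * Enc(m2) mod n = Enc(m1 * m2)
p, q = 61, 53
n = p * q                           # public modulus (3233)
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

def enc(m: int) -> int:
    return pow(m, e, n)

def dec(c: int) -> int:
    return pow(c, d, n)

c1, c2 = enc(7), enc(6)
# The cloud multiplies ciphertexts without ever seeing 7 or 6 ...
c_product = (c1 * c2) % n
# ... yet the owner decrypts and obtains the product of the plaintexts.
assert dec(c_product) == 42
```

The cloud performs the multiplication entirely on encrypted values; only the key holder learns the result, which is precisely the property the section describes.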
Integrity verification for storage of big data
When cloud computing is used for storage of big data, the data owner loses control over the data. The outsourced data is at risk since the cloud server may not be fully trusted. The data owner needs to be convinced that the data is properly stored in the cloud according to the service-level contract. One way of ensuring privacy for the user is to provide a system with a mechanism that lets the data owner verify that the data stored in the cloud is intact. Verification of data integrity is therefore of utmost importance. To verify the integrity of data stored in the cloud, the direct approach is to retrieve all the data from the cloud. However, the huge volume of big data makes this impractical, especially considering the time consumed and the communication overhead. To solve this issue, different schemes have been developed to verify data integrity without retrieving the data from the cloud. In an integrity verification scheme, the cloud server can only produce valid evidence of data integrity when all the data in the system is intact. To achieve a high level of data protection, integrity verification has to be done on a regular basis.
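The spot-checking idea can be sketched with per-block authentication tags. The sketch below is a drastic simplification, loosely in the spirit of provable-data-possession schemes (the function names, block size and protocol shape are invented for the example): the owner keeps a secret key and small per-block tags, then challenges individual random blocks instead of downloading the whole dataset.

```python
import hmac, hashlib, secrets

BLOCK = 4  # toy block size; real schemes use far larger blocks

def make_tags(key: bytes, data: bytes) -> list:
    # Owner computes one keyed tag per block before outsourcing the data.
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    return [hmac.new(key, bytes([i]) + b, hashlib.sha256).digest()
            for i, b in enumerate(blocks)]

def challenge(key: bytes, tags: list, cloud_data: bytes, index: int) -> bool:
    # The server returns just one block; the owner recomputes its tag
    # and compares -- no need to retrieve the entire dataset.
    block = cloud_data[index * BLOCK:(index + 1) * BLOCK]
    proof = hmac.new(key, bytes([index]) + block, hashlib.sha256).digest()
    return hmac.compare_digest(proof, tags[index])

key = secrets.token_bytes(32)
data = b"big data stored in the cloud...."
tags = make_tags(key, data)      # kept by the owner; tiny compared to the data
assert challenge(key, tags, data, 3)                                  # intact
assert not challenge(key, tags, data[:12] + b"XXXX" + data[16:], 3)   # tampered
```

Repeating such random challenges regularly gives the owner growing confidence that all blocks are intact, at a communication cost of one block per challenge rather than the full dataset.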
Preserving privacy in the processing of data
Privacy protection in data processing can be separated into two phases. In the first phase, the aim is to safeguard information from unsolicited disclosure, since the collected data may contain sensitive information about the owner. In the second phase, the goal is to extract meaningful information from the data without violating the owner's privacy.
PPDP (privacy-preserving data publishing)
When processing data, the data gathered may contain sensitive information about the data owner. Directly releasing this information for processing could violate the owner's privacy, so the data must be modified in such a manner that it does not disclose any personal information about the owner. At the same time, the published information should remain useful, so as not to defeat the intended objective of the publishing. To preserve user privacy, PPDP normally uses anonymization techniques: the data is anonymized by removing identifiers and modifying the quasi-identifiers prior to publishing or storing it for further processing.
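The two anonymization steps just described can be sketched directly: drop direct identifiers and generalize quasi-identifiers. This is a minimal illustration (the field names and generalization rules are invented for the example); real PPDP schemes enforce formal guarantees such as k-anonymity over the whole dataset, not just per-record edits.

```python
def anonymize(record: dict) -> dict:
    out = dict(record)
    # Step 1: remove direct identifiers entirely.
    for identifier in ("name", "ssn", "email"):
        out.pop(identifier, None)
    # Step 2: generalize quasi-identifiers -- exact age becomes a decade
    # range, and the ZIP code is truncated to its prefix.
    decade = (record["age"] // 10) * 10
    out["age"] = f"{decade}-{decade + 9}"
    out["zip"] = record["zip"][:3] + "**"
    return out

record = {"name": "Jane Doe", "ssn": "123-45-6789",
          "age": 34, "zip": "94107", "diagnosis": "flu"}
published = anonymize(record)
assert published == {"age": "30-39", "zip": "941**", "diagnosis": "flu"}
```

The published record keeps its analytic value (an age band, a region, a diagnosis) while the fields that pinpoint an individual are gone or coarsened.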
Extracting knowledge from the data
To extract useful information from big data without breaching privacy, privacy-preserving data mining techniques have been developed to identify patterns and trends in data. Those techniques cannot be applied to big data as-is, because big data may contain large, complex and dynamically varying data. To handle big data efficiently, those techniques should be modified, or a special set of techniques should be used. In addition, the modified techniques should still address the privacy concern. Several techniques have been proposed to analyze large-scale and complex data. They can be broadly grouped into clustering, classification and association-rule-based techniques.
Privacy-preserving clustering: clustering is among the most popular techniques for data processing because of its power in analyzing unfamiliar data. The main idea behind clustering is the separation of unlabeled input data into different groups.
Privacy-preserving data classification: classification is a technique for identifying which predefined category a new data entry belongs to. Like clustering algorithms, classification algorithms were traditionally designed to work in centralized environments. To cope with the demands of big data, traditional classification algorithms have been modified to suit parallel computing environments.
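The clustering idea above can be made concrete with a tiny one-dimensional k-means, which separates unlabeled inputs into k groups. This is the plain centralized algorithm (the function name and data are invented for the example); privacy-preserving variants run the same kind of iteration over anonymized, perturbed or distributed data instead of the raw values.

```python
def kmeans_1d(points, centers, iterations=10):
    """Minimal 1-D k-means: assign each point to its nearest center,
    then move each center to the mean of its group, and repeat."""
    for _ in range(iterations):
        groups = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)),
                          key=lambda i: abs(p - centers[i]))
            groups[nearest].append(p)
        centers = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centers)]
    return centers, groups

points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]
centers, groups = kmeans_1d(points, centers=[0.0, 10.0])
assert sorted(groups[0]) == [0.8, 1.0, 1.2]
assert sorted(groups[1]) == [8.7, 9.0, 9.5]
```

The unlabeled inputs fall into two clear groups; a privacy-preserving deployment would feed this loop generalized or noise-perturbed values so no single individual's exact record is exposed to the analyst.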
12. Conclusion
The amount of data grows every day, and it is impossible to imagine the next generation of applications without producing and executing data-driven algorithms. In this paper, we have conducted a comprehensive survey of the privacy issues involved in dealing with big data. We have investigated privacy challenges in each phase of the big data life cycle and discussed some advantages and disadvantages of existing privacy-preserving technologies in the context of big data applications. A lot of work has been done to preserve the privacy of users from data generation to data processing, but several open issues and challenges remain. Some future research directions for big data privacy are indicated below:
Access control and secure end-to-end communication: this mainly focuses on ensuring that data can only be accessed by authorized users and that the end-to-end transfer of data is secure.
Anonymization of data: data is anonymized by removing personal details to preserve user privacy, meaning it should not be possible to identify an individual from the anonymized data alone.
Decentralized storage.
Distributed data analytics and reliable machine learning techniques: machine learning and data mining need to be employed to unleash the full potential of collected data.
References
J. Gantz and D. Reinsel, ''The 2011 Digital Universe Study: Extracting Value from Chaos,'' IDC, 2011.
L. Xu, C. Jiang, J. Wang, J. Yuan, and Y. Ren, ''Information security in big data: Privacy and data mining,'' IEEE Access, vol. 2, pp. 1149–1176, Oct. 2014.
C. Hongbing, R. Chunming, H. Kai, W. Weihong, and L. Yanyan, ''Secure big data storage and sharing scheme for cloud tenants,'' China Commun., vol. 12, no. 6, pp. 106–115, Jun. 2015.
S. Singla and J. Singh, ''Cloud data security using authentication and encryption technique,'' Global J. Comput. Sci. Technol., vol. 13, no. 3, pp. 2232–2235, Jul. 2013.
C. Gentry, ''A fully homomorphic encryption scheme,'' Ph.D. dissertation, Dept. Comput. Sci., Stanford Univ., Stanford, CA, USA, 2009.