This document discusses the need for a framework to enable interoperability between heterogeneous cloud infrastructures and systems. It proposes representing data and computation semantically so they can be transmitted and executed across different environments. It also emphasizes the importance of analyzing system behavior and performance to achieve accountability and manage privacy, security, and latency requirements in distributed cloud systems.
Data and Computation Interoperability in Internet Services

Sergey Boldyrev, Dmitry Kolesnikov, Ora Lassila, Ian Oliver
Nokia Corporation
Espoo, Finland
e-mail: {sergey.boldyrev, dmitry.kolesnikov, ora.lassila, ian.oliver}@nokia.com
Abstract— The rapid shift of technology towards cloud computing requires next-generation distributed systems that seamlessly span the heterogeneous worlds of information providers, cloud infrastructures, and terminal equipment. A framework for interoperability is required: detailed methods and a set of supporting tools for developing an end-to-end architecture in which aspects of data and computation semantics, scalability, security and privacy are resolved and leveraged to provide an end-to-end foundation for such systems. Our objectives are to define the scope of the framework and to identify technology choices from the perspective of scalable technology and business models with high elasticity.

Keywords— service; semantic web; data; ontology; computation; privacy; latency; cloud computing; interoperability
I. INTRODUCTION

The ICT industry is undergoing a major shift towards cloud computing. From the technological standpoint, this entails incorporating both “in-house” and outsourced storage and computing infrastructures, integrating multiple partners and services, and providing users a seamless experience across multiple platforms and devices. On the business side, there is an anticipation of improved flexibility, including the flexibility of introducing new, multisided business models where value can be created through interactions with multiple (different types of) players. Many companies are trying to answer questions like “Who might find this information valuable?”, “What would happen if we provided our service free of charge?”, and “What if my competitor(s) did so?”. The answers to such questions highlight opportunities for disruption and identify vulnerabilities [13]. Thus, by slicing and investigating the technological components, we seek answers through further analysis and collaboration with industrial researchers and the academic community.
In a world of fully deployed and available cloud infrastructures we can anticipate the emergence of distributed systems that can seamlessly span the “information spaces” of multiple hardware, software, information and infrastructure providers. Such systems can take advantage of finely granular and accountable processing services, can integrate heterogeneous information sources by negotiating their associated semantics, and can ultimately free users of the mundane challenges and details of technology usage (such as connectivity, locus of data or computation, etc.). We foresee greater reuse of information and computational tasks.
Some of the inputs to this work have been Marco Rodriguez’s work on the RDF virtual machine, the Haskell state monads, and various papers about out-of-order execution such as [5], [6]. The objective was to realize and to describe semantically a scenario in which computation can be represented in a machine-understandable way and sent to another execution environment where, using that semantics, it can be reconstructed and computed. The targets of these computations are not low-level machine instructions as in Rodriguez’s work, nor simply functions, but a particular kind of function that can be executed or chained with other functions of the same type, much as monadic values can be chained with monadic operators like “bind”. Another important objective of the semantic representation of computation in this work was to realize a system in which computations originated by a certain HW/SW combination can be executed independently of the executing operating system or hardware architecture. This new computation paradigm requires explicit techniques to facilitate computation management within distributed environments. Non-functional aspects such as privacy, security and performance need to be controlled using behavior analysis as the system sustains load. Design and best practices for performance optimization within network and software components have been widely studied by multiple projects [9], [10], [11], [12], etc. The targets of computation management are not low-level optimizations of protocol or software stack behavior, but accountable mechanisms enabling the orchestration of computations across the boundaries of environments.
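As a concrete illustration of this chaining idea, the following minimal Python sketch (our own, not from the cited works; the step names and the Ok/Err types are illustrative assumptions) shows computations as steps composed with a "bind"-like operator, where the pipeline itself is plain data that another environment could reconstruct and run:

from dataclasses import dataclass

# Minimal sketch (not the authors' implementation): computations as
# chainable steps, in the spirit of monadic "bind". Each step takes a
# plain value and returns Ok(value) or Err(reason); bind() only runs
# the next step when the previous one succeeded.

@dataclass
class Ok:
    value: object

@dataclass
class Err:
    reason: str

def bind(result, step):
    # Chain a step onto a previous result, short-circuiting on Err.
    return step(result.value) if isinstance(result, Ok) else result

def parse(raw):
    try:
        return Ok(int(raw))
    except (TypeError, ValueError):
        return Err("not a number: %r" % (raw,))

def double(n):
    return Ok(2 * n)

# The pipeline is just data: an ordered list of step names. Another
# execution environment holding the same registry can reconstruct and
# execute it, which is the migration idea here in miniature.
REGISTRY = {"parse": parse, "double": double}

def run(pipeline, raw):
    result = Ok(raw)
    for name in pipeline:
        result = bind(result, REGISTRY[name])
    return result

print(run(["parse", "double"], "21"))  # Ok(value=42)
print(run(["parse", "double"], "x"))   # Err(reason="not a number: 'x'")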
In order for us to understand the opportunities and – perhaps more importantly – the challenges of such a future, we must establish a clear framework for interoperability. Aspects of data and computation semantics, performance and scalability of computation and networking, as well as threats to security and privacy, must be discussed and elaborated. This paper aims to be the beginning of this discussion.
We begin in Section II with an informal definition of data semantics via the set of declared dependencies and constraints used to reflect data processing rules. Section III presents computations in the form of accountable components that can be leveraged and reused in larger, distributed systems. Section IV discusses the role of system behavior analysis in achieving an adequate level of system accountability. Finally, Section V discusses privacy enforcement, by which ownership, granularity and content semantics are managed.
II. DATA
A. About Semantics
In the broader sense, in the context of data, the term semantics is understood as the “meaning” of data. In this document, we take a narrow, more pragmatic view, and informally define semantics to be the set of declared dependencies and constraints that define how data is to be processed. In a contemporary software system, some part of the system (some piece of executable software) always defines the semantics of any particular data item. In a very practical sense, the “meaning” of a data item arises from one of two sources:
1) The relationships this item has with other data items and/or definitions (including the semantics of the relationships themselves). In this case, software “interprets” what we know about the data item (e.g., the data schema or ontology). An elementary example of this is object-oriented polymorphism, where the runtime system can pick the right software to process an object because the class of the object happens to be a subclass of a “known” class; more elaborate cases include ontology-based reasoning that infers new things about an object.

2) Some software (executing in the system runtime) that “knows” how to process this data item directly (including the system runtime itself, as this typically “grounds” the semantics of primitive datatypes, etc.). In this case, the semantics is “hard-wired” in the software.
All software systems have semantics, whether the semantics is defined implicitly or explicitly. As for data specifically, the semantics is typically defined explicitly, in the form of datatype definitions, class definitions, ontology definitions, etc. (these fall under category #1 above; the semantics of things that fall under category #2 is typically either implicit or, in some cases, explicit in the form of external documentation).
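To make the two categories concrete, here is a short, hypothetical Python sketch (the Document and Invoice classes are our own illustration, not from the paper): category #1 semantics comes from the subclass relationship the runtime interprets, while category #2 semantics is hard-wired handling of primitives.

# Category #1: the runtime picks the right code because Invoice is a
# subclass of the "known" class Document (meaning via relationships).
# Category #2: primitives such as int are handled by logic hard-wired
# into the processing software itself.

class Document:
    def process(self):
        return "generic document handling"

class Invoice(Document):
    def process(self):
        return "invoice-specific handling (totals, taxes, ...)"

def handle(item):
    if isinstance(item, Document):   # meaning from the class hierarchy
        return item.process()        # dispatch on the actual subclass
    return "primitive %s: %r" % (type(item).__name__, item)  # hard-wired

print(handle(Invoice()))  # invoice-specific handling (totals, taxes, ...)
print(handle(42))         # primitive int: 42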
B. Constituent Technologies of “Data Infrastructure”

From a practical standpoint, all applications need to query, transmit, store and manipulate data. All these functions have dependencies on technologies (or categories of technologies) which we shall, in aggregate, call the Data Infrastructure. The constituent technologies themselves have various dependencies, illustrated (in simplified form) in Figure 1. From the data semantics standpoint, we are mostly focused here on the technologies inside the box labeled “Data Modeling Framework”.
[Figure 1. Dependencies in “Data Infrastructure”: the Data Modeling Framework (domain models, object model, schema language, query language, queries, serialization) mediates between the application and the storage infrastructure (query engine, storage, transmission, internal data).]
Domain Models: Application semantics is largely defined via a set of domain models which establish the necessary vocabulary of the (real-world) domain of an application. Meaningful interoperability between two systems typically requires the interpretation of such models.
Schema Language: Domain models are expressed using some schema language. Typically such languages establish a vocabulary for discussing types (classes), their structure and their relationships with other types. Examples of schema languages include SQL DDL, W3C’s OWL and XML Schema, as well as Microsoft’s CSDL. Class definitions in object-oriented programming languages (e.g., Java) can also be considered schema languages.
Query Language: An application’s queries are expressed in some query language; this language may either be explicit (such as SQL or SPARQL) or implicit (in the sense that what effectively are queries are embedded in a programming language). Queries are written using the vocabularies established by a query language (the “mechanics” of a query) and the relevant domain models.
Serialization: For the purposes of transmission (and sometimes also for storage) a “syntactic”, external form of data has to be established. A standardized serialization syntax allows interoperating systems to exchange data in an implementation-neutral fashion. Serialization syntaxes do not have to be dependent on domain models.¹
Object Model: Ultimately, there are some definitions about the structure common to all data; we refer to these definitions as an object model (in some literature and in other contexts, the term metamodel is also used). Essentially, an object model is the definition of “what all data looks like” and thus forms the basis for a schema language, a query language and a serialization syntax. Examples of object models include W3C’s XML DOM and RDF.²

¹ In practice, developers often have a hard time separating conceptual views of data, domain models, and the like, from concrete, syntactic manifestations of data. This has led to all kinds of unnecessary dependencies in the implementations of large systems. Consider yourselves warned.

² We are referring to RDF’s graph-based meta-model, not RDF Schema, which should be considered a schema language.
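The following small sketch shows how these layers fit together, using RDF as the object model. It is our own illustration under stated assumptions: it presumes the third-party Python library rdflib is available, and the http://example.org/ namespace and Person class are made up for the example.

# RDF as the object model; RDFS as a (minimal) schema language;
# Turtle as a serialization syntax; SPARQL as the query language.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/")
g = Graph()

# Domain model expressed in the schema language (bare RDFS here).
g.add((EX.Person, RDF.type, RDFS.Class))

# Instance data described against that domain model.
g.add((EX.alice, RDF.type, EX.Person))
g.add((EX.alice, EX.name, Literal("Alice")))

# Serialization: an implementation-neutral external form (Turtle).
print(g.serialize(format="turtle"))

# Query language: SPARQL over the same object model.
rows = g.query("""
    SELECT ?name WHERE {
        ?p a <http://example.org/Person> ;
           <http://example.org/name> ?name .
    }
""")
for row in rows:
    print(row.name)  # Alice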
C. On Metadata and Provenance
Interoperable exchange of data can often benefit from metadata describing the data being interchanged. Typically, such exchange occurs in use cases involving the management of data that originates in multiple sources (and may – independently of sources – have multiple owners) and/or data that is subject to one or more policies (security, privacy, etc.). Much of the meta-information we aspire to record about data is in various ways associated with data provenance. Over the years, much work has been invested in provenance in various ways; [2] provides a general overview of these efforts in the database field. In cases where a multi-cloud solution is used to implement workflows that process data, we may also be interested in solutions for workflow provenance.
In practice, incorporating all or even any meta-information into one’s (domain) data schema or ontology is difficult, and easily results in a very complicated data model that is hard to understand and maintain. It can, in fact, be observed that provenance-related metadata attributes (such as source and owner) are cross-cutting aspects of the associated data, and by and large are independent of the domain models in question. The problem is similar to the problem of cross-cutting aspects in software system design, a situation³ that has led to the development of Aspect-Oriented Programming [1].

³ The motivating situation is sometimes referred to as the “tyranny of dominant decomposition” [3].
From the pragmatic standpoint, the granularity of the data described by specific metadata is an important consideration: do we record metadata at the level of individual objects (in which case “mixing” metadata with the domain schema is achievable in practice), or even at finer (i.e., “intra-object”) granularity? A possible solution to the latter is presented in [4].
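One lightweight way to keep such cross-cutting metadata out of the domain schema is to hold it in a separate structure keyed by object identity. The sketch below is a hypothetical illustration (the SensorReading class and the field names are ours, and keying by id() is only adequate for a toy example; a real system would use stable identifiers):

from dataclasses import dataclass
from typing import Optional

@dataclass
class SensorReading:      # pure domain model, no metadata fields
    celsius: float

@dataclass
class Provenance:         # the cross-cutting aspect, modeled once
    source: str
    owner: str

_provenance = {}          # object identity -> Provenance record

def record(obj, source, owner):
    _provenance[id(obj)] = Provenance(source, owner)

def provenance_of(obj) -> Optional[Provenance]:
    return _provenance.get(id(obj))

reading = SensorReading(21.5)
record(reading, source="device-42", owner="alice")
print(provenance_of(reading))
# Provenance(source='device-42', owner='alice')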
III. COMPUTATION
From the computation perspective we are faced with the task of enabling and creating consistent (from the semantics standpoint) and accountable components that can be leveraged and reused in a larger, distributed information system. This translates to several pragmatic objectives, including (but not limited to) the following:
• Design and implementation of scenarios where computation can be described and represented in a system-wide understandable way, enabling computation to be “migrated” (transmitted) from one computational environment to another. This transmission would involve reconstruction of the computational task at the receiving end to match the defined semantics of the task and the data involved. It should be noted that in scenarios like this the semantics of computation is described at a fairly high, abstract level (rather than, say, in terms of low-level machine instructions).

• Construction of a system where larger computational tasks can be decomposed into smaller tasks or units of computation, independent of the eventual execution environment(s) of these tasks (that is, independent of hardware platforms or operating systems).
Traditional approaches to “mobile code” have included various ways to standardize the runtime execution environment for code, ranging from hardware architectures and operating systems to various types of virtual machines. In a heterogeneous configuration of multiple device and cloud environments, these approaches may or may not be feasible, and we may want to opt for a very high-level (possibly purely declarative) description of computation (e.g., describing computation as a set of goals to be satisfied).
High-level, declarative descriptions of computation (e.g., in functional programming languages such as Erlang) afford considerable latitude for the target execution environment to decide how computation is to be organized and effected. This way, computation can be scheduled optimally with respect to the various constraints of the execution environment (latency budget, power or other resource consumption, remaining battery level, etc.), and decisions can also be made to migrate computation to other environments if needed.
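As a minimal sketch of this idea (our own illustration; the JSON wire format and the operation registry are assumptions, not a specification from the paper), a task can be described declaratively as plain data, shipped to another environment, and reconstructed there from a registry of known operations:

import json

# Any host holding the same registry can reconstruct and execute the
# task, regardless of the hardware or OS that originated it.
OPERATIONS = {
    "sum": lambda xs: sum(xs),
    "mean": lambda xs: sum(xs) / len(xs),
}

def describe(op, data):
    # Implementation-neutral, declarative description of the task.
    return json.dumps({"op": op, "data": data})

def execute(description):
    # Reconstruct the task at the receiving end and run it.
    task = json.loads(description)
    return OPERATIONS[task["op"]](task["data"])

wire_form = describe("mean", [1.0, 2.0, 3.0])  # could be sent elsewhere
print(execute(wire_form))                      # 2.0, on the receiving end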
IV. MANAGED CLOUD PERFORMANCE

The newly outlined paradigm of distributed systems requires explicit orchestration of computation across Edge, Cloud and Core, depending on component states and resource demand, through the proper exploitation of granular and therefore accountable mechanisms. Behavior analysis of the software components, using live telemetry gathered as the cloud sustains production load, is mandatory.
We regard end-to-end latency as a major metric for assessing the quality of a software architecture and its underlying infrastructure. It yields both the user-oriented SLA and the objectives for each cloud stage. The ultimate goal is the ability to quantitatively evaluate and trade off software quality attributes to ensure competitive end-to-end latency of Internet services.
On the other hand, the task of performance analysis is to design systems as cost-effectively as possible with a predefined grade of service, given the future traffic demand and the capacity of system elements. Furthermore, it is the task of performance analysis to specify methods for verifying that the actual grade of service fulfills the requirements, and to specify emergency actions for when systems are overloaded or technical faults occur.
When applying Managed Cloud Performance in practice, a series of decision problems concerning both short-term and long-term arrangements appears:

• providing granular latency control for diverse data and computational loads in hybrid cloud architectures;
• resolving short-term capacity decisions, including, for example, the determination of the optimal configuration, the number of live servers, or the migration of computation;
• resolving long-term capacity decisions, such as decisions concerning the development and extension of the data and service architecture, the choice of technology, etc.;
• controlling SLAs at different levels, e.g. consumer services, platform components, etc.
If the performance metrics of a solution are inadequate, the business is impacted. It has been reported by multiple sources [11] that 100 ms of added latency cost Amazon 1% in sales, and that Goldman Sachs profits from a 500 ms trading advantage; Goldman Sachs used Erlang for its high-frequency trading programs.
The role of software behavioral analysis is therefore crucial for achieving a proper level of accountability of the whole system, constructed out of a number of granular Data and Computation components, thus creating a solid foundation for the privacy requirements.
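As a toy illustration of treating end-to-end latency as the governing quality metric (the 200 ms budget and the telemetry values below are illustrative assumptions, not figures from the paper), a cloud stage can compare a latency percentile computed from live telemetry against its share of the SLA budget:

def percentile(samples, p):
    # Nearest-rank percentile over a list of latency samples (ms).
    ordered = sorted(samples)
    k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[k]

def within_sla(latencies_ms, budget_ms=200.0):
    # The stage meets its SLA if the observed p99 fits the budget.
    return percentile(latencies_ms, 99) <= budget_ms

telemetry = [87.0, 92.5, 110.0, 140.2, 95.1, 180.9, 450.0]
print(percentile(telemetry, 99))  # 450.0 for this small sample
print(within_sla(telemetry))      # False -> trigger capacity decisions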
V. PRIVACY IN THE CLOUD
The scope of the privacy enforcement being proposed here is to provide mechanisms by which users can finely tune the ownership, granularity, content, semantics and visibility of their data towards other parties. We should note that security is very closely linked with privacy; security, however, tends to be about access control – whether an agent has access to a particular piece of data – whereas privacy is about constraints on the usage of data. Because we have not had the technological means to limit usage, we typically cast privacy as an access control problem; this precludes us from implementing some use cases that would be perfectly legitimate, because when the agent cannot access the relevant data we cannot even discuss which usage is legal and which is not. In other words, security controls whether a particular party has access to a particular item of data, while privacy controls whether that data can or will be used for a particular purpose.
As Nokia builds its infrastructure for data, the placement and scope of security, identity and analysis of that data become imperative. We can think of the placement of privacy as described in Figure 2 (note that in this diagram, by “ontology” we really mean semantics).
Given any data infrastructure, we require at minimum a security model to support the integrity of the data, an identity model to denote the owner or owners of the data (and ideally also its provenance), and an ontology to provide semantics or meaning to the data. Given these, we can construct a privacy model in which the flow of data through the system can be monitored, and boundaries can be established where the control points for the data are placed.

[Figure 2. Privacy, overview]
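The distinction between access and usage can be made concrete with a small hypothetical policy check (the agent, data item and purpose names below are our own illustration, not Nokia's actual model):

# Security model: may this agent read the data item at all?
ACCESS = {"analytics-service": {"location-history"}}

# Privacy model: for which purposes may the data item be used?
ALLOWED_PURPOSES = {"location-history": {"traffic-stats"}}

def may_use(agent, item, purpose):
    has_access = item in ACCESS.get(agent, set())
    purpose_ok = purpose in ALLOWED_PURPOSES.get(item, set())
    return has_access and purpose_ok

# Access alone is not enough: the same agent and data item pass or
# fail depending on the declared purpose of use.
print(may_use("analytics-service", "location-history", "traffic-stats"))  # True
print(may_use("analytics-service", "location-history", "ad-targeting"))   # False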
VI. CONCLUSIONS
As stated above, the rapidly shifting technology environment raises serious questions for executives about how to help their companies capitalize on the transformation that is underway. Three trends – anything as a service, multisided business models, and innovation from the bottom of the pyramid – augur far-reaching changes in the business environment and require radical shifts in strategy. Fine-grained (and thus easily accountable) software frameworks allow customers to monitor, measure, customize, and bill for asset use at a much finer-grained level than before. B2B customers in particular would benefit, since such granular services would allow the purchase of fractions of a service, accounted for as a variable cost rather than as a capital investment with the corresponding overhead.
Therefore, the driving agenda above can be seen as the creation of tools for the next generation of developer and consumer applications, computing platforms and backend infrastructure, backed by application technologies that enable a qualitative leap in service offerings, from the perspective of scalable technology and business models with high elasticity, supporting future strategic growth areas and maintenance requirements.
REFERENCES
[1] G. Kiczales, “Aspect-Oriented Programming,” ACM Computing Surveys, vol. 28, no. 4, 1996.
[2] P. Buneman and W.-C. Tan, “Provenance in databases,” in SIGMOD ’07: Proceedings of the 2007 ACM SIGMOD International Conference on Management of Data. ACM, 2007, pp. 1171–1173.
[3] H. Ossher and P. Tarr, “Multi-Dimensional Separation of Concerns in Hyperspace,” in Proceedings of the Aspect-Oriented Programming Workshop at ECOOP’99, C. Lopes, L. Bergmans, A. Black, and L. Kendall, Eds., 1999, pp. 80–83.
[4] O. Lassila, M. Mannermaa, I. Oliver, and M. Sabbouh, “Aspect-Oriented Data,” accepted submission to the W3C Workshop on RDF Next Steps, World Wide Web Consortium, June 2010.
[5] M. A. Rodriguez, “The RDF virtual machine,” Knowledge-Based Systems, vol. 24, no. 6, August 2011, pp. 890–903.
[6] U. Reddy, “Objects as closures: abstract semantics of object-oriented languages,” in Proceedings of the 1988 ACM Conference on LISP and Functional Programming (LFP ’88), ACM, New York, NY, USA, 1988, pp. 289–297.
[7] QtClosure project, https://gitorious.org/qtclosure, retrieved Oct. 2011.
[8] SlipStream project, https://github.com/verdyr/SlipStream, retrieved Oct. 2011.
[9] Cisco Systems, Inc., “Design Best Practices for Latency Optimization,” White Paper.
[10] S. Souders, High Performance Web Sites, O’Reilly, 2007, ISBN 0-596-52930-9.
[11] “The Psychology of Web Performance,” http://www.websiteoptimization.com/speed/tweak/psychology-web-performance, retrieved Jun. 2011.
[12] L. Kleinrock, Queueing Systems, vol. II: Computer Applications, Wiley Interscience, 1976 (published in Russian, 1979; published in Japanese, 1979).
[13] McKinsey Column, “Clouds, big data, and smart assets,” Financial Times, September 22, 2010.