This document discusses model-driven approaches for cloud data storage. It outlines objectives to 1) characterize cloud data storage requirements using conceptual models, 2) select appropriate cloud data storage implementations and providers based on requirements, and 3) manage artifacts for working with different storage solutions. Existing solutions are limited and the proposed approach uses model-driven engineering with multiple levels of modeling and transformation to map between requirements and storage solutions.
This document describes Web2MexADL, a tool for discovering and verifying the architecture of software systems. It uses machine learning techniques to classify components and generate architecture views. The views are expressed using MexADL, an architecture description language. Web2MexADL was implemented as an Eclipse plugin and can recover MVC-based or clustered architectures. It aims to help maintain software by verifying architectures match intended patterns and quality metrics. Future work includes improving classification, supporting other languages/platforms, and non-web applications.
This document describes Model2Roo, a web application development tool based on the Eclipse Modeling Framework and Spring Roo. It allows generating web applications by transforming UML class diagrams into Spring Roo commands. The document discusses the background of the project, its objectives, related work, issues identified by users, technical issues addressed, and recent improvements made, including implementing transformations with Acceleo templates and improving support for properties and installation.
This document presents MexADL, an aspect-oriented approach for verifying the maintainability of software systems. MexADL uses an architecture description language (ADL) to define maintainability characteristics and sub-characteristics, and aspect-oriented programming techniques to verify those characteristics through internal quality metrics. The approach was implemented for Java systems and applied to a case study of a digital tax receipt validation application. The results demonstrated how architectural violations could be detected more effectively compared to manual verification.
The document introduces the Instance Model Bus, which aims to address issues that arise when model instances are tied to specific model definitions and repository technologies. It does this by providing a common interface to manage model instances independently of the underlying model and storage. The Instance Model Bus implementation allows Java applications to interact with model instances through a shared bus, regardless of how the models and instances are defined or where they are stored. An example shows how the bus allows different clients like a Spring Roo application and Eclipse plugin to access the same model instances without being directly coupled to each other's technologies.
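The decoupling the summary describes can be sketched in a few lines: clients manipulate model instances through one bus interface, while pluggable repositories hide where and how the instances are stored. All class and method names below are illustrative stand-ins, not the actual Instance Model Bus API (which targets Java applications).

```python
from abc import ABC, abstractmethod

class InstanceRepository(ABC):
    """Storage-specific backend: where and how model instances live."""
    @abstractmethod
    def load(self, instance_id): ...
    @abstractmethod
    def save(self, instance_id, data): ...

class InMemoryRepository(InstanceRepository):
    """One possible backend; a database- or file-backed one would plug in the same way."""
    def __init__(self):
        self._store = {}
    def load(self, instance_id):
        return self._store[instance_id]
    def save(self, instance_id, data):
        self._store[instance_id] = data

class InstanceModelBus:
    """Routes instance operations to whichever repository is registered for a model type."""
    def __init__(self):
        self._repos = {}
    def register(self, model_type, repo):
        self._repos[model_type] = repo
    def load(self, model_type, instance_id):
        return self._repos[model_type].load(instance_id)
    def save(self, model_type, instance_id, data):
        self._repos[model_type].save(instance_id, data)

bus = InstanceModelBus()
bus.register("Customer", InMemoryRepository())

# Two independent clients (think: a Spring Roo app and an Eclipse plugin)
# can share the same instances without knowing the storage technology.
bus.save("Customer", "c1", {"name": "Ada"})
print(bus.load("Customer", "c1"))  # {'name': 'Ada'}
```

Swapping the repository for a different storage technology changes nothing on the client side, which is the point of the shared-bus design.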
This document describes ExSchema, a tool that discovers and maintains schemas from polyglot persistence applications. ExSchema analyzes the source code of applications using multiple data stores like document databases, graph databases, and relational databases. It identifies entities, attributes, relationships, and updates by examining declarations, annotations, project structure, and update operations. ExSchema represents the extracted schemas in a uniform metamodel and can output documentation and code artifacts to help developers understand and evolve the application's data design.
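The kind of analysis described above, recovering an implicit schema from persistence annotations in application code, can be illustrated with a toy scan of a Java-like snippet. ExSchema itself analyzes real Java projects across multiple data stores; this regex-based sketch only shows the underlying idea.

```python
import re

# A miniature "application source file" with persistence annotations.
source = """
@Entity
class User {
    String name;
    int age;
}

@Entity
class Order {
    String product;
}
"""

def extract_schema(code):
    """Recover entity names and their typed attributes from annotated classes."""
    schema = {}
    # Find each annotated class and the field declarations in its body.
    for match in re.finditer(r"@Entity\s+class\s+(\w+)\s*\{([^}]*)\}", code):
        entity, body = match.group(1), match.group(2)
        fields = re.findall(r"(\w+)\s+(\w+);", body)
        schema[entity] = {name: type_ for type_, name in fields}
    return schema

print(extract_schema(source))
# {'User': {'name': 'String', 'age': 'int'}, 'Order': {'product': 'String'}}
```

A real extractor would parse the AST rather than use regexes, and would also inspect project structure and update operations, as the summary notes.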
The document provides an overview of the Spring Framework. It discusses that Spring is an open source application framework for Java that provides inversion of control and dependency injection. The document outlines Spring's history and key modules. It also discusses advantages like decoupling layers and configuration, and how Spring addresses areas like web apps, databases, transactions, and remote access. The document explains inversion of control and dependency injection in Spring through Java beans and configuration files. It concludes with how to get started using Spring by downloading the framework files.
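The inversion-of-control idea summarized above can be reduced to its core: a class declares what it depends on, and a container (here a trivial stand-in for Spring's bean factory) wires the dependency in from the outside. The names are illustrative; Spring configures real beans via XML or annotations, in Java.

```python
class MySqlConnection:
    def query(self, sql):
        return f"mysql result for: {sql}"

class UserService:
    # The service never constructs its own connection; it receives one.
    def __init__(self, connection):
        self.connection = connection
    def find_user(self, name):
        return self.connection.query(f"SELECT * FROM users WHERE name='{name}'")

class Container:
    """A toy bean container: factories are registered, then resolved on demand."""
    def __init__(self):
        self._factories = {}
    def register(self, name, factory):
        self._factories[name] = factory
    def get(self, name):
        return self._factories[name](self)

container = Container()
container.register("connection", lambda c: MySqlConnection())
container.register("userService", lambda c: UserService(c.get("connection")))

service = container.get("userService")
print(service.find_user("ada"))
# mysql result for: SELECT * FROM users WHERE name='ada'
```

Because `UserService` only knows the interface of its collaborator, the container can swap in a different connection (or a test double) without touching the service, which is the decoupling benefit the summary attributes to Spring.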
This presentation is about a lecture I gave within the "Software systems and services" immigration course at the Gran Sasso Science Institute, L'Aquila (Italy): http://cs.gssi.infn.it/.
http://www.ivanomalavolta.com
The road ahead for architectural languages [ACVI 2016], by Ivano Malavolta

5th of April 2016. My presentation at the 3rd Architecture Centric Virtual Integration (ACVI) workshop, co-located with WICSA and CompArch 2016, Venice, Italy.
Accompanying paper: http://www.ivanomalavolta.com/files/papers/IEEESoftware_2015.pdf
Tom Chung is seeking a position in healthcare software or IT security consulting. He has over 15 years of experience developing software using languages like C++, Java, SQL, and Visual Basic. His background includes positions at Truven Health Analytics, Raytheon, SAIC, and IBM developing applications for healthcare, defense, and manufacturing clients.
[2015/2016] AADL (Architecture Analysis and Design Language), by Ivano Malavolta
This document introduces the Architecture Analysis and Design Language (AADL) and uses a radar system as an example to demonstrate AADL modeling concepts. It breaks down the radar system into hardware and software components, showing how to model processes, threads, devices, and connections between them. It also models the deployment of software processes onto hardware processors and memories. The example illustrates key AADL concepts like components, features, connections, bindings, and properties.
Sean Lynch has over 15 years of experience leading teams that deliver software projects on time and under budget. He currently works as a Senior System Analyst and Solutions Architect at Blue Cross Blue Shield, where he has helped design and implement their large national data warehouse system. Previously he has worked as a consultant for several insurance and financial companies, managing projects and teams. He has a focus on quality results and extensive experience across the software development lifecycle.
This document summarizes a presentation on discovering implicit knowledge from architecture change logs. It discusses analyzing change logs to formalize change instances as graphs and discover operationalization patterns and dependencies. A graph-based change pattern notation is proposed to specify and retrieve patterns to support potential reuse in architecture evolution. Experimental analysis and evaluation of the approach uses scenario-based evaluation of case studies and prototype-based validation with surveys.
MoDisco is an Eclipse initiative that aims to provide a framework for extracting and exploiting models from legacy systems. It facilitates model-driven modernization tools for tasks like quality analysis, understanding legacy systems, reverse modeling, refactoring, and migration. MoDisco provides technology-specific and standard metamodels as well as discoverers to generate models from legacy artifacts like source code and databases. It has a modular architecture with layers for use cases, technologies, and infrastructure components.
The document provides an overview of the Struts framework, including its advantages and components. It discusses the Model 1 and Model 2 architectures, and explains that Struts implements the MVC pattern. It describes the controller elements like the action servlet and request processor, the model components like Java classes and beans, and the view components like JSP tag libraries. The document also provides examples of how Struts can be implemented in a sample application.
Thales has been deploying the Arcadia and Capella MBSE methods and tools for the past 15 years. As with any journey, there have been many joys and no fewer difficulties.
During this webinar, Thales presents the foundations of their MBSE approach, how their engineering practices have improved with the use of models, and what they are doing now to sustain and drive this model-based transformation.
---------
This webinar was presented by Juan Navas (from Thales).
Juan Navas is a Systems Architect with more than 10 years of experience performing and implementing Systems Engineering practices in industrial organizations. He helps systems engineering managers and systems architects implement Model-Based Systems Engineering and Product Line Engineering approaches in operational projects, helping them define their engineering strategies, objectives, and practices.
Discover models out of existing applications with Eclipse/MoDisco, by fmadiot
MoDisco is an Eclipse initiative that aims to provide a framework for extracting and exploiting models from legacy systems. It facilitates model-driven modernization tools for tasks like quality analysis, understanding legacy systems, reverse modeling, refactoring, and migration. MoDisco provides technology-specific and standard metamodels, as well as discoverers to create models from legacy artifacts like Java, C#, and databases. It has a modular architecture with layers for use cases, technologies, and infrastructure components.
[2017/2018] AADL - Architecture Analysis and Design Language, by Ivano Malavolta
This presentation is about a lecture I gave within the "Software systems and services" immigration course at the Gran Sasso Science Institute, L'Aquila (Italy): http://cs.gssi.infn.it/.
http://www.ivanomalavolta.com
This document provides a summary of the training and skills of an intern named Priyanka. It details her training in tools like Hibernate, JPA, and the Spring framework. It then provides overviews of Hibernate and the Spring framework, outlining some of their key advantages such as being lightweight, providing database independence, simplifying complex joins, and enabling loose coupling and fast development. Finally, it defines RESTful web services and notes some of their advantages over SOAP services, such as being fast, language and platform independent, and permitting different data formats.
Community Career Center: Introduction to Cloud Storage (Dropbox, Google Drive..., by Keitaro Matsuoka
Do you use (or want to use) Dropbox, Google Drive, or OneDrive? Do you want to know what you can do with them and how to pick the right one for you? Then this workshop is for you! It will cover:
1. Basics
2. Pricing
3. Everyday use
4. Security
5. How to get the most of each service
6. How to choose the best service for you
SkyDrive and Google Drive Cloud Storage Options, by Vera Weber
This document compares Google Drive and SkyDrive for accessing and sharing documents through cloud storage. Both platforms offer around 5-7 GB of storage and allow users to create, upload, and access recent files. However, SkyDrive is based on Microsoft products while Google Drive is based on Google products. SkyDrive also allows sharing of entire folders and easy syncing between desktop and cloud storage by dragging documents to the SkyDrive icon on the taskbar.
In this presentation we will help you to “Understand Risk”, with a detailed description of the concept, types, and classification of risks, while also discussing effective ways to handle different types of risk in the banking sector.
To know more about Welingkar School’s Distance Learning Program and courses offered, visit:
http://www.welingkaronline.org/distance-learning/online-mba.html
Google Drive is an online file storage and synchronization service developed by Google. It allows users to store files in the cloud, share files, and edit documents, spreadsheets, and presentations with collaborators. Some key features include live editing of files, access from any device, version history, large file sharing, and integration with Google services. The free version provides 5GB of storage while premium plans provide more storage for a monthly fee.
AWS Cloud School is a free full day of training sessions, guided examples and self-directed learning led by members of the Amazon Web Services team. Join us to learn how teams of all sizes can build scalable, reliable, high performance applications using the AWS Cloud platform.
This document summarizes cloud storage, including its history, features, business model, and future outlook. Cloud storage emerged in the 1980s and grew with improvements in broadband internet and supporting technologies. It offers automatic backup, data recovery, file sharing, and remote access. Companies make money through paid storage plans, partnerships, and commercializing other applications. While cloud storage provides benefits like large storage capacity and data availability, issues around technical support, data security, and platform restrictions remain. The future of cloud storage involves greater encryption standards and more businesses and applications moving to cloud-based models.
Cloud storage allows users to save files on remote servers rather than local hard drives, making files accessible from any internet-connected device. This contrasts with local storage on a specific device. Popular cloud services like Google Drive, OneDrive, and Dropbox offer free basic storage with paid upgrades, and allow file sharing and online editing. While third-party storage raises some data protection questions, cloud storage provides convenience and collaboration benefits that make it widely used at the school by departments and teachers.
Google Drive is a cloud-based storage and synchronization service that allows users to create, edit, and collaborate on documents, spreadsheets, presentations, and other files from any device with an Internet connection. It includes apps like Google Docs (word processor), Sheets (spreadsheet), Slides (presentations), as well as Keep (notes), Forms (surveys), Drawings, and more. Files are automatically saved and synced, allowing multiple users to work on the same file simultaneously. Users get 10GB of free online storage with a free account.
Cloud Computing: A Perspective on Next Basic Utility in IT World, by IRJET Journal
This document discusses cloud computing and its architecture. It begins with an introduction to cloud computing, defining it as a model that provides infrastructure, platforms, and software as services. The key characteristics and service models of cloud computing are described.
The document then discusses the architecture of cloud computing, including the layers of Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). It also describes the deployment models of private cloud, public cloud, community cloud, and hybrid cloud.
The document outlines several challenges of cloud computing, such as resource allocation and scheduling, cost optimization, processing time and speed, memory management, load balancing, security issues, and fault tolerance.
This document discusses Cloud2Sim, a new concurrent and distributed cloud simulation tool that extends CloudSim. Cloud2Sim leverages distributed execution and storage capabilities of in-memory data grids to allow cloud simulations to run in a distributed manner across multiple nodes. This improves upon existing cloud simulators that typically run sequentially on a single computer. The document describes Cloud2Sim's design, implementation, evaluations showing its ability to reduce simulation time, and outlines future work such as incorporating search capabilities and optimizing object sizes.
Data Engineer, Patterns & Architecture The future: Deep-dive into Microservic..., by Igor De Souza
With Industry 4.0, several technologies are used to analyze data in real time; maintaining, organizing, and building such a platform, however, is a complex and complicated job. Over the past 30 years, several ideas for centralizing the database in a single place, as the unified and true source of data, have been implemented in companies: the Data Warehouse, NoSQL stores, the Data Lake, and the Lambda and Kappa architectures.
Software engineering, on the other hand, has applied ideas such as microservices to split applications apart in order to simplify them and improve their performance.
The idea is to apply the microservice patterns to the data and divide the model into several smaller ones, and a good way to split it up is to follow DDD principles. That is how I try to explain and define Data Mesh and Data Fabric.
The elephant in the room: big data analytics in the cloud, by Khazret Sapenov
The document discusses big data analytics in the cloud, including definitions of big data and analytics. It covers technologies like Hadoop, Dremel, and Storm, and how they can be used for business intelligence, operational intelligence, and value creation. It also discusses architecture considerations for big data analytic systems in the cloud, including data transfer speeds. The presentation aims to provide an overview of approaches for near real-time business intelligence and analytics using these technologies, covering both their applicability and their limitations when used in the cloud.
This course introduces students to cloud computing, artificial intelligence, and decentralized applications technologies. Students will learn about major cloud platforms and related services for computing, storage, networking, big data, and machine learning. They will combine these services to create intelligent autonomous networked solutions. The course also covers decentralized computing using Ethereum, smart contracts, and developing decentralized applications. Students will build their own projects combining cloud and decentralized technologies.
Madhava Reddy has over 11 years of experience as an IT consultant specializing in application development. He has extensive experience designing and developing enterprise web applications using Java/J2EE technologies. He also has expertise in databases like Oracle, SQL Server, and MySQL. Reddy currently works as a Senior Software Engineer for Legatus Solutions Corporation where he is involved in designing and developing a web application called Motor Carrier.
This document provides 6 IEEE project summaries in the domain of Java and cloud computing/data mining. The summaries are:
1. A decentralized access control scheme for secure cloud data storage that supports anonymous authentication.
2. A performance analysis framework for distributed file systems that qualitatively and quantitatively evaluates performance.
3. Approaches to guarantee trustworthy transactions on cloud servers by enforcing policy consistency constraints.
4. A scalable MapReduce approach for anonymizing large datasets to satisfy privacy requirements like k-anonymity.
5. A resource allocation scheme for a self-organizing cloud that achieves maximized utilization and optimal execution efficiency.
6. An attribute-based encryption framework for flexible
The document provides an overview of cloud computing concepts, technologies, and business implications. It discusses cloud models including IaaS, PaaS, and SaaS. It demonstrates cloud capabilities through examples on Amazon AWS, Google App Engine, and Windows Azure. It also covers MapReduce and graph processing as cloud programming models and provides a case study on using cloud computing for a predictive quality project.
The document discusses cloud computing concepts and technologies. It provides an introduction to cloud models like IaaS, PaaS and SaaS and demonstrates cloud capabilities through examples on Amazon AWS, Google App Engine and Windows Azure. It also discusses the Hadoop distributed file system and MapReduce programming model for large scale data processing in the cloud.
The document discusses cloud computing concepts and technologies. It provides an introduction to cloud models like IaaS, PaaS and SaaS and demonstrates cloud capabilities through examples on Amazon AWS, Google App Engine and Windows Azure. It also discusses the Hadoop distributed file system and MapReduce programming model for large scale data processing in the cloud.
ClouNS - A Cloud-native Application Reference Model for Enterprise ArchitectsNane Kratzke
The capability to operate cloud-native applications can create enormous business growth and value. But enterprise architects should be aware that cloud-native applications are vulnerable to vendor lock-in. We investigated cloud-native application design principles, public cloud service providers, and industrial cloud standards. All results indicate that most cloud service categories seem to foster vendor lock-in situations which might be especially problematic for enterprise architectures. This might sound disillusioning at first. However, we present a reference model for cloud-native applications that relies only on a small subset of well standardized IaaS services. The reference model can be used for codifying cloud technologies. It can guide technology identification, classification, adoption, research and development processes for cloud-native application and for vendor lock-in aware enterprise architecture engineering methodologies.
The document discusses performance evaluation of different cloud computing architectures and deployment models. It begins by defining cloud architecture and deployment models, including public, private and hybrid clouds as well as Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). It then discusses defining test scenarios, identifying architectures and models to evaluate, and preparing a report on the performance evaluation methodology, test results and analysis. The document also provides a literature review on previous research related to evaluating cloud platforms, characteristics of cloud deployment models, components of cloud architecture, and algorithms for handling constraints. It concludes by identifying research gaps in evaluating specific deployment models, a lack of real-world evaluations, limited research
Simplifying Cloud Architectures with Data VirtualizationDenodo
Watch here: https://bit.ly/2yxLo6f
Moving applications and data to the Cloud is a priority for many organizations. The benefits - in terms of flexibility, agility, and cost savings - are driving Cloud adoption. However, the journey to the Cloud is not as easy as many people think. The process of moving application and data to the Cloud is challenging and can entail widespread disruption across the organization if not carefully managed. Even when systems are migrated to the Cloud, the resultant hybrid or multi-Cloud architecture is more complex for users to navigate, making it harder for them to get the data that they need to do their jobs.
Data Virtualization can help organizations at all stages of their journey to the Cloud - during migration and also in the resultant hybrid or multi-Cloud architectures. Attend this webinar to learn how Data Virtualization can:
- Help organizations manage risk and minimize the disruption caused as systems are moved to the Cloud
- Provide a single point of access for data that is both on-premise and in the Cloud, making it easier for users to find and access the data that they need
- Provide a security layer to protect and manage your data when it's distributed across hybrid or multi-Cloud architectures
The document proposes a method for crawling the configurations of virtual appliances in cloud computing environments. It involves discovering appliances using metadata from cloud APIs, crawling configurations in parallel using configuration management agents, and storing the results in a centralized data store. The method was validated with an implementation for Amazon EC2 that used Chef to detect configurations and stored results in MongoDB and Google App Engine. Potential applications of the collected configuration metadata include generating configuration manifests for interoperability and using the data to support decision making.
Towards CloudML, a Model-Based Approach to Provision Resources in the CloudsSébastien Mosser
The Cloud-computing paradigm advocates the use of re- sources available “in the clouds”. In front of the multiplicity of cloud providers, it becomes cumbersome to manually tackle this heterogene- ity. In this paper, we propose to define an abstraction layer used to model resources available in the clouds. This cloud modelling language (CloudML) allows cloud users to focus on their needs, i.e., the modelling the resources they expect to retrieve in the clouds. An automated provi- sioning engine is then used to automatically analyse these requirements and actually provision resources in clouds. The approach is implemented, and was experimented on prototypical examples to provision resources in major public clouds (e.g., Amazon EC2 and Rackspace).
Scaling Multi-Cloud Deployments with Denodo: Automated Infrastructure ManagementDenodo
Watch full webinar here: https://bit.ly/3oWR1Bl
The future of infrastructure management lies in automation. In this session, Denodo subject matter expert will talk about how in a multi-cloud scenario, the infrastructure can be automatically managed transparently via a web GUI. Audience will get to see that in action through a live demo.
A Successful Journey to the Cloud with Data VirtualizationDenodo
Watch full webinar here: https://bit.ly/3mPLIlo
A shift to the cloud is a common element of any current data strategy. However, a successful transition to the cloud is not easy and can take years. It comes with security challenges, changes in downstream and upstream applications, and new ways to operate and deploy software. An abstraction layer that decouples data access from storage and processing can be a key element to enable a smooth journey to the cloud.
Attend this webinar to learn more about:
- How to use Data Virtualization to gradually change data systems without impacting business operations
- How Denodo integrates with the larger cloud ecosystems to enable security
- How simple it is to create and manage a Denodo cloud deployment
(R)evolution of the computing continuum - A few challengesFrederic Desprez
Initially proposed to interconnect computers worldwide, the Internet has significantly evolved to become in two decades a key element in almost all our activities. This (r)evolution mainly relies on the progress that has been achieved in computation and communication fields and that has led to the well-known and widely spread Cloud Computing paradigm.
With the emergence of the Internet of Things (IoT), stakeholders expect a new revolution that will push, once again, the limits of the Internet, in particular by favouring the convergence between physical and virtual worlds. This convergence is about to be made possible thanks to the development of minimalist sensors as well as complex industrial physical machines that can be connected to the Internet through edge computing infrastructures.
Among the obstacles to this new generation of Internet services is the development of a convenient and powerful framework that should allow operators, and devops, to manage the life-cycle of both the digital infrastructures and the applications deployed on top of these infrastructures, throughout the cloud to IoT continuum.
In this keynote, Frédéric Desprez and his colleague Adrien Lebre presented research issues and provide preliminary answers to identify whether the challenges brought by this new paradigm is an evolution or a revolution for our community.
cloud computing - concepts and technologies and mechanisms of tackling problems in cloud
you plz ignore who created it , plz focus on problem oriented points
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU und die Lizenzen nach dem CCB- und CCX-Modell sind für viele in der HCL-Community seit letztem Jahr ein heißes Thema. Als Notes- oder Domino-Kunde haben Sie vielleicht mit unerwartet hohen Benutzerzahlen und Lizenzgebühren zu kämpfen. Sie fragen sich vielleicht, wie diese neue Art der Lizenzierung funktioniert und welchen Nutzen sie Ihnen bringt. Vor allem wollen Sie sicherlich Ihr Budget einhalten und Kosten sparen, wo immer möglich. Das verstehen wir und wir möchten Ihnen dabei helfen!
Wir erklären Ihnen, wie Sie häufige Konfigurationsprobleme lösen können, die dazu führen können, dass mehr Benutzer gezählt werden als nötig, und wie Sie überflüssige oder ungenutzte Konten identifizieren und entfernen können, um Geld zu sparen. Es gibt auch einige Ansätze, die zu unnötigen Ausgaben führen können, z. B. wenn ein Personendokument anstelle eines Mail-Ins für geteilte Mailboxen verwendet wird. Wir zeigen Ihnen solche Fälle und deren Lösungen. Und natürlich erklären wir Ihnen das neue Lizenzmodell.
Nehmen Sie an diesem Webinar teil, bei dem HCL-Ambassador Marc Thomas und Gastredner Franz Walder Ihnen diese neue Welt näherbringen. Es vermittelt Ihnen die Tools und das Know-how, um den Überblick zu bewahren. Sie werden in der Lage sein, Ihre Kosten durch eine optimierte Domino-Konfiguration zu reduzieren und auch in Zukunft gering zu halten.
Diese Themen werden behandelt
- Reduzierung der Lizenzkosten durch Auffinden und Beheben von Fehlkonfigurationen und überflüssigen Konten
- Wie funktionieren CCB- und CCX-Lizenzen wirklich?
- Verstehen des DLAU-Tools und wie man es am besten nutzt
- Tipps für häufige Problembereiche, wie z. B. Team-Postfächer, Funktions-/Testbenutzer usw.
- Praxisbeispiele und Best Practices zum sofortigen Umsetzen
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides introduction to UiPath Communication Mining, importance and platform overview. You will acquire a good understand of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Full-RAG: A modern architecture for hyper-personalizationZilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Infrastructure Challenges in Scaling RAG with Custom AI modelsZilliz
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf
Model-Driven Cloud Data Storage
1. Model-Driven Cloud Data Storage
Juan Castrejón, Genoveva Vargas-Solar, Christine Collet, Rafael Lozano
Université de Grenoble, CNRS, Grenoble INP, Tecnológico de Monterrey
CloudMDE 2012
2. Background
• Cloud computing (NIST-2011)
  • Utility computing model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable resources
• Cloud data storage (Ruiz-2011, Armbrust-2009)
  • Store, retrieve and manage large amounts of data, using highly scalable distributed infrastructures
• Polyglot persistence (Fowler-2011)
  • Different data storage technologies for different kinds of data
  • Each storage mechanism introduces a new interface to be learned
  • To get decent performance, you have to understand a lot about how the technology works
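Fowler's point that each storage mechanism introduces a new interface can be made concrete with a minimal sketch. The interfaces and in-memory stand-ins below are hypothetical, not any real client API: even within one application, a key-value store and a document store expose different operations to learn.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of polyglot persistence (hypothetical interfaces):
// each storage model forces a different API onto the application.
public class PolyglotSketch {

    // Key-value style: opaque values addressed by key (e.g. Redis, Voldemort).
    interface KeyValueStore {
        void put(String key, String value);
        String get(String key);
    }

    // Document style: structured documents looked up by identifier
    // (e.g. MongoDB, CouchDB).
    interface DocumentStore {
        void insert(String id, Map<String, Object> doc);
        Map<String, Object> findById(String id);
    }

    // In-memory stand-ins, for illustration only.
    static class InMemoryKV implements KeyValueStore {
        private final Map<String, String> data = new HashMap<>();
        public void put(String key, String value) { data.put(key, value); }
        public String get(String key) { return data.get(key); }
    }

    static class InMemoryDocs implements DocumentStore {
        private final Map<String, Map<String, Object>> data = new HashMap<>();
        public void insert(String id, Map<String, Object> doc) { data.put(id, doc); }
        public Map<String, Object> findById(String id) { return data.get(id); }
    }

    public static void main(String[] args) {
        KeyValueStore sessions = new InMemoryKV();   // volatile session state
        DocumentStore catalog = new InMemoryDocs();  // semi-structured catalog data

        sessions.put("session:42", "user=alice");
        Map<String, Object> item = new HashMap<>();
        item.put("name", "widget");
        catalog.insert("item:1", item);

        System.out.println(sessions.get("session:42"));              // user=alice
        System.out.println(catalog.findById("item:1").get("name"));  // widget
    }
}
```

Routing each kind of data to its own store is exactly the selection problem the rest of the deck addresses.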
3. Background
• Variety of data storage models and implementations (Cattell-2011, Edlich-2012)
  • Models: key-value, document, extensible record, graph, blob, object, queue, XML, relational
  • Implementations: Redis, Voldemort, MongoDB, CouchDB, Cassandra, Neo4J, db4o, eXist-db, etc. (as of today, over 120 options)
• Cloud deployment environments (Ruiz-2011)
  • Different combinations of pricing, support, service level agreements, and management APIs
  • Public providers (Amazon, Windows Azure, Xeround, etc.)
  • Private providers (Eucalyptus, OpenNebula, etc.)
4. Use the right tool for the right job…
How do I know which is the right tool for the right job? (Katsov-2012)
5. Problem
• How to specify data requirements for cloud environments?
• For a given set of data requirements, how to choose an appropriate combination of cloud storage implementation and deployment provider?
• How to generate and manage everything that is required to work with the selected combination?
6. Existing solutions
• Integration of cloud storage platforms (Livenson-2011)
  • Cloud Data Management Interface (CDMI) (SNIA-2011) proxy to integrate blob and queue data stores
• Data integration over NoSQL stores (Curé-2011)
  • Integration of relational and NoSQL databases (document, column)
  • Focus on efficient answering of queries
• Storage provider selection (Ruiz-2011, Ruiz-2012)
  • Characterize storage provider features (e.g., performance, cost)
  • Specify requirements for application datasets (e.g., expected size, access latency, concurrent clients)
  • Based on this information, an assignment of datasets to different storage systems is proposed
7. Existing solutions
• Modeling as a Service (Bruneliere-2010)
  • Deploy and execute model-driven services over the Internet (SaaS)
• Design and deploy applications in the cloud (Peidro-2011)
  • Promotes graphical models to capture cloud requirements
  • Models automatically deployed to PaaS and IaaS environments
• Application design/execution in multiple clouds (Ardagna-2012)
  • MDE quality-driven method for design, development and operation
  • Monitoring and feedback system
8. Limitations of existing solutions
• Support for a limited set of cloud storage interfaces
• Data integration often tied to the relational model
• Limited information for the selection of data storage systems
• Consideration of high-level cloud models (SaaS) but limited support for low-level models (PaaS and IaaS)
9. Objectives
1. Provide adequate notations and environments to characterize cloud data storage requirements
2. Select appropriate cloud data storage implementations and deployment providers
3. Manage the artifacts required to work with different combinations of cloud storage implementations and providers
10. Objectives
[Layered diagram: cloud requirements are captured in conceptual models, at a high level of abstraction (conceptual models and environments); a selection process maps each conceptual model to a logical model; artifacts management maps each logical model to a physical model, at a low level of abstraction (storage implementations and providers).]
11. Proposed solution
• Rely on Model-Driven Engineering (MDE) (Kent-2002) to:
  • Characterize cloud storage requirements
  • Encapsulate selection, administration and use of cloud data storage implementations
• Why MDE?
  • Avoid dependencies between high-level abstractions (data models) and low-level abstractions (storage implementations and providers)
  • Emphasis on relying on different levels of modeling notations
  • Generation of low-level abstractions by automatic transformation procedures
12. Objective 1: Data requirements for the cloud
• Do traditional modeling notations (ER and UML diagrams) make sense for data storage in the cloud?
  • Define or extend notations and environments for cloud data modeling
• What requirements should a cloud data storage notation consider?
  • Rely on quality standards (ISO/IEC SQuaRE, S-Cube) to guide this analysis; examples: performance, efficiency, portability, etc.
• How to characterize the proposed requirements?
  • Associate quality metrics relevant to (cloud) scenarios, based on the characteristics of the reference standard (Jureta-2010)
  • Validate currently proposed metrics, for example: throughput, cost, access latency, etc.
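One possible way to attach such metrics to a conceptual model is sketched below as a Java annotation. The annotation and its attributes are hypothetical, not part of any existing standard or framework; the idea is only that each dataset in the model carries the quality attributes it must satisfy, which a selection process can later read.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Hypothetical characterization of a dataset with the kinds of metrics
// mentioned on the slide (throughput, latency, concurrency).
public class RequirementsSketch {

    @Retention(RetentionPolicy.RUNTIME)
    @interface DataRequirement {
        int expectedThroughputOps();  // operations per second
        int maxAccessLatencyMs();     // milliseconds
        int concurrentClients();
    }

    // A dataset in the conceptual model, annotated with its requirements.
    @DataRequirement(expectedThroughputOps = 5000,
                     maxAccessLatencyMs = 20,
                     concurrentClients = 200)
    static class SessionData { }

    public static void main(String[] args) {
        // A selection process could inspect these values via reflection.
        DataRequirement req = SessionData.class.getAnnotation(DataRequirement.class);
        System.out.println("latency bound: " + req.maxAccessLatencyMs() + " ms");
    }
}
```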
13. Objective 2: Data storage selection
• Based on the analysis of historic data and usage patterns
  • Both in test applications and in systems generated with our modeling environment
• Monitoring data is gathered in a non-intrusive manner
  • AOP monitoring
  • Monitor the behaviour of the selected implementations and providers, based on the metrics specified in the modeling environment
  • Compare expected values and actual performance
• Monitoring data is shared in an open, collaborative manner
  • Used by our decision process
  • Available to external users
• Users can work, at the same time, with multiple combinations of storage implementations and providers
  • Test the performance of the different combinations
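The non-intrusive monitoring described above can be approximated with a JDK dynamic proxy that records a latency sample around each store call, without touching the store implementation. This is a simplified stand-in for an AOP-based aspect; all names are hypothetical.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

// Sketch of non-intrusive latency monitoring via a dynamic proxy,
// standing in for the AOP-based monitoring on the slide.
public class MonitoringSketch {

    interface DataStore {
        String get(String key);
    }

    static class SimpleStore implements DataStore {
        public String get(String key) { return "value-of-" + key; }
    }

    // Wraps a store so every call adds one latency sample (nanoseconds);
    // the store implementation itself is not modified.
    static DataStore monitored(DataStore target, List<Long> samplesNanos) {
        InvocationHandler handler = (proxy, method, args) -> {
            long start = System.nanoTime();
            Object result = method.invoke(target, args);
            samplesNanos.add(System.nanoTime() - start);
            return result;
        };
        return (DataStore) Proxy.newProxyInstance(
                DataStore.class.getClassLoader(),
                new Class<?>[]{DataStore.class}, handler);
    }

    public static void main(String[] args) {
        List<Long> samples = new ArrayList<>();
        DataStore store = monitored(new SimpleStore(), samples);
        store.get("a");
        store.get("b");
        // Two calls -> two samples to compare against the expected metrics.
        System.out.println(samples.size());
    }
}
```

The collected samples are what the slide's decision process would compare against the expected values declared in the model.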
14. Objective 3: Cloud artifacts management
• Generate the low-level artifacts needed to work with data storage implementations and deployment providers
  • Configuration files for deployment providers
  • Data management interfaces (CDMI, Spring Data, etc.)
• Different levels of transformation procedures
  • From the high-level data model to an intermediate Domain-Specific Language (DSL) (Liu-2011, SpringRoo-2012)
  • From the intermediate DSL to configuration files, AOP monitoring aspects and data management interfaces (SpringData-2012)
• MDE transformation techniques
  • Model-to-Model (M2M), Model-to-Text (M2T)
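A minimal Model-to-Text sketch of the first transformation step: a toy in-memory class model rendered as Spring Roo shell commands. The actual tool chain uses MDE transformation frameworks rather than string concatenation, and exact Roo command options vary by version, so this hand-rolled version is for illustration only.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy M2T transformation: entity name + field map -> Roo shell commands.
public class ModelToTextSketch {

    static String toRooCommands(String entityName, Map<String, String> fields) {
        StringBuilder out = new StringBuilder();
        // One command creates the JPA entity...
        out.append("entity jpa --class ~.domain.").append(entityName).append('\n');
        // ...and one command per attribute adds a field.
        for (Map.Entry<String, String> f : fields.entrySet()) {
            out.append("field ").append(f.getValue())
               .append(" --fieldName ").append(f.getKey()).append('\n');
        }
        return out.toString();
    }

    public static void main(String[] args) {
        Map<String, String> fields = new LinkedHashMap<>();
        fields.put("name", "string");
        System.out.print(toRooCommands("Pet", fields));
    }
}
```

Feeding the generated script to the Roo shell is what produces the configuration files and data management interfaces listed above.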
15. Proof of concept (work in progress)
• Extension of Model2Roo (http://code.google.com/p/model2roo/)
[Diagram: (1) high-level abstractions: a UML class diagram is transformed into Spring Roo commands, which generate a Java web app using Spring Data; (2) low-level abstractions: the generated application can target a graph database or a relational database.]
16. Preliminary results
• Castrejón, J., Vargas-Solar, G., Collet, C., Lozano, R.: “Model-Driven Cloud Data Storage”. In: First International Workshop on Model-Driven Engineering on and for the Cloud (CloudMDE 2012). Co-located with ECMFA ’12. July 2012
• Castrejón, J., Vargas-Solar, G., Lozano, R.: “Model2Roo: Web Application Development based on the Eclipse Modeling Framework and Spring Roo”. In: First Workshop on Academics Modeling with Eclipse (ACME 2012). Co-located with ECMFA ’12. July 2012
18. References
• Ardagna, D., Di Nitto, E., Casale, G., et al.: MODACLOUDS, a model-driven approach for the design and execution of applications on multiple clouds. In: Models in Software Engineering Workshop (MiSE 2012). Co-located with ICSE ’12 (2012)
• Armbrust, M., Fox, A., Griffith, R., Joseph, A.D., et al.: Above the Clouds: A Berkeley View of Cloud Computing (2009)
• Bruneliere, H., Cabot, J., Jouault, F.: Combining model-driven engineering and cloud computing. In: Modeling, Design, and Analysis for the Service Cloud Workshop. MDA4ServiceCloud ’10 (2010)
• Cattell, R.: Scalable SQL and NoSQL data stores. SIGMOD Rec. 39, 12–27 (May 2011)
• Curé, O., Hecht, R., Le Duc, C., Lamolle, M.: Data integration over NoSQL stores using access path based mappings. In: Proceedings of the 22nd International Conference on Database and Expert Systems Applications (DEXA 2011). Hameurlain et al. (eds.), Part I, LNCS 6860, pp. 481–495 (2011)
• Edlich, S.: List of NoSQL databases. http://nosqldatabase.org/ (March 2012)
• Fowler, M.: Polyglot persistence. http://martinfowler.com/bliki/PolyglotPersistence.html (November 2011)
• Jureta, I., Borgida, A., Ernst, N., Mylopoulos, J.: Techne: Towards a new generation of requirements modeling languages with goals, preferences, and inconsistency handling. In: Proceedings of the 18th IEEE International Requirements Engineering Conference. pp. 115–124. RE 2010. IEEE Computer Society (2010)
• Katsov, I.: NoSQL data modeling techniques. http://highlyscalable.wordpress.com/2012/03/01/nosql-data-modeling-techniques/ (March 2012)
19. References
• Kent, S.: Model driven engineering. In: Butler, M., Petre, L., Sere, K. (eds.) Integrated Formal Methods, LNCS, vol. 2335, pp. 286–298. Springer Berlin (2002)
• Lenzerini, M.: Data integration is harder than you thought. In: Proceedings of the 9th International Conference on Cooperative Information Systems. pp. 22–26. CoopIS ’01, Springer-Verlag, London, UK (2001)
• Livenson, I., Laure, E.: Towards transparent integration of heterogeneous cloud storage platforms. In: Fourth International Workshop on Data Intensive Distributed Computing. DIDC ’11. Co-located with HPDC ’11 (2011)
• Liu, D., Zic, J.: Cloud#: A specification language for modeling cloud. In: Proceedings of the 2011 IEEE 4th International Conference on Cloud Computing. pp. 533–540. CLOUD ’11, IEEE Computer Society, Washington, DC, USA (2011)
• Peidro, J.E., Muñoz-Escoí, F.D.: Towards the next generation of model driven cloud platforms. In: 1st International Conference on Cloud Computing and Services Science. pp. 494–500. CLOSER ’11 (2011)
• Ruiz-Alvarez, A., Humphrey, M.: An automated approach to cloud storage service selection. In: Proceedings of the 2nd International Workshop on Scientific Cloud Computing. pp. 39–48. ScienceCloud ’11, ACM, New York, NY, USA (2011)
• Ruiz-Alvarez, A., Humphrey, M.: A model and decision procedure for data storage in cloud computing. In: Proceedings of the IEEE/ACM International Symposium on Cluster, Cloud, and Grid Computing. CCGrid ’12 (2012)
• Storage Networking Industry Association (SNIA): Cloud Data Management Interface (CDMI). http://www.snia.org/cdmi (September 2011)
• SpringSource: Spring Data projects. http://www.springsource.org/spring-data (March 2012)
• SpringSource: Spring Roo. http://www.springsource.org/spring-roo (March 2012)