This talk will provide an overview of the PaaS (Platform as a Service) landscape and will describe the Cloud Foundry open source PaaS, with its multi-framework, multi-service, multi-cloud model. Cloud Foundry allows developers to provision apps in Java/Spring, Ruby/Rails, Ruby/Sinatra and JavaScript/Node, and leverage services like MySQL, MongoDB, Redis, Postgres and RabbitMQ. It can be used as a public PaaS on CloudFoundry.com and other service providers (ActiveState, AppFog), to create your own private cloud, or on your laptop using the Micro Cloud Foundry VM. Micro Cloud Foundry is a very easy way for developers to start working on their application using their framework of choice and MongoDB, without the need to set up a development environment; your app is then one command (vmc push) away from deployment to cloudfoundry.com.
This talk shows how Spring technologies can help to develop applications for the cloud. PaaS offerings such as Google App Engine, Amazon Beanstalk, CloudBees and Cloud Foundry are shown, as well as other technologies such as NoSQL, RabbitMQ and Hadoop.
Slides from QConSF, Nov 19th, 2011, this time focusing on the globally distributed, industrial-strength Java Platform as a Service that Netflix has built and runs on top of AWS and Cassandra. Parts of that platform are being released as open source: Curator, Priam and Astyanax.
Distributed Design and Architecture of Cloud Foundry - Derek Collison
In this session we will dig deep into Cloud Foundry's core architecture and design principles. We will discuss the challenges around scaling and operating a large-scale service as we combined the PaaS and traditional IaaS layers, and how we achieve multiple updates per week to the system with no perceived downtime. Allowing users to download a single virtual machine that is a complete replica of the service presented some challenges as well, and we will discuss our approach to offering up the downloadable private cloud.
This talk covers Kafka cluster sizing, instance type selections, scaling operations, replication throttling and more. Don’t forget to check out the Kafka-Kit repository.
https://www.youtube.com/watch?time_continue=2613&v=7uN-Vlf7W5E
Introducing NoSQL and MongoDB to complement Relational Databases (AMIS SIG 14... - Lucas Jellema
This presentation gives a brief overview of the history of relational databases, ACID and SQL, and presents some of their key strengths and potential weaknesses. It introduces the rise of NoSQL: why it arose, what it entails, and when to use it. The presentation focuses on MongoDB as a prime example of a NoSQL document store and shows how to interact with MongoDB from JavaScript (NodeJS) and Java.
Resilient microservices with Kubernetes - Mete Atamel, ITCamp
Creating a single microservice is a well understood problem. Creating a cluster of load-balanced microservices that are resilient and self-healing is not so easy. Managing that cluster with rollouts and rollbacks, scaling individual services on demand, securely sharing secrets and configuration among services is even harder. Kubernetes, an open-source container management system, can help with this. In this talk, we will start with a simple microservice, containerize it using Docker, and scale it to a cluster of resilient microservices managed by Kubernetes. Along the way, we will learn what makes Kubernetes a great system for automating deployment, operations, and scaling of containerized applications.
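The self-healing and scaling behavior described above comes from Kubernetes' declarative reconcile loop: a controller repeatedly compares the desired state with the observed state and converges them. A minimal, framework-free sketch of that idea (all names here are illustrative, not the real Kubernetes API):

```python
# Toy reconcile loop: converge observed replicas toward a desired count,
# the core idea behind a Kubernetes Deployment (illustrative only).

def reconcile(desired: int, observed: list) -> list:
    """Return a replica list matching the desired count."""
    observed = [r for r in observed if r["healthy"]]   # drop crashed pods
    while len(observed) < desired:                     # scale up / self-heal
        observed.append({"name": f"pod-{len(observed)}", "healthy": True})
    return observed[:desired]                          # scale down if needed

# One pod has crashed; the controller replaces it on the next iteration.
state = [{"name": "pod-0", "healthy": True},
         {"name": "pod-1", "healthy": False}]
state = reconcile(desired=3, observed=state)
print(len(state))  # 3 healthy replicas
```

Because the loop is driven only by desired vs. observed state, the same mechanism handles rollouts, crashes, and manual scaling without special cases.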
Building Reactive Fast Data & the Data Lake with Akka, Kafka, Spark - Todd Fritz
In this session, we will discuss:
* reactive architecture tenets
* distributed “fast data” streams
* application and analytics focused Data Lake
Enterprise level concerns and the importance of holistic governance, operational management, and a Metadata Lake will be conceptually investigated. The next level of detail will be to explore what a prospective architecture looks like at scale with Terabytes of ingestion per day, how scale puts pressure on an architecture, and how to be successful without losing data in a mission critical system via resilient, self-healing, scalable technologies. DevOps and application architecture concerns will be first-class themes throughout.
Reactive principles and technology will be the second act of this talk. Kafka. Akka. Spark. Various streaming technologies (Kafka Streams, Akka Streams, Spark Streaming) will be reviewed to identify what they are best suited for. The fast data pipeline discussion will center around Kafka, Akka, and Apache Flink (Lightbend Fast Data platform). We’ll also walk through an exciting addition to the Akka family, Alpakka, which is a Camel equivalent for Enterprise Integration Patterns.
The final act will be to dive into the Data Lake, from both an analytics and application development perspective. Technologies used to explain concepts will include Amazon and Hadoop. A Data Lake may service multiple analytics consumers with various “views” (and access levels) of data. It may also be a participant of various applications, perhaps by acting as a centralized source for reference data or common middleware (in turn feeding the analytics aspect). The concept of the Metadata Lake to apply structure, meaning and purpose will be an over-arching success factor for a Data Lake. The difference between the Data Lake and Metadata Lake is conceptually similar to a Halocline… Various technologies (Iglu/Snowplow and more) will be discussed from a feature standpoint to flesh out the technology capabilities needed for Data Lake governance.
Billions of Messages in Real Time: Why Paypal & LinkedIn Trust an Engagement ... - confluent
(Bruno Simic, Solutions Engineer, Couchbase)
Breakout during Confluent’s streaming event in Munich. This three-day hands-on course focused on how to build, manage, and monitor clusters using industry best-practices developed by the world’s foremost Apache Kafka™ experts. The sessions focused on how Kafka and the Confluent Platform work, how their main subsystems interact, and how to set up, manage, monitor, and tune your cluster.
OpenNebulaConf2015 1.03 Private, Public, Hybrid: The Real Economics of Open S... - OpenNebula Project
With all the debate on public, private and hybrid clouds, one of the main missing points is hard data: what is the actual, real economic impact of choosing a specific cloud model? We will present the results of an extensive survey of cost models, the impact of choosing an open source cloud platform like OpenNebula, the difference between planning for “cattle or cows”, and how to compare different clouds using reliable performance metrics. We will also present a small sample of potentially relevant open source projects that may help in the deployment and management of ad-hoc cloud platforms.
Author Biography
Carlo Daffara is the Technical Director of Cloudweavers, a company that developed the first hyperconverged appliance based on OpenNebula; the Italian member of the European Working Group on Libre Software; and co-coordinator of the working group on SMEs of the EU ICT task force on competitiveness. Since 1999 he has worked as an evaluator for IST programme submissions in the fields of component-based software engineering, GRIDs and international cooperation. He is coordinator of the open source platforms technical area of the IEEE technical committee on scalable computing, co-chair of the SIENA EU cloud initiative roadmap editorial board, and part of the editorial review board of the International Journal of Open Source Software & Processes (IJOSSP). He has worked as a researcher in the field of collaborative development and open source business models, working with international entities to promote the development of economic networks through open source software, and has recently worked with public authorities such as UK JISC and CENATIC on estimating the economic impact of cloud computing and the adoption of open source development models. For OpenForum Europe he developed the first Europe-wide macroeconomic analysis of the economic value introduced by the adoption of open source software.
Strimzi - Where Apache Kafka meets OpenShift - OpenShift Spain MeetUp - José Román Martín Gil
Apache Kafka is the most widely used data streaming broker among companies. It can manage millions of messages easily and is the base of many architectures built on events, microservices, orchestration, and now cloud environments. OpenShift is the most widely adopted Platform as a Service (PaaS). It is based on Kubernetes and helps companies easily deploy any kind of workload in a cloud environment. Thanks to many of its features, it is the base for many architectures built on stateless applications for new Cloud Native Applications. Strimzi is an open source community that implements a set of Kubernetes Operators to help you manage and deploy Apache Kafka brokers in OpenShift environments.
These slides will introduce you to Strimzi as a new component on OpenShift to manage your Apache Kafka clusters.
Slides used at OpenShift Meetup Spain:
- https://www.meetup.com/es-ES/openshift_spain/events/261284764/
8 Lessons Learned from Using Kafka in 1000 Scala microservices - Scale by the... - Natan Silnitsky
Kafka is the bedrock of Wix's distributed microservices system. For the last 5 years we have learned a lot about how to successfully scale our event-driven architecture to roughly 1500 microservices.
We’ve managed to achieve greater decoupling and independence for our various services and dev teams, which have very different use cases, while maintaining a single uniform infrastructure.
In these slides you will learn about 8 key decisions and steps you can take in order to safely scale-up your Kafka-based system. These include:
* How to increase dev velocity of event-driven-style code.
* How to optimize working with Kafka in a polyglot setting.
* How to support a growing amount of traffic and a growing number of developers.
Extending DevOps to Big Data Applications with Kubernetes - Nicola Ferraro
DevOps, continuous delivery and modern architectural trends can incredibly speed up the software development process. Big Data applications cannot be an exception and need to keep the same pace.
Event Sourcing, Stream Processing and Serverless (Benjamin Stopford, Confluen... - confluent
In this talk we’ll look at the relationship between three of the most disruptive software engineering paradigms: event sourcing, stream processing and serverless. We’ll debunk some of the myths around event sourcing. We’ll look at the inevitability of event-driven programming in the serverless space, and we’ll see how stream processing links these two concepts together with a single ‘database for events’. As the story unfolds we’ll dive into some use cases, examine the practicalities of each approach, particularly the stateful elements, and finally extrapolate how their future relationship is likely to unfold. Key takeaways include: the different flavors of event sourcing and where their value lies; the difference between stream processing at the application and infrastructure levels; the relationship between stream processors and serverless functions; and the practical limits of storing data in Kafka and stream processors like KSQL.
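The ‘database for events’ idea above reduces to a small sketch: state is never stored directly, it is derived by folding an append-only event log, which is what lets a stream processor rebuild any view at any time. A minimal, library-free illustration (the event names are invented for the example, not from Kafka or KSQL):

```python
# Minimal event sourcing: current state is a left fold over an immutable log.

def apply(balance: int, event: dict) -> int:
    """Apply a single event to the running state."""
    if event["type"] == "deposited":
        return balance + event["amount"]
    if event["type"] == "withdrawn":
        return balance - event["amount"]
    return balance  # unknown events are ignored, easing schema evolution

log = [
    {"type": "deposited", "amount": 100},
    {"type": "withdrawn", "amount": 30},
    {"type": "deposited", "amount": 5},
]

# Replaying the log from the start reproduces the state deterministically.
balance = 0
for event in log:
    balance = apply(balance, event)
print(balance)  # 75
```

Because the fold is deterministic, the same log can feed many independently rebuilt views, which is the link the talk draws between event sourcing and stream processing.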
[db tech showcase Tokyo 2017] C24: Taking off to the clouds. How to use DMS in... - Insight Technology, Inc.
The presentation will discuss challenges and problems people experience during migration to a cloud. We check what set of tools AWS offers to overcome those problems and how to use AWS Database Migration Service (DMS) and the Schema Conversion Tool. We will go through the process, supported engines, different modes, options and possible problems. Primarily the session will focus on Oracle database migration, but we also touch on other engines and areas where the tool can be used.
This is the meat of the presentation: it describes in detail how to use anti-architecture to define what gets done, then discusses patterns, type systems, PaaS frameworks, services and components. There is a detailed explanation of Cassandra as a data store and of the open source components.
Video and slides synchronized, mp3 and slide download available at URL http://bit.ly/1awkL99.
Details on Pinterest's architecture, its systems (Pinball, Frontdoor) and stack: MongoDB, Cassandra, Memcache, Redis, Flume, Kafka, EMR, Qubole, Redshift, Python, Java, Go, Nutcracker, Puppet, etc. Filmed at qconsf.com.
Yash Nelapati is an infrastructure engineer at Pinterest, where he focuses on scalability, capacity planning and architecture. Prior to Pinterest he worked in web development and rapid UI prototyping. Marty Weiner joined Pinterest in early 2011 as the 2nd engineer. He previously worked at Azul Systems as a VM engineer focused on building and improving the JIT compilers in HotSpot.
Real-time Streaming Pipelines with FLaNK - Data Con LA
Introducing the FLaNK stack which combines Apache Flink, Apache NiFi and Apache Kafka to build fast applications for IoT, AI, rapid ingest and deploy them anywhere. I will walk through live demos and show how to do this yourself.
FLaNK provides a quick set of tools to build applications at any scale for any streaming and IoT use cases.
We will discuss a use case - Smart Stocks with FLaNK (NiFi, Kafka, Flink SQL)
Bio -
Tim Spann is an avid blogger and the Big Data Zone Leader for DZone (https://dzone.com/users/297029/bunkertor.html). He runs the successful Future of Data Princeton meetup, with over 1200 members, at http://www.meetup.com/futureofdata-princeton/. He is currently a Senior Solutions Engineer at Cloudera in the Princeton, New Jersey area. You can find all the source and material behind his talks on his GitHub and community blog:
https://github.com/tspannhw/ApacheDeepLearning201
https://community.hortonworks.com/users/9304/tspann.html
Embracing Database Diversity with Kafka and Debezium - Frank Lyaruu
There was a time not long ago when we used relational databases for everything. Even if the data wasn’t particularly relational, we shoehorned it into relational tables, often because that was the only database we had. Thank god these dark times are over, and now we have many different kinds of NoSQL databases: document, realtime, graph, column. But that does not solve the problem that the same data might be a graph from one perspective, yet a collection of documents from another.
It would be really nice if we could access that same data in many different ways, depending on the context of what we want to achieve in our current task.
As software architects, this is not easy to solve but definitely possible. We can design an architecture using event sourcing: capture the data with Debezium, publish it to a Kafka topic, use Kafka Streams to model the data the way we like, and store it in various different data stores, so we can synchronize data between them.
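The pipeline sketched above (capture row-level changes, publish them, derive per-consumer views) can be shown in miniature without Debezium or Kafka: one change stream feeds two differently shaped read models. All names here are illustrative:

```python
# One change stream, two derived views kept in sync from the same events:
# a document-style index and a graph-style adjacency map.

changes = [  # row-level change events, as a CDC tool like Debezium would emit
    {"op": "insert", "row": {"id": 1, "name": "Ada", "manager": None}},
    {"op": "insert", "row": {"id": 2, "name": "Grace", "manager": 1}},
]

documents = {}  # document view: id -> full record
graph = {}      # graph view: manager id -> list of report ids

for change in changes:
    row = change["row"]
    if change["op"] == "insert":
        documents[row["id"]] = row
        if row["manager"] is not None:
            graph.setdefault(row["manager"], []).append(row["id"])

print(documents[2]["name"])  # Grace
print(graph[1])              # [2]
```

In the real architecture each view would be a separate consumer (or Kafka Streams topology) materializing into its own store; the point is that neither view is the source of truth, the change stream is.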
Using Spring and MongoDB on Cloud Foundry - Joshua Long
This talk introduces how to build MongoDB applications with Spring Data MongoDB on Cloud Foundry. Spring Data provides rich support for easily building applications that work on multiple data stores.
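The “multiple data stores” support mentioned above rests on Spring Data’s repository abstraction: calling code depends only on a small save/find interface, so the backing store can be swapped. A rough, store-agnostic sketch of that idea in plain Python (not Spring’s actual API; the in-memory dict stands in for MongoDB):

```python
# Repository pattern in miniature: business logic talks only to save/find,
# so the backing store can change without touching callers.

class InMemoryRepository:
    """Stand-in for a store-backed repository (e.g. MongoDB in Spring Data)."""

    def __init__(self):
        self._store = {}

    def save(self, entity: dict) -> dict:
        self._store[entity["id"]] = entity
        return entity

    def find_by_id(self, entity_id):
        return self._store.get(entity_id)  # None if absent

repo = InMemoryRepository()
repo.save({"id": "42", "title": "Spring Data MongoDB on Cloud Foundry"})
print(repo.find_by_id("42")["title"])
```

In Spring Data the equivalent interface is generated for you and bound to the MongoDB service that Cloud Foundry provisions; the sketch only shows why the abstraction keeps application code portable.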
Session Presented @IndicThreads Cloud Computing Conference, Pune, India ( http://u10.indicthreads.com )
------------
More and more enterprises are moving their IT infrastructure to cloud platforms. Of all the components, data storage still remains a tricky part of the puzzle. I would like to present an overview of the choices currently available to us as software developers, along with their advantages and limitations. Based on these choices, we may need to rethink the design and architecture of the data-manipulation components of the applications we plan to put on the cloud. Following is an overview of the proposed agenda:
* Existing “Cloud Capable” and “Cloud Native” Relational DBMS
* Existing “Cloud Capable” and “Cloud Native” Non-Relational DBMS
* Main differences between Relational and Non-Relational DBMS’s
* Advantages and Limitations of Relational DBMS on Cloud Platforms
* Advantages and Limitations of Non-Relational DBMS on Cloud Platforms
* Design Patterns while using Non-Relational DBMS in the application
* Code Walk-through showing Integration of “Cloud Capable” and “Cloud Native” Non-Relational DBMS with a Web-Application
Takeaways from the session
* Overview of current Market Situation w.r.t. Data Storage on Cloud
* Helpful Pointers towards making the right choice of Data Storage platform
* How Non-Relational DBMS’s can be integrated into our applications
The Spring framework packs a lot of punch, out of the box! The surface-level component model is extraordinarily flexible and works well in most situations, but the real power of Spring lies just underneath, in the numerous SPIs that Spring exposes so that you can tailor the component model to your own use cases. Spring's SPIs are a great example of what Bob Martin describes as the open-closed principle, and they provide the solid underpinnings upon which the other Spring frameworks, including Spring Integration, Spring MVC and Spring Batch, are built. In this talk, Josh Long, the Spring developer advocate from SpringSource, provides a walking tour of Spring's extension points.
Building A Scalable Open Source Storage Solution - Phil Cryer
The Biodiversity Heritage Library (BHL), like many other projects within biodiversity informatics, maintains terabytes of data that must be safeguarded against loss. Further, a scalable and resilient infrastructure is required to enable continuous data interoperability, as BHL provides unique services to its community of users. This volume of data and associated availability requirements present significant challenges to a distributed organization like BHL, not only in funding capital equipment purchases, but also in ongoing system administration and maintenance. A new standardized system is required to bring new opportunities to collaborate on distributed services and processing across what will be geographically dispersed nodes. Such services and processing include taxon name finding, indexes or GUID/LSID services, distributed text mining, names reconciliation and other computationally intensive tasks, or tasks with high availability requirements.
AFCEA C4I Symposium: The 4th C in C4I Stands for Cloud: Factors Driving Adopti... - Patrick Chanezon
Computer systems architecture evolves in cycles every 15-20 years, oscillating between centralization and decentralization: the centralized mainframes of the 60s, the decentralized PCs of the 80s, the centralized web apps of the 90s. Since 2010, we have seen a new architecture shift back to the 80s client-server model, with 3 trends: powerful mobile devices (Android, iPhone), the browser becoming a rich client platform with HTML5, and cloud platforms commoditizing distributed computing on the server. This talk is about the server side of the current architecture shift.
As with most technology architecture changes, cloud computing adoption is driven by factors from multiple dimensions, not only technical ones:
- technology: Big Data & fast networks, shift from vertical to horizontal scalability, commoditization of distributed computing (Virtualization, Sharding, Storage, NoSQL databases, Paxos, Map/Reduce, Go language), centralization of security
- economy: broadband and wireless ubiquity, shift from products to services, economies of scale, Moore's law, cost of electricity becoming the main driver for computing cost, pay-as-you-go models
- culture: consumerization of enterprise technology, technology achieves ubiquity by disappearing
20 years ago, when I was involved with Command and Control Systems for the French DoD, they were called C3I. Since then, it seems, they added a C for Computers: C4I. Maybe for the next 20 years the 4th C of C4I should stand for Cloud.
Hyves: Mobile app development with HTML5 and Javascript - nlwebperf
These are the slides of Emiel's presentation about how Hyves supports multiple mobile frameworks with minimal effort through the use of HTML5 and JavaScript. Topics include mobile architecture, build systems, testing frameworks and how Hyves uses PhoneGap.
NoSQL is not a buzzword anymore. The array of non-relational technologies has found wide-scale adoption even in non-Internet-scale focus areas. With the advent of the Cloud, the churn has increased even more, yet there is no crystal-clear guidance on adoption techniques and architectural choices surrounding the plethora of options available. This session initiates you into the whys & wherefores, architectural patterns, caveats and techniques that will augment your decision-making process & boost your perception of architecting scalable, fault-tolerant & distributed solutions.
Kubernetes has many ways to scale your workloads; most of what we hear about is scaling the cluster up with either VM scale sets or autoscaling groups. There is another way: in this talk we will look at Virtual Kubelet. Virtual Kubelet allows us to talk to a cloud provider's container-as-a-service platform such as ACI, Fargate or ECI. We will deep dive into how you can scale your applications across Virtual Kubelet. One issue the Kubernetes Service type has is scaling to zero, due to the way routing to the pod happens when there is no pod for the service to route to. Scaling our applications to zero is just as important as scaling up. We will look at projects that integrate with the horizontal pod autoscaler to fix this issue, allowing us to scale our applications not only up but just as easily down, making our cluster truly elastic.
KubeCon China 2019 - Building Apps with Containers, Functions and Managed Ser... - Patrick Chanezon
Cloud native applications are composed of many technologies and components, but three canonical abstractions have emerged in the past few years that help developers structure their architecture: containers, functions responding to events, and managed services.
This talk will explain how to develop (Docker, local Kubernetes, virtual Kubelet, OpenFaaS), deploy (managed Kubernetes, functions and services) and package (CNAB specification and tooling) applications using these three components and look at not only deployment workflows but also at day 2 concerns that a developer would need to consider in the cloud native landscape.
We will demo every topic and a Github repository will be available for developers to reproduce the demos and learn at their own pace.
Patrick Chanezon and Scott Coulton
Dockercon 2019 Developing Apps with Containers, Functions and Cloud Services - Patrick Chanezon
Cloud native applications are composed of containers, serverless functions and managed cloud services.
What is the best set of tools on your desktop to provide a rapid, iterative development experience and package applications using these three components?
This hands-on talk will explain how you can complement Docker Desktop, with its local Docker engine and Kubernetes cluster, with open source tools such as the Virtual Kubelet, Open Service Broker, the Gloo hybrid app gateway, Draft, and others, to build the most productive development inner loop for these types of applications.
It will also cover how you can use the Cloud Native Application Bundle (CNAB) format and its implementation in the Docker App experimental tool to package your application and manage it with container supply chain tooling such as Docker Hub.
GIDS 2019: Developing Apps with Containers, Functions and Cloud Services - Patrick Chanezon
Cloud native applications are increasingly composed of containers, serverless functions responding to events and managed cloud services. What is the best workflow and set of tools to provide a rapid, iterative development experience and to package applications using these three components?
This hands-on talk will compare and contrast several sets of tools and their associated workflows:
Using Docker Desktop, with its local Docker engine and Kubernetes cluster, with open source tools such as the Virtual Kubelet or the Gloo hybrid app gateway, to build the most productive development inner loop for these types of applications
The OpenFaaS, Fn, or Nuclio open source serverless frameworks to run functions in containers locally
Telepresence to run a container locally, connected to a remote cluster
Helm and Draft
Knative
The talk will also cover how you can use the Cloud Native Application Bundle (CNAB) format and tools to package your applications and share them using a container registry.
Patrick Chanezon, one of the pioneers of the Cloud at Google, VMware, Microsoft and Docker, tells the story of the software container revolution and how certain Taoist concepts, wei-wu-wei ("acting without acting") and ziran (naturalness, or spontaneity), help to better grasp what is at stake.
Containers accelerate Cloud adoption in the enterprise, with hybrid and multi-cloud architectures and the introduction of agile and DevOps practices to modernize existing applications and reduce infrastructure costs, and they enable new use cases in the Internet of Things and artificial intelligence.
Moby is an open source project providing a "LEGO set" of dozens of components, the framework to assemble them into specialized container-based systems, and a place for all container enthusiasts to experiment and exchange ideas.
One of these assemblies is Docker CE, an open source product that lets you build, ship, and run containers.
This talk will explain how you can leverage the Moby project to assemble your own specialized container-based system, whether for IoT, cloud or bare metal scenarios.
We will cover Moby itself, the framework, and tooling around the project, as well as many of its components: LinuxKit, InfraKit, containerd, SwarmKit, Notary.
Then we will present a few use cases and demos of how different companies have leveraged Moby and some of the Moby components to create their own container-based systems.
Video at https://www.youtube.com/watch?v=kDp22YkD6WY
Microsoft Techsummit Zurich Docker and Microsoft - Patrick Chanezon
Docker and Microsoft have been collaborating both in open source and through their commercial partnership to bring the benefits of Docker Windows and Linux containers to Azure enterprise customers. Docker's container platform, Docker Enterprise Edition, is used to modernize traditional applications and move them to Azure, as well as to develop new cloud native applications using microservices architectures, bringing agility to developers and control to IT Pros. This talk will cover the latest developments in Docker's container platform, with planned support for Kubernetes in Docker for Windows, Docker Enterprise Edition for Azure, Docker for Azure Stack to enable hybrid cloud deployments, Windows containers, and Linux containers on Windows.
Develop and deploy Kubernetes applications with Docker - IBM Index 2018 - Patrick Chanezon
Docker Desktop and Enterprise Edition now both include Kubernetes as an optional orchestration component. This talk will explain how to use Docker Desktop (Mac or Windows) to develop and debug a cloud native application, then how Docker Enterprise Edition helps you deploy it to Kubernetes in production.
The Docker Way: modernize traditional applications without action (wu-wei) and create new cloud native microservices applications with naturalness (ziran).
This talk also provides a summary of all the DockerCon EU 2017 announcements: Kubernetes now supported in Docker, MTA, IBM partnership.
Building specialized container-based systems with Moby: a few use cases
This talk will explain how you can leverage the Moby project to assemble your own specialized container-based system, whether for IoT, cloud or bare metal scenarios. We will cover Moby itself, the framework, and tooling around the project, as well as many of its components: LinuxKit, InfraKit, containerd, SwarmKit, Notary. Then we will present a few use cases and demos of how different companies have leveraged Moby and some of the Moby components to create their own container-based systems.
Docker Cap Gemini CloudXperience 2017 - the software container revolution - Patrick Chanezon
In case you missed the beginning: Patrick Chanezon, one of the pioneers of the Cloud at Google, VMware, Microsoft and Docker, tells the story of the software container revolution through a few films; how containers accelerate Cloud adoption in the enterprise, with hybrid and multi-cloud architectures and the introduction of agile and DevOps practices to modernize existing applications and reduce infrastructure costs, and how they enable new use cases in the Internet of Things and artificial intelligence.
In short, how do you explain the strategy of Cloud operators with science-fiction films? That is the challenge Patrick Chanezon, evangelist at Docker, takes on.
Docker moves very fast, with an edge channel released every month and a stable release every 3 months. Patrick will talk about how Docker introduced Docker EE and a certification program for containers and plugins with Docker CE and EE 17.03 (from March), the announcements from DockerCon (April), and the many new features planned for Docker CE 17.05 in May.
This talk will be about what's new in Docker and what's next on the roadmap
Oscon 2017: Build your own container-based system with the Moby project - Patrick Chanezon
Build your own container-based system with the Moby project
Docker Community Edition—an open source product that lets you build, ship, and run containers—is an assembly of modular components built from an upstream open source project called Moby. Moby provides a “Lego set” of dozens of components, the framework for assembling them into specialized container-based systems, and a place for all container enthusiasts to experiment and exchange ideas.
Patrick Chanezon and Mindy Preston explain how you can leverage the Moby project to assemble your own specialized container-based system, whether for IoT, cloud, or bare-metal scenarios. Patrick and Mindy explore Moby’s framework, components, and tooling, focusing on two components: LinuxKit, a toolkit to build container-based Linux subsystems that are secure, lean, and portable, and InfraKit, a toolkit for creating and managing declarative, self-healing infrastructure. Along the way, they demo how to use Moby, LinuxKit, InfraKit, and other components to quickly assemble full-blown container-based systems for several use cases and deploy them on various infrastructures.
GraphRAG is All You Need? LLM & Knowledge Graph - Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
JMeter webinar - integration with InfluxDB and Grafana - RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
UiPath Test Automation using UiPath Test Suite series, part 3 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Elevating Tactical DDD Patterns Through Object Calisthenics - Dorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Key Trends Shaping the Future of Infrastructure - Cheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
This keynote covers the key trends across hardware, cloud and open source, exploring how these areas are likely to mature and develop over the short and long term, and considering how organisations can position themselves to adapt and thrive.
Essentials of Automations: Optimizing FME Workflows with Parameters - Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Epistemic Interaction - tuning interfaces to provide information for AI support - Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... - Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... - DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 - Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Generating a custom Ruby SDK for your web service or Rails API using Smithy - g2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
2. P@ in a nutshell
• French, based in San Francisco
• Senior Director, Developer Relations, VMware
• Software Plumber, API guy, mix of Enterprise and Consumer
• 18 years writing software, backend guy with a taste for javascript
• 2 y Accenture (Notes guru), 3 y Netscape/AOL (Servers, Portals), 5 y Sun (ecommerce, blogs, Portals, feeds, open source)
• 6 years at Google, API guy (first hired, helped start the team)
• Adwords, Checkout, Social, HTML5, Cloud
Friday, December 9, 11
4. Spring’s aim: bring simplicity to java development
The Spring framework spans the web tier & RIA, the service tier, data access / NoSQL / Big Data, batch processing, integration & messaging, and mobile.
It runs on the cloud (CloudFoundry, VMForce, Google App Engine, Amazon Web Services), on lightweight containers (tc Server, Tomcat, Jetty) and on traditional servers (WebSphere, JBoss AS, WebLogic), on legacy versions, too!
8. New demands on data access
• ... until we needed inexpensive horizontal scaling for some large web-based applications ...
• ... and we needed to deploy our apps in the cloud ...
* image courtesy of Bitcurrent
9. NoSQL offers several data store categories
• Key-Value: Redis, Riak
• Column: Cassandra, HBase
• Document: MongoDB
• Graph: Neo4J
10. NoSQL offers several data store categories
Of the four categories (Key-Value, Column, Document, Graph), this talk focuses on the Document store MongoDB (who cares about the rest?)
11. Spring Framework
built-in data access support
•Transaction abstractions
•Common data access exception hierarchy
•JDBC - JdbcTemplate
•ORM - Hibernate, JPA support
•OXM - Object to XML mapping
•Serializer/Deserializer strategies (Spring 3.0)
•Cache support (Spring 3.1)
12. http://www.springsource.org/spring-data
•Spring Data Key-value
•Spring Data Document
•Spring Data Graph
•Spring Data Column
•Spring Data Blob
•Spring Data JPA Repository / JDBC Extensions
•Spring Gemfire / Spring Hadoop ...
•Grails iNcOnSeQuentiaL
13. Spring Data Building Blocks
•Low level data access APIs
✓MongoTemplate, RedisTemplate ...
•Object Mapping (Java and GORM)
•Cross Store Persistence Programming model
•Generic Repository support
•Productivity support in Roo and Grails
15. Spring Data Document: Mongo
•MongoTemplate interface for mapping Mongo documents
•MongoConverter
•SimpleMongoConverter for basic POJO mapping support
•Leverage Spring 3.0 TypeConverters and SpEL
•Exception translation
•Advanced Mapping(@Document, @Id, @DbRef)
•Annotation based
•MongoRepository
•Built on Hades support for JPA Repositories
17. Mongo Template
Direct Usage of the Mongo Template:
18. Mongo Template
Direct Usage of the Mongo Template:
Insert into “Person”
Collection
19. Mongo Template
Direct Usage of the Mongo Template:
findOne using query: { "name" : "Joe"}
in db.collection: database.Person
20. Mongo Template
Direct Usage of the Mongo Template:
Dropped collection [database.person]
21. Generic Repository
Interface for generic CRUD operations on a repository for a specific type
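The generic repository idea above can be sketched in plain Java. This is an illustrative in-memory stand-in, not Spring Data's actual `CrudRepository`; all names here are hypothetical:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Optional;

// Illustrative in-memory repository offering generic CRUD operations
// for a specific type T, identified by keys of type ID.
class InMemoryCrudRepository<T, ID> {
    private final Map<ID, T> store = new LinkedHashMap<>();

    T save(ID id, T entity) { store.put(id, entity); return entity; }
    Optional<T> findById(ID id) { return Optional.ofNullable(store.get(id)); }
    long count() { return store.size(); }
    void deleteById(ID id) { store.remove(id); }
}

public class RepositoryDemo {
    public static void main(String[] args) {
        InMemoryCrudRepository<String, Long> people = new InMemoryCrudRepository<>();
        people.save(1L, "Joe");
        people.save(2L, "Jane");
        System.out.println(people.findById(1L).orElse("missing")); // Joe
        people.deleteById(1L);
        System.out.println(people.count()); // 1
    }
}
```

The point of the pattern is that application code depends only on the generic CRUD interface, while the backing store (Mongo, Redis, JPA, or a map as here) is swappable.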
22. Paging and Sorting Repository
Paging and Sorting Repository: Extends “CrudRepository”
23. Paging and Sorting Repository
Paging and Sorting Repository: Extends “CrudRepository”
Usage:
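The paging-and-sorting extension can likewise be sketched in plain Java, as a hypothetical in-memory analogue (not Spring Data's actual `PagingAndSortingRepository`): sort the backing collection, then slice out one page.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Illustrative paging helper: sort the backing list, then return one page slice.
class PagingRepository<T> {
    private final List<T> items = new ArrayList<>();

    void save(T item) { items.add(item); }

    // Returns page number `page` (0-based) of size `size`, sorted by `order`.
    List<T> findAll(int page, int size, Comparator<T> order) {
        List<T> sorted = new ArrayList<>(items);
        sorted.sort(order);
        int from = Math.min(page * size, sorted.size());
        int to = Math.min(from + size, sorted.size());
        return sorted.subList(from, to);
    }
}

public class PagingDemo {
    public static void main(String[] args) {
        PagingRepository<String> repo = new PagingRepository<>();
        for (String name : new String[]{"Joe", "Ann", "Zoe", "Bob"}) repo.save(name);
        // Second page of size 2, alphabetical order: sorted is [Ann, Bob, Joe, Zoe].
        System.out.println(repo.findAll(1, 2, Comparator.naturalOrder())); // [Joe, Zoe]
    }
}
```

A real data store would push the sorting and slicing down into the query (e.g. a sorted, limited database query) rather than sorting in memory; the interface shape is what the slide is describing.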
35. Cloud Foundry: The Open PaaS
• Open Source: Apache 2 Licensed
• multi language/frameworks
• multi services
• multi cloud
[Diagram: Data Services (vFabric Postgres), Msg Services (vFabric RabbitMQ) and Other Services plug in through an Application Service Interface and a Cloud Provider Interface, targeting Private Clouds, Public Cloud and Micro Cloud.]
42. What is a Micro Cloud?
An entire cloud running inside a single VM
43. Micro Cloud Foundry… (BETA)
A pre-built Micro (Single VM) version of Cloud Foundry…
You need a Cloud Foundry.com Account to use Micro Cloud Foundry
Signup @ http://cloudfoundry.com/micro
45. Pre-requisites
Resources: minimum 1 GB RAM, minimum 8 GB disk, Internet connectivity (w/ DHCP is ideal)
Virtualization
Clients: VMC (command line), STS (GUI)
46. What is in Micro Cloud Foundry?
[Diagram: dynamic updating DNS against .COM, app instances and services, the open source Platform as a Service project, 10.04.]
47. Other Cloud Foundry powered PaaS
Private PaaS
Added Python and Perl
Public PaaS
Added PHP
51. Service Creation and Binding
App Instance Redis Service
53. Service Creation and Binding
App Instance Redis Service
MongoDB
Service
54. Development LifeCycle
55. Traditional App Deploy and Request/Response
[Diagram: for each tier (Web, App, DB): request/allocate, build/setup, install/configure, deploy/test; then the recurring questions: Scale? Upgrade? Update?]
56. How Apps are Deployed on Cloud Foundry
[Diagram: Web, App and DB tiers provisioned by the platform.]
Deploy: “vmc push MyApp”
Scale: “vmc instances MyApp 5”
Upgrade: “vmc map MyApp MyApp2”
Update: “vmc update MyApp”
58. How Apps are Accessed on Cloud Foundry
[Diagram: a request hits the load balancing and routing layer (web interface) and is forwarded to an app instance (Web/App) bound to a DB service; the response flows back. The app was deployed with “vmc push MyApp”.]
59. How Apps are Scaled on Cloud Foundry
[Diagram: requests pass through load balancer(s) into the load balancing and routing layer, which spreads them across multiple app instances (Web/App), all sharing the same DB service. Scale with “vmc instances MyApp 3”.]
60. How Apps are Updated on Cloud Foundry
[Diagram: the previous version’s instance is stopped, the code is updated, and a new-version instance starts, both versions bound to the same DB service. Update with “vmc update MyApp”.]
61. At Scale – Multi-Node Distributed App
[Diagram: a system load balancer fronts an elastic pool of front_end instances backed by redis, mysql and rabbitMQ, plus an elastic pool of back_end instances backed by mongodb.]
67. Where to Find More
§ Spring Data Project: http://bit.ly/spring-data
§ CloudFoundry Samples: http://bit.ly/cloudfoundry-samples
§ MicroCloud Foundry for Spring Developers: http://bit.ly/mcf4spring
§ Spring Data Mongo on Cloud Foundry (webinar, 12/01/2011): http://bit.ly/spring-mongo-cloudfoundry