Microservices with Java, Spring Boot and Spring Cloud
by Eberhard Wolff
Spring Boot makes creating small Java applications easy - and also facilitates operations and deployment. But Microservices need more: because Microservices are distributed systems, issues like Service Discovery or Load Balancing must be solved. Spring Cloud adds those capabilities to Spring Boot using, for example, the Netflix stack. This talk covers Spring Boot and Spring Cloud and shows how these technologies can be used to create a complete Microservices environment.
Flink Forward San Francisco 2022.
The Table API is one of the most actively developed components of Flink in recent time. Inspired by databases and SQL, it encapsulates concepts many developers are familiar with. It can be used with both bounded and unbounded streams in a unified way. But from afar it can be difficult to keep track of what this API is capable of and how it relates to Flink's other APIs. In this talk, we will explore the current state of Table API. We will show how it can be used as a batch processor, a changelog processor, or a streaming ETL tool with many built-in functions and operators for deduplicating, joining, and aggregating data. By comparing it to the DataStream API we will highlight differences and elaborate on when to use which API. We will demonstrate hybrid pipelines in which both APIs interact with one another and contribute their unique strengths. Finally, we will take a look at some of the most recent additions as a first step to stateful upgrades.
by
David Andreson
Valeri Karpov will lead a workshop on async/await that includes two exercises. The schedule includes introductions to key concepts like return values, error handling, and loops/conditionals with async functions. Attendees will practice gathering blog post comments and retrying failed requests. Key takeaways are that async functions return promises, return resolves them while throw rejects them, and await pauses execution until a promise settles.
This document discusses secure session management and common session security issues. It explains that capturing a user's session allows an attacker to act as that user. Sessions need to be properly terminated on logout to prevent replay attacks. Weaknesses like cookies set before authentication, non-random session IDs, and failing to remove sessions on logout can enable session hijacking. The document provides guidelines for generating secure random session IDs, setting cookies only after authentication, removing sessions on logout, and using HTTPS to mitigate these risks.
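The guideline on generating secure random session IDs can be sketched in a few lines of Java. This is an illustrative fragment (the class and method names are made up), not a drop-in session implementation:

```java
import java.security.SecureRandom;
import java.util.Base64;

public class SessionIdGenerator {
    // SecureRandom draws from a cryptographically strong source,
    // unlike java.util.Random, whose output is predictable.
    private static final SecureRandom RNG = new SecureRandom();

    // Generates a 128-bit, URL-safe random session identifier.
    public static String newSessionId() {
        byte[] bytes = new byte[16];  // 128 bits of entropy
        RNG.nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    public static void main(String[] args) {
        System.out.println(newSessionId());
    }
}
```

Combined with setting the cookie only after authentication and invalidating the ID on logout, this addresses the non-random-session-ID weakness the document describes.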
This document introduces Jest, a JavaScript testing framework. It discusses why Jest is useful, including that it runs tests in parallel sandboxed environments, provides a fast feedback loop with rich logging and error outputs, and acts as a one-stop shop for testing. The document also covers anatomy of Jest tests, how to use mocking, tips like resetting modules between tests and snapshot testing, and references for additional Jest resources.
Microservice With Spring Boot and Spring Cloud
by Eberhard Wolff
Spring Boot and Spring Cloud are an ideal foundation for creating Microservices based on Java. This presentation explains basic concepts of these libraries.
Spring Data is a high level SpringSource project whose purpose is to unify and ease the access to different kinds of persistence stores, both relational database systems and NoSQL data stores.
The document discusses microservices architecture and how to implement it using Spring Boot and Spring Cloud. It describes how microservices address challenges with monolithic architectures like scalability and innovation. It then covers how to create a microservices-based application using Spring Boot, register services with Eureka, communicate between services using RestTemplate and Feign, and load balance with Ribbon.
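The client-side load balancing mentioned here (Ribbon) boils down to picking one instance per request from a service's instance list. A minimal plain-Java sketch of round-robin selection; the instance addresses are made up, and in Spring Cloud the list would come from Eureka rather than being hard-coded:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// The essence of client-side load balancing: the client holds the
// instance list and picks one instance per request, round-robin.
public class RoundRobinBalancer {
    private final List<String> instances;
    private final AtomicInteger next = new AtomicInteger();

    RoundRobinBalancer(List<String> instances) {
        this.instances = instances;
    }

    String choose() {
        // floorMod keeps the index valid even after integer overflow
        int i = Math.floorMod(next.getAndIncrement(), instances.size());
        return instances.get(i);
    }

    public static void main(String[] args) {
        RoundRobinBalancer lb = new RoundRobinBalancer(
                List.of("host-a:8080", "host-b:8080"));
        System.out.println(lb.choose()); // host-a:8080
        System.out.println(lb.choose()); // host-b:8080
        System.out.println(lb.choose()); // host-a:8080
    }
}
```

A `@LoadBalanced` RestTemplate or a Feign client hides exactly this kind of selection behind a logical service name.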
Near real-time statistical modeling and anomaly detection using Flink!
by Flink Forward
Flink Forward San Francisco 2022.
At ThousandEyes we receive billions of events every day that allow us to monitor the internet; the most important aspect of our platform is to detect outages and anomalies that have the potential to cause serious impact to customer applications and user experience. Automatic detection of such events at the lowest latency and highest accuracy is extremely important for our customers and their business. After launching several resilient and low-latency data pipelines in production using Flink, we decided to take it up a notch: we leveraged Flink to build statistical models in near real-time and apply them to the incoming stream of events to detect anomalies! In this session we will deep-dive into the design and discuss pitfalls and learnings from developing our real-time platform, which leverages Debezium, Kafka, Flink, ElastiCache and DynamoDB to process events at scale!
by
Kunal Umrigar & Balint Kurnasz
Spring Boot is a framework that makes it easy to create stand-alone, production-grade Spring based Applications that can be "just run". It takes an opinionated view of the Spring platform and third-party libraries so that new and existing Spring developers can quickly get started with minimal configuration. Key features include automatic configuration of Spring, embedded HTTP servers, starters for common dependencies, and monitoring endpoints.
1) The document provides guidance on testing APIs for security weaknesses, including enumerating the attack surface, common tools to use, what to test for (e.g. authentication, authorization, injections), and demo apps to practice on.
2) It recommends testing authentication and authorization mechanisms like tokens, injections attacks on state-changing requests, and how data is consumed client-side.
3) The document also discusses testing for denial of service conditions, data smuggling through middleware, API rate limiting, and cross-origin requests.
Understanding MicroSERVICE Architecture with Java & Spring Boot
by Kashif Ali Siddiqui
This is a deep journey into the realm of "microservice architecture", in which I will try to cover every inch of it, with a fixed tech stack of Java with Spring Cloud. By the end, you will get to know each and every aspect of this distributed design and will develop an understanding of every concern regarding distributed system constructs.
TypeScript lets you write JavaScript the way you really want to. TypeScript is a typed superset of JavaScript that compiles to plain JavaScript. TypeScript is purely object-oriented, with classes and interfaces, and statically typed like C# or Java. The popular JavaScript framework Angular 2.0 is written in TypeScript. Mastering TypeScript can help programmers write object-oriented programs and have them compiled to JavaScript, both server side and client side.
DAST in CI/CD pipelines using Selenium & OWASP ZAP
by srini0x00
- The document discusses integrating the OWASP ZAP web application security scanner with Selenium automated tests to improve vulnerability coverage during dynamic application security testing (DAST).
- It proposes proxying Selenium test traffic through ZAP to perform passive scanning, then triggering an active ZAP scan via API during the continuous integration/deployment pipeline.
- Scan reports can be retrieved in various formats and findings imported into a vulnerability management system. A demonstration is provided.
The document discusses message brokers and Apache Kafka. It defines a message broker as middleware that exchanges messages in computer networks. It then discusses how message brokers work using queuing and publish-subscribe models. The document focuses on describing Apache Kafka, a distributed streaming platform. It explains key Kafka concepts like topics, partitions, logs, producers, consumers, and guarantees around ordering and replication. It also discusses how Zookeeper is used to manage and monitor the Kafka cluster.
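The per-key ordering guarantee mentioned here follows from how a producer picks a partition: the same key always hashes to the same partition. A simplified sketch for illustration; Kafka's real default partitioner hashes the serialized key bytes with murmur2, and `String.hashCode` merely stands in here:

```java
public class PartitionSketch {
    // Records with the same key always land in the same partition, which
    // is what gives Kafka its per-key ordering guarantee within a topic.
    static int partitionFor(String key, int numPartitions) {
        // Mask the sign bit so the modulo result is non-negative.
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        int p1 = partitionFor("user-42", 6);
        int p2 = partitionFor("user-42", 6);
        System.out.println(p1 == p2);  // same key -> same partition
    }
}
```

This is also why changing the partition count of a topic breaks the key-to-partition mapping for existing data.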
This document discusses asynchronous programming using async/await in C#. It covers why multithreading is important, how to use async/await to offload work or scale applications, and how to properly structure asynchronous code. The key points are: async/await provides an easier way to write multithreaded code compared to previous approaches; methods should be marked async if they use await; and tasks can be used to start asynchronous work and wait for completion in a non-blocking way.
KafkaConsumer - Decoupling Consumption and Processing for Better Resource Uti...
by confluent
When working with KafkaConsumer, we usually employ a single thread both for reading and processing messages. KafkaConsumer is not thread-safe, so using a single thread fits in well. The downside of this approach is that you are limited to a single thread for processing messages.
By decoupling consumption and processing, we can achieve processing parallelization with a single consumer and get the most out of the multi-core CPU architectures available today. While this can be very useful in certain use-case scenarios, it's not trivial to implement.
How do we use multiple threads with a KafkaConsumer that is not thread-safe? How do we react to consumer group rebalancing? Can we get the desired processing and ordering guarantees? In this talk we'll try to answer these questions and explore the challenges we face on our path.
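The decoupling idea can be sketched with plain JDK concurrency types: one "poll loop" thread hands records to a worker pool instead of processing them itself. Names are illustrative, and the offset tracking and pause/resume handling a real KafkaConsumer integration needs are omitted:

```java
import java.util.Collection;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class DecoupledProcessing {
    // A single loop (standing in for the KafkaConsumer poll loop) submits
    // each record to the worker pool, so processing runs in parallel while
    // consumption stays single-threaded.
    static Queue<String> processAll(Collection<String> records, int workerThreads)
            throws InterruptedException {
        ExecutorService workers = Executors.newFixedThreadPool(workerThreads);
        Queue<String> processed = new ConcurrentLinkedQueue<>();
        for (String r : records) {
            workers.submit(() -> processed.add("processed " + r));
        }
        workers.shutdown();
        workers.awaitTermination(5, TimeUnit.SECONDS);
        return processed;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(processAll(List.of("r1", "r2", "r3"), 4).size()); // 3
    }
}
```

Note that this sketch deliberately sidesteps the hard parts the abstract raises: per-partition ordering requires routing records of one partition to one worker, and offsets may only be committed once processing has actually finished.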
Spring Cloud Config provides a centralized way to manage external configuration for distributed systems. The Config Server stores configuration in Git repositories and makes it available via REST APIs to client applications. Clients can bind to the Config Server to initialize their Spring Environment with remote property sources. The default storage backend uses Git, allowing version control and tooling support. The Config Server serves configuration properties and YAML files from Git or HashiCorp Vault. It maps request paths to files in sources by application, profile, and label. Client applications can encrypt/decrypt values.
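The path-to-file mapping mentioned above can be illustrated with a toy helper. The method name is made up, and a real Config Server additionally resolves labels, search paths, and multiple file formats beyond this sketch:

```java
public class ConfigPathSketch {
    // Illustrates the documented request-path convention
    // /{application}/{profile}[/{label}]: the server resolves it to files
    // such as {application}-{profile}.yml in the configured Git repository.
    static String propertiesFileFor(String application, String profile) {
        return application + "-" + profile + ".yml";
    }

    public static void main(String[] args) {
        System.out.println(propertiesFileFor("orders", "production")); // orders-production.yml
    }
}
```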
Ben McCormick gave a presentation on how to save time by testing with Jest. He began with an introduction and explained that Jest is a JavaScript testing framework developed by Facebook that aims to solve common testing problems. He then demonstrated how Jest saves time through fast setup, writing tests quickly using familiar syntax and APIs, running tests in parallel and with a smart watch mode, and providing clear errors to fix tests fast. He concluded with a demo of Jest's features and took questions.
The document discusses recommendations for preventing brute force attacks. It defines a brute force attack as using an automatic process to determine a password or username through all possible combinations. It recommends using CAPTCHAs rather than delaying login attempts, as delays can overload servers with sleep processes and hackers can bypass delays using multiple virtual IPs. CAPTCHAs are a better technique for distinguishing humans from computers to avoid overwhelming servers with attack traffic.
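The search space a brute-force attack must cover is simple arithmetic: charset size raised to the password length. A small sketch of that calculation:

```java
import java.math.BigInteger;

public class BruteForceSpace {
    // Number of candidate passwords an exhaustive search must cover:
    // charsetSize ^ length. BigInteger avoids overflow for long passwords.
    static BigInteger combinations(int charsetSize, int length) {
        return BigInteger.valueOf(charsetSize).pow(length);
    }

    public static void main(String[] args) {
        // 8-character password over lowercase letters and digits (36 symbols).
        System.out.println(combinations(36, 8)); // 2821109907456
    }
}
```

The exponential growth is why rate limiting or CAPTCHAs matter: they cap the attacker's attempt rate, making exhaustive search impractical even for moderate lengths.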
The document discusses security misconfiguration as the sixth most dangerous web application vulnerability according to the OWASP Top 10. It defines security misconfiguration as improper configuration settings that can enable attacks. The document outlines how attackers exploit default passwords and privileges, and provides examples of misconfigured systems. It recommends ways to prevent misconfiguration like changing defaults, deleting unnecessary accounts, and keeping systems updated. The document demonstrates how to detect hidden URLs and directory listings using Burp Suite and concludes that misconfiguration poses a high risk if not properly safeguarded against.
Using the New Apache Flink Kubernetes Operator in a Production Deployment
by Flink Forward
Flink Forward San Francisco 2022.
Running natively on Kubernetes, using the new Apache Flink Kubernetes Operator is a great way to deploy and manage Flink application and session deployments. In this presentation, we provide:
- A brief overview of Kubernetes operators and their benefits
- An introduction to the five levels of the operator maturity model
- An introduction to the newly released Apache Flink Kubernetes Operator and FlinkDeployment CRs
- Dockerfile modifications you can make to swap out UBI images and Java of the underlying Flink Operator container
- Enhancements we're making in versioning/upgradeability/stability and security
- A demo of the Apache Flink Operator in action, with a technical preview of an upcoming product using the Flink Kubernetes Operator
- Lessons learned
- Q&A
by
James Busche & Ted Chang
RESTful API Testing using Postman, Newman, and Jenkins
by QASymphony
If you’re going to automate one kind of test at your company, API testing is the perfect place to start! It’s fast and simple to write as well as fast to execute. If your company writes an API for its software, then you understand the need and importance of testing it. In this webinar, we’ll do a live demonstration of how you can use free tools, such as Postman, Newman, and Jenkins, to enhance your software quality and security.
Elise Carmichael will cover:
Why your API tests should be included with your CI
Real examples using Postman, Newman and Jenkins + Newman
An active Q&A where you can get your automated testing questions answered, live!
To get the most out of this session:
Download these free tools prior to the webinar: Postman, Newman (along with node and npm) and Jenkins
Read up on how to parse JSON objects using JavaScript
*Can’t attend the webinar live? Register and we will send the recording after the webinar is over.
Flink Forward San Francisco 2022.
This talk will take you on the long journey of Apache Flink into the cloud-native era. It started all the way from where Hadoop and YARN were the standard way of deploying and operating data applications.
We're going to deep dive into the cloud-native set of principles and how they map to the Apache Flink internals and recent improvements. We'll cover fast checkpointing, fault tolerance, resource elasticity, minimal infrastructure dependencies, industry-standard tooling, ease of deployment and declarative APIs.
After this talk you'll get a broader understanding of the operational requirements for a modern streaming application and where the current limits are.
by
David Moravek
Spring and Pivotal Application Service - SpringOne Tour - Boston
by VMware Tanzu
This document discusses Spring and Pivotal Application Service (PAS). It notes that PAS provides market-leading support for Spring technologies and an ecosystem of services for Spring applications. It covers why developers use Spring and PAS, how PAS supports Spring features like Boot, Security, and Cloud, and the services available on PAS like MySQL, RabbitMQ, and Redis. It concludes with next steps around contacting an account team, trying hosted PAS software, and signing up for roadmap calls.
Spring and Pivotal Application Service - SpringOne Tour Dallas
by VMware Tanzu
Spring and Pivotal Application Service (PAS) provide a market-leading platform for developing and deploying Spring applications on cloud-native technologies. PAS offers robust support for Spring technologies, a growing ecosystem of services for Spring apps, and tools to improve development productivity and application observability. Next steps include contacting an account team, trying hosted PAS, or signing up for the next product roadmap call.
This document summarizes a presentation about Spring and Pivotal Application Service (PAS). It discusses why developers use Spring and PAS, the market-leading Spring support in PAS, and the ecosystem of services available for Spring applications on PAS. It also provides an agenda that covers these topics and next steps.
Spring Boot & Spring Cloud on Pivotal Application Service - Alexandre Roman
by VMware Tanzu
- The document discusses how Pivotal Cloud Foundry (PCF) helps developers run Spring applications at scale through features like the Java Buildpack, Spring deployment profiles, Spring Cloud Connector, and Spring Cloud Services for service discovery, configuration, and circuit breaking.
- It also outlines the ecosystem of services on PCF for Spring apps, including Pivotal Cloud Cache, MySQL for PCF, RabbitMQ for PCF, and Redis for PCF.
- The presentation concludes with a demo of pushing a Spring Boot app to PCF, observing logs, binding services, and using Spring Cloud features.
Spring Cloud Services with Pivotal Cloud Foundry - Gokhan Goksu
by VMware Tanzu
- Pivotal Cloud Foundry (PCF) is a cloud application platform that supports Spring applications. It provides automated deployment of Spring and Spring Boot apps along with a services ecosystem.
- Spring Cloud Services (SCS) provides services for PCF like service registry, configuration management, and circuit breakers that integrate with Spring apps. It includes tools to manage credentials and integrate apps with services.
- The document discusses how PCF supports developers through services, buildpacks, and automation to deploy Spring apps and discusses integrating apps with services through SCS. It also provides an agenda for a demo of deploying Spring apps on PCF.
Eseguire Applicazioni Cloud-Native con Pivotal Cloud Foundry su Google Cloud ...VMware Tanzu
Eseguire Applicazioni Cloud-Native con Pivotal Cloud Foundry su Google Cloud Platform (Pivotal Cloud-Native Workshop: Milan)
Fabio Marinelli
7 February 2018
Pivoting Spring XD to Spring Cloud Data Flow with Sabby AnandanPivotalOpenSourceHub
Pivoting Spring XD to Spring Cloud Data Flow: A microservice based architecture for stream processing
Microservice based architectures are not just for distributed web applications! They are also a powerful approach for creating distributed stream processing applications. Spring Cloud Data Flow enables you to create and orchestrate standalone executable applications that communicate over messaging middleware such as Kafka and RabbitMQ that when run together, form a distributed stream processing application. This allows you to scale, version and operationalize stream processing applications following microservice based patterns and practices on a variety of runtime platforms such as Cloud Foundry, Apache YARN and others.
About Sabby Anandan
Sabby Anandan is a Product Manager at Pivotal. Sabby is focused on building products that eliminate the barriers between application development, cloud, and big data.
Moderne Serverless-Computing-Plattformen sind in aller Munde und stellen ein Programmiermodell zur Verfügung, wo sich der Nutzer keine Gedanken mehr über die Administration der Server, Storage, Netzwerk, virtuelle Maschinen, Hochverfügbarkeit und Skalierbarkeit machen brauch, sondern sich auf das Schreiben von eigenen Code konzentriert. Der Code bildet die Geschäftsanforderungen modular in Form von kleinen Funktionspaketen (Functions) ab. Functions sind das Herzstück der Serverless-Computing-Plattform. Sie lesen von der (oft Standard-)Eingabe, tätigen ihre Berechnungen und erzeugen eine Ausgabe. Die zu speichernden Ergebnisse von Funktionen werden in einem permanenten Datastore abgelegt, wie z.B. der Autonomous Database gespeichert. Die Autonomous Database besitzt folgende drei Eigenschaften self-driving, self-repairing und self-securing, die für einen modernen Anwendungsentwicklungsansatz benötigt werden.
This document discusses microservices architecture using Spring Cloud and related technologies. It provides an overview of microservices and cloud native applications. It then covers Spring Boot, Spring Cloud, and Netflix OSS projects that can be used to build microservices. Specific Spring Cloud features like service registration, circuit breakers, and API gateways are demonstrated. The role of Pivotal in contributing to open source projects and providing Spring Cloud services is also mentioned.
An overview of how electronic signature objects are generated and used within PDF documents including the overview of Aodbe LiveCycle ES's ability to programmatically work with them server side.
SpringBoot and Spring Cloud Service for MSAOracle Korea
Cloud 환경에서 MSA를 하기 위해서 Service Discovery, Circuit Breaker 등을 사용하여 Application을 개발하는 방법과 SpringBoot 와 Spring Cloud Service 를 사용하는데, Cloud에서 Kubernetes를 위시한 Container 생태계가 어떻게 MSA에 영향을 미치는지 알아봅니다.
OpenSource API Server based on Node.js API framework built on supported Node.js platform with Tooling and DevOps. Use cases are Omni-channel API Server, Mobile Backend as a Service (mBaaS) or Next Generation Enterprise Service Bus. Key functionality include built in enterprise connectors, ORM, Offline Sync, Mobile and JS SDKs, Isomorphic JavaScript and Graphical API creation tool.
Pivotal Cloud Foundry 2.0 is a presentation about new features in Pivotal's platform as a service (PaaS) offering. Key updates include deeper integration with VMware NSX for networking and security, a new monitoring dashboard called PCF Healthwatch, support for Windows containers and .NET applications, and new services like Pivotal Container Service (PKS) for Kubernetes and Pivotal Function Service (PFS) for serverless functions. The presentation discusses how these updates help with developer productivity, operational efficiency, security, and running applications on any infrastructure as a service (IaaS).
Pivotal CloudFoundry on Google cloud platformRonak Banka
This document is a slide presentation by Ronak Banka on using Pivotal Cloud Foundry (PCF) and Google Cloud Platform (GCP) together. It discusses how PCF provides a platform for deploying applications on GCP that enables both developer and operator productivity through features like automated deployments, service integration, and operations. It also highlights benefits of using PCF on GCP like performance, scale, cost savings, and access to differentiated GCP services.
A presentation on the Netflix Cloud Architecture and NetflixOSS open source. For the All Things Open 2015 conference in Raleigh 2015/10/19. #ATO2015 #NetflixOSS
This document discusses continuous delivery using Spinnaker, an open source continuous delivery platform. It provides an overview of Spinnaker, including how it supports continuous integration and delivery goals like shipping faster and reducing risk. Spinnaker allows automated deployment pipelines across multiple cloud providers and supports features like zero-downtime deployments, rollbacks, and automated canary analysis. The document also describes how Spinnaker integrates with platforms like Cloud Foundry and CI systems like Concourse.
OSMC 2022 | Current State of icinga by Bernd ErkNETWAYS
This document provides an overview and update on the current state of Icinga, an open source monitoring solution. It discusses Icinga's goal of continuously improving its unified open source and enterprise monitoring capabilities. Key points include that Icinga is made for enterprises and offers features like scalability, high availability, and enterprise-grade support. The document highlights recent Icinga releases and upcoming work, community contributions, and how Icinga can be used to monitor infrastructure, offer automation, support cloud monitoring, and provide metrics, logs, and notifications.
2. Agenda
■ Why Spring and PAS?
■ Market-Leading Spring Support
■ Services Ecosystem for Spring Apps
■ Next Steps
3. How much time do developers spend developing?
[Bar chart: share of the workday spent on writing new / changing existing code, email, miscellaneous tasks, deploying code, and configuring infrastructure, bucketed from "None" to "4+ hours".]
Source: Forrester Business Technographics Global Developer Survey, 2016. Base: 719 developers who work for a software company, as a game developer, for internal IT, or in technology services.
4. How much time do developers spend operating?
[Bar chart: share of the workday spent on writing new / changing existing code, building or integrating code, debugging / production support, designing new functionality, and unit testing, bucketed from "None" to "4+ hours".]
Source: Forrester Business Technographics Global Developer Survey, 2016.
5. Hardware → IaaS → Container Orchestrator → Application Platform → Function Platform
Landing your workload on the right target is key to balancing automation against the flexibility you require: the lower layers give higher flexibility and less enforcement of standards, while the higher layers give lower development complexity and higher operational efficiency.
6. One Platform — Any App, Every Cloud
PCF 2.3 — for everything that matters
Runs on: vSphere, OpenStack, AWS, Google Cloud, Azure & Azure Stack
Shared services, shared security, shared networking: logging & metrics / service brokers / API management; CredHub / UAA / Single Sign-On; VMware NSX
Embedded operating system (Windows / Linux)
Application code & frameworks: buildpacks / Spring Boot / Spring Cloud / Steeltoe
Runtimes: PAS (Pivotal Application Service), PKS (Pivotal Container Service), PFS (Pivotal Function Service)
Pivotal Services Marketplace: Pivotal and partner products
Concourse
7. Pivotal Application Service (PAS) App Runtime
Dynamic route services / API management in front of your applications
App microservices technology: Spring Boot, Steeltoe, Spring Cloud Services
Data microservices technology: Spring Cloud Data Flow, Cloud Cache, RabbitMQ, MySQL
Platform: Elastic Runtime, Concourse, App Autoscaler, PCF Metrics, CredHub; orgs, spaces, roles and permissions; service broker API
Embedded OS: Windows, Linux
Container orchestration and cloud orchestration (BOSH) across Amazon Web Services, Microsoft Azure, Google Cloud Platform, OpenStack, and VMware
Pivotal Application Service on Pivotal Cloud Foundry: a modern, cloud-native, multi-cloud platform
8. Eliminate Boilerplate Code, Focus on Business Logic
Spring Framework, Spring Security, Spring Data, Reactor, Spring Batch, Spring Integration
Spring Boot
Spring Cloud
Spring Cloud Pipelines
10. Cloud Foundry UAA
OAuth 2 server for centralized ID management
Implemented as a standard Spring MVC webapp
Deploy to local Tomcat for testing, Cloud Foundry for production
Support for open AuthN / AuthZ standards:
● OAuth
● OpenID Connect
● SAML
● LDAP
● SCIM
11. Spring Security and CF SSO
Supported identity providers: Cloud Foundry UAA (built-in), Active Directory FS, Azure Active Directory (SAML/OIDC), CA SSO, GCP OpenID Connect, Okta, PingFederate, PingOne Cloud
Integrates with any ID federation via SAML/OpenID
IDMs are self-service for DevOps via a marketplace
Converts complex SAML interactions into basic OAuth tokens
Works great with Spring Security (Java) and Steeltoe.io (.NET)
12. CredHub
Secure credential management
Implemented as a Spring Boot app
Provides an API for storing, generating, and retrieving credentials
Supports credentials of different types: simple strings, passwords, certificates, keypairs, JSON objects
Supports pluggable Hardware Security Modules (HSMs)
13. Implementing monolith or microservice patterns on the cloud with Spring Boot
12-Factor Apps:
I. One Codebase, One App
II. Dependency Management
III. Configuration
IV. Backing Services
V. Build, Release, Run
VI. Processes
VII. Port Binding
VIII. Concurrency
IX. Disposability
X. Environmental Parity
XI. Logs
XII. Administrative Processes
Spring Boot makes 12+ factor style apps easy. Without it, microservices require a lot of repetitive work: property configuration, port binding, connecting to backing services, logging, and deployment / redeployment.
14. Spring Deployment Profiles
Transition between environments without recompiling or rewriting
Automatic enablement of the "cloud" @Profile on deploy
Any @Configuration class in this profile is applied automatically
No recompile required to adapt to deployment environments
https://spring.io/blog/2015/01/13/configuring-it-all-out-or-12-factor-app-style-configuration-with-spring
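The "cloud" profile idea can be sketched as a profile-specific configuration file: a multi-document application.yml where the active profile selects the values. The `greeting` key and its values are illustrative, not from the deck.

```yaml
# application.yml — multi-document file; the active profile picks the values
# ("greeting" and its values are hypothetical examples)
greeting: hello from my laptop    # default, used during local development
---
spring:
  profiles: cloud                 # activated automatically on Cloud Foundry
greeting: hello from PAS
```

A @Configuration class annotated with @Profile("cloud") is picked up the same way: nothing is recompiled, only the active profile changes between environments.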
15. Spring Cloud Connector for Cloud Foundry
Brings Cloud Foundry service connection data directly into your Spring beans
Auto-enabled if VCAP_APPLICATION is detected
Checks for VCAP_SERVICES and parses the common connection data for supported services
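What the connector automates can be illustrated with a deliberately simplified sketch: pulling a service URI out of a sample VCAP_SERVICES payload with a regex. The payload below is invented; Spring Cloud Connectors does real JSON parsing and exposes the result as Spring beans rather than raw strings.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of what Spring Cloud Connectors automates: extracting a service
// URI from the VCAP_SERVICES JSON that Cloud Foundry injects into the app
// environment. Illustrative only — the real connector parses full JSON.
public class VcapSketch {

    // Finds the first "uri" value in the (sample) VCAP_SERVICES document.
    static String extractUri(String vcapServices) {
        Matcher m = Pattern.compile("\"uri\"\\s*:\\s*\"([^\"]+)\"")
                           .matcher(vcapServices);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        // Hypothetical payload shaped like a bound MySQL service binding.
        String sample = "{\"p-mysql\":[{\"credentials\":{"
                + "\"uri\":\"mysql://user:pw@10.0.0.5:3306/db\"}}]}";
        System.out.println(extractUri(sample)); // mysql://user:pw@10.0.0.5:3306/db
    }
}
```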
16. Java Buildpack
Immutable infrastructure for JVM frameworks
Builds containers from a single control point
Robust JRE / JVM framework options
Self-executable JAR / Java main()
Advanced JVM memory calculator
JVM heap dump histograms
Spring Boot CLI apps
Robust 3rd-party framework & product support
17. Spring Cloud & Spring Cloud Services (SCS)
Developing on the desktop (DEV) vs. deploying in production (PROD)
Security: OAuth2, TLS, PAS UAA integration, RBAC
Ops: BOSH release for Config Server, Service Registry, Circuit Breaker
18. SCS: Config Server
Zero-downtime app updates: dynamically update application configuration
[Diagram: from a dev desktop, configuration (e.g. greeting: hi) is pushed to Git source repos, and API keys and secrets go to HashiCorp Vault; the Config Server sources config from both, and apps A, B, and C pull their configuration from the Config Server.]
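On PAS, SCS supplies the client settings through the service binding, but the underlying Spring Cloud Config client configuration looks like this minimal sketch. The application name and URL are hypothetical.

```yaml
# bootstrap.yml on the client app (names and URL are illustrative)
spring:
  application:
    name: greeting-app                  # config is looked up under this name
  cloud:
    config:
      uri: https://config-server.example.com
```

The Config Server itself is pointed at a Git repository (or Vault) as its source; changing a value in Git and refreshing the app updates configuration without a redeploy.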
19. SCS: Service Registry
NetflixOSS Eureka intelligent routing foundation
[Diagram: a producer (1) registers with the Service Registry; a consumer (2) discovers the producer through the registry and (3) connects to it directly.]
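The register / discover / connect flow can be sketched as a tiny in-memory registry. This is illustrative only, and not the Eureka API: Eureka adds heartbeats, leases, and peer replication on top of the same basic idea.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.NoSuchElementException;
import java.util.concurrent.ConcurrentHashMap;

// Minimal in-memory sketch of the register/discover/connect pattern.
public class ServiceRegistry {
    private final Map<String, List<String>> instances = new ConcurrentHashMap<>();
    private int next = 0;

    // 1. register: a producer announces its address under a logical name.
    public void register(String serviceName, String address) {
        instances.computeIfAbsent(serviceName, k -> new ArrayList<>()).add(address);
    }

    // 2. discover: a consumer looks up all registered instances by name.
    public List<String> discover(String serviceName) {
        return instances.getOrDefault(serviceName, List.of());
    }

    // 3. connect: naive round-robin choice among the discovered instances.
    public String choose(String serviceName) {
        List<String> found = discover(serviceName);
        if (found.isEmpty()) throw new NoSuchElementException(serviceName);
        return found.get(next++ % found.size());
    }

    public static void main(String[] args) {
        ServiceRegistry registry = new ServiceRegistry();
        registry.register("greeting-service", "10.0.0.1:8080");
        registry.register("greeting-service", "10.0.0.2:8080");
        System.out.println(registry.discover("greeting-service").size()); // 2
        System.out.println(registry.choose("greeting-service")); // 10.0.0.1:8080
        System.out.println(registry.choose("greeting-service")); // 10.0.0.2:8080
    }
}
```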
20. SCS: Circuit Breaker
Fault-tolerance library for distributed systems
State machine:
Closed — on call: pass through; call succeeds: reset count; call fails: count failure; threshold reached: trip breaker (→ Open)
Open — on call: fail; on timeout: attempt reset (→ Half-Open)
Half-Open — on call: pass through; call succeeds: reset (→ Closed); call fails: trip breaker (→ Open)
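The state machine above can be sketched in a few lines of plain Java. This is a minimal illustration of the pattern, not the Netflix Hystrix implementation that SCS actually ships, which adds thread isolation, metrics, and dashboards.

```java
import java.util.function.Supplier;

// Minimal circuit breaker: Closed -> Open after `threshold` failures,
// Open -> Half-Open after a timeout, Half-Open -> Closed on success
// or back to Open on failure. Illustrative only.
public class CircuitBreaker {
    enum State { CLOSED, OPEN, HALF_OPEN }

    private State state = State.CLOSED;
    private int failures = 0;
    private long openedAt = 0;
    private final int threshold;
    private final long resetTimeoutMillis;

    public CircuitBreaker(int threshold, long resetTimeoutMillis) {
        this.threshold = threshold;
        this.resetTimeoutMillis = resetTimeoutMillis;
    }

    public State state() {
        // on timeout, attempt reset: Open -> Half-Open
        if (state == State.OPEN
                && System.currentTimeMillis() - openedAt >= resetTimeoutMillis) {
            state = State.HALF_OPEN;
        }
        return state;
    }

    public <T> T call(Supplier<T> action, T fallback) {
        if (state() == State.OPEN) {
            return fallback;              // Open: fail fast
        }
        try {
            T result = action.get();      // Closed / Half-Open: pass through
            failures = 0;
            state = State.CLOSED;         // success resets the breaker
            return result;
        } catch (RuntimeException e) {
            failures++;
            if (state == State.HALF_OPEN || failures >= threshold) {
                state = State.OPEN;       // trip the breaker
                openedAt = System.currentTimeMillis();
            }
            return fallback;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        CircuitBreaker cb = new CircuitBreaker(2, 50);
        Supplier<String> failing = () -> { throw new RuntimeException("boom"); };
        cb.call(failing, "fallback");
        cb.call(failing, "fallback");            // second failure trips it
        System.out.println(cb.state());          // OPEN
        Thread.sleep(60);
        System.out.println(cb.state());          // HALF_OPEN after the timeout
        System.out.println(cb.call(() -> "ok", "fallback")); // ok
        System.out.println(cb.state());          // CLOSED again
    }
}
```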
21. SCS: CF CLI Plugin
Spring Cloud Services integration for the CF command line interface
Provides SCS dev tools directly from the CF CLI:
- List apps in a Eureka instance
- Enable/disable Eureka registration
- Deregister a service in Eureka
- Encrypt Config Server values
22. Spring Cloud Pipelines
Opinionated template of a deployment pipeline
Jumpstart your CI/CD pipeline setup!
Packages up best practices from Pivotal
Each pipeline step is an (editable) bash script
Supports Jenkins, Concourse, Maven, Gradle
Targets PAS or PKS
23. Container to
Container
Networking
Enabling direct microservice to
microservice communication
Improve on legacy CF ASG experience:
Order of magnitude latency reduction
No expensive “hairpin” trip through LB/FW
Support for multiple TCP/UDP ports
Allow SDN traffic like VMware NSX
Support for “Zero Trust” security posture
24. Apps Manager
Rich management and
observability of Spring Boot
applications
Transparent security integration with Pivotal Cloud
Foundry UAA, icon recognition for boot apps
/loggers to list or modify log levels at runtime
/mappings for all @RequestMapping paths
/info for env, build & Git info
/health information
/dump and /heapdump
/trace for recent HTTP requests
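As a rough illustration of what the /health endpoint reports, here is a plain-Java sketch that rolls up named checks into a single status; the indicator names below are made up, while Spring Boot Actuator supplies real ones (db, diskSpace, and so on) automatically:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.BooleanSupplier;

// Aggregate named health checks into one status, the way an
// actuator-style /health endpoint rolls up its indicators.
class HealthEndpoint {
    private final Map<String, BooleanSupplier> checks = new LinkedHashMap<>();

    void register(String name, BooleanSupplier check) {
        checks.put(name, check);
    }

    // Overall status is UP only if every registered check passes.
    String status() {
        boolean up = checks.values().stream().allMatch(BooleanSupplier::getAsBoolean);
        return up ? "UP" : "DOWN";
    }
}
```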
25. PCF Metrics
Trace Explorer:
Distributed trace call graph &
visually correlated logs
Understand failures and latency in microservice
architecture, no manual zipkin management
Your custom Spring Boot /metrics automatically
display as graphs
Interactive, graphical displays of request traffic
through an app
View correlated logs to time window
Visualize and filter metrics by app instance (AI)
Integrated with PCF UAA Security
26. Container Health
& Performance
1st responder troubleshooting
tools for DevOps
Shows app developers a real-time view of data
Network metrics: HTTP req/err, and avg latency
(every second)
Container metrics: CPU, disk, and memory (every 30
seconds)
App events: create, update, start, stop, crash (on
occurrence)
27. Spring Cloud
Data Flow for
PCF
Streaming & Batch orchestration
via Cloud Native Data Pipelines
PAS & UAA Security
Diagram: 1. Ops provision the SCDF for PCF tile through the BOSH Director; 2. devs create instances; 3. write apps! Backing components: MySQL, RabbitMQ, Redis, the Metrics Collector, Spring Cloud Skipper, and CUPS for off-platform services (e.g. Kafka).
29. Enterprise Ready Services
BOSH Managed | On-Demand Provisioning | Dedicated Instances | Custom Service Plans
Pivotal Cloud Cache
● High performance, in-memory data at scale for microservices
● Look-aside caches & HTTP session state caching
● NEW: WAN replication
MySQL for PCF
● Enterprise-ready MySQL for your developers
● Automate database operations in developer workflows
● NEW: Leader-follower for multi-site HA
RabbitMQ for PCF
● Easily connect distributed applications with the most widely deployed open source message broker
● Enable connected, scalable, distributed applications
● NEW: On-demand clusters
Redis for PCF
● In-memory cache and datastore, configured for the enterprise
● Efficient provisioning matched to use cases
30. The Growing PCF Ecosystem
Mobile Networking
Storage
BPM
App Integration
DevOps Tooling
Data
Management
Microservices
Management
CRM
Commerce
IAM
IDE/Code
Other
APM/Monitoring
Search
Security
SIEM/Log/Audit
API Gateways
Messaging
IaaS
58% of developers spend an hour or more a day writing code, but only 28% (a bit more than 1 in 4) spend 3 or more hours a day writing code.
38% spend at least an hour a day on e-mail.
30% spend an hour a day deploying code! That seems like too much to me!
29% spend an hour a day configuring infrastructure!
46% spend at least an hour debugging or dealing with production support issues… essentially dealing with technical debt.
The more you can let platforms take over responsibility for running your applications, the more free time you’ll have to add new functionality to those applications.
Whenever you build new application functionality, you should ask yourself “what do you want to manage going forward?” Do you want to have to manage a VM, a container, or just deploy a function?
The further up the stack you can keep your applications, the less overall plumbing you need to worry about maintaining in the future.
Each tool has its own purpose. We should be careful to articulate the value each tool brings to the table and what its strengths and weaknesses are. We need to make sure to point out that as you move “up the stack” from Containers to Serverless, you have less control, but also less that you are responsible for. Users need to make sure they’re picking the right tool for the job they need to accomplish.
Examples:
If you need to be able to run your application on a specific port, or need to co-mingle a couple applications right next to each other, then a Container Orchestrator (PKS) is a great solution.
If you have a webapp that runs without any heroic efforts to change from running on your laptop to production, then an Application Platform (PAS) is a great solution.
If you have a piece of code that needs to run when some event happens, instead of deploying an application with a very limited scope of work, consider writing that functionality as a Function as a Service and deploying it to a Serverless Infrastructure (PFS).
PCF now includes many abstractions with shared promises striped across each runtime.
-Any app, every cloud, one platform. We offer you the right tool for the job, namely:
-PAS, a runtime for apps. This delivers the best experience for your Java, .NET, and Node.js apps.
-PKS, a runtime for containers. PKS, based on Kubernetes, is now available to select customers. Use it to run developer-built containers, and workloads like Elasticsearch and Apache Spark. Talk to your account team for access!
-PFS, a runtime for functions. This is coming next year (contact us for early access). In the meantime, check out project riff on Github; this is the open source foundation for PFS.
-Services Marketplace. Your software doesn’t live alone. You need to extend it, secure it, observe it. And you want to use the biggest names in tech to do all this. The Services Marketplace has you covered!
Best runtime for Spring and Spring Boot — Spring’s microservice patterns—and Spring Boot’s executable jars—are ready-made for PAS.
Turnkey microservices foundation — Spring Cloud Services provides a built-in Config Server, Service Registry, and Circuit Breaker Dashboard and Cloud Foundry UAA integration.
Advanced Networking — Automated DNS, HTTP/TCP routing, C2C networking, SDDN integration, Load Balancing
CI/CD Pipeline Ready — Fully integrated with continuous integration/continuous delivery (CI/CD) tools like Concourse.
Storage-ready — Run stateful apps in a modern application runtime using the Volume Services NFS v3 service broker.
Container-ready — PAS supports the OCI format for Docker images. Run platform-built and developer-built containers.
Monitoring, Metrics, Logging, Dev Tools — PCF Metrics reimagines monitoring for microservices, while loggregator and advanced CLI / GUI dev tools make apps easy.
PAS is at its best when used for stateless workloads, like web apps, REST APIs, etc. While it can handle NFS v3 mounts, PKS is a better choice for databases, search engines, etc.
Spring excels at these stateless workloads, making it an ideal fit.
When making everything from 12 factor to fully Cloud Native apps, Spring & PAS both work overtime to eliminate repetitive, boilerplate code so you can focus on what matters.
By contrast, Java EE servers were designed to be stateful, mutable, things that had to be kept alive due to instance – specific data.
If a Java application server process now only starts a statically known set of Java code, the very idea of an application server changes drastically: it becomes more about a way of performing dependency injection and including the modular services you need. That sounds more like a framework than what we’ve come to think of as a Java application server.
Learn more here from Redhat:
https://blog.fabric8.io/the-decline-of-java-application-servers-when-using-docker-containers-edbe032e1f30#.osullguxl
Running Spring requires only a JVM, but Spring is also designed to automate working with lightweight Java web servers like Apache Tomcat, Netty, and Undertow.
UAA provides identity based security for applications and APIs. It supports open standards for authentication and authorization, including the following:
OAuth, OpenID Connect, SAML, LDAP, SCIM
The major features of UAA include the following:
User Single Sign-On (SSO) using federated identity protocols
API security with OAuth
User and group management
Multi-tenancy support
Support for JWT and opaque as a token format
Token revocation
Operational flexibility (BOSH release for multicloud, or push as web app)
Database flexibility, including support for MySQL, Postgres, and SQL Server
Auditing, logging, and monitoring
Token exchange for SAML and JWT bearers
REST APIs for authentication, authorization, and configuration management
The Single Sign-On service is an all-in-one solution for securing access to applications and APIs on PCF. The Single Sign-On service provides support for native authentication, federated single sign-on, and authorization. Operators can configure native authentication and federated single sign-on, for example SAML, to verify the identities of application users. After authentication, the Single Sign-On service uses OAuth 2.0 to secure resources or APIs.
Single Sign-On
The Single Sign-On service allows users to log in through a single sign-on service and access other applications that are hosted or protected by the service. This improves security and productivity since users do not have to log in to individual applications.
Developers are responsible for selecting the authentication method for application users. They can select native authentication provided by the User Account and Authentication (UAA) or external identity providers. UAA is an open source identity server project under the Cloud Foundry (CF) foundation that provides identity based security for applications and APIs.
SSO supports service provider-initiated authentication flow and single logout. It does not support identity provider-initiated authentication flow. All SSO communication takes place over SSL.
OAuth 2.0 Authorization
After authentication, the Single Sign-On service uses OAuth 2.0 for authorization. OAuth 2.0 is an authorization framework that delegates access to applications to access resources on behalf of a resource owner.
Developers define resources required by an application bound to a Single Sign-On (SSO) service instance and administrators grant resource permissions.
Boot and Cloud Foundry together make it easy to develop 12 Factor applications.
Boot handles II, III and plays a supporting or enabling role in IV, V, VII (port binding)
Declare dependencies == Boot offers first-class Maven / Gradle support with Boot Starters (automatic project dependency management)
Autoconfiguration == Spring Boot removes boilerplate application configuration work
Environment and @Profile = makes it simple to adapt from dev, stage to prod without recompile, automatically
Actuators = built in metrics / monitoring, integrated into PCF Apps Man console
@profile -- Segregate parts of the application configuration and make it available in certain environments via
-Annotations
-Properties file
Spring Cloud Connectors are a Spring library that exposes application information, cloud information, and discovered services as Spring beans of the appropriate type (for example, an SQL service will be exposed as a javax.sql.DataSource with optional connection pooling)
12 Factors and PCF:
One Codebase: Single code base managed in SCC; or set of repositories from a common root. Look for “the seams” in the “app” and try and break things up a bit if possible. Getting to a single codebase makes it cleaner to build and push any number of immutable releases across various environments. The best example of violating this is when your app is composed of a dozen or more code repos. Or when one code repo is used to produce a bunch of applications.
Dependency Management: The classic enterprise might rely on either “bootstrapping” (bundling all dependencies with the app binary) or use of a “mommy server” (providing everything the app needs – a server to host it and all its dependencies). Most contemporary languages take advantage of facilities like Maven and Gradle, or NuGet for .NET. Regardless of tool the idea is … allow developers to declare dependencies and let the tool ensure they’re satisfied. Just need to ensure the tool being used doesn’t package dependencies in a folder structure under the app itself.
Build, Release, Run: a single codebase taken through a build process to produce a single artifact; then merged with configuration information external to the app. This is then delivered to cloud environments and run.
Configuration: externalizing your config is easy to say but can be challenging depending on how the app was built. There are various solutions – refactor your code to look for environment variables, or use Spring Cloud Config Server or other products.
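A minimal sketch of the environment-variable approach in Java; the variable name GREETING is illustrative, and the environment map is passed in explicitly (rather than read from System.getenv) so the lookup is testable:

```java
import java.util.Map;

// 12-factor config: read settings from the environment with a sensible
// default, so the same artifact runs unchanged in dev and prod.
class AppConfig {
    static String get(Map<String, String> env, String key, String defaultValue) {
        String value = env.get(key);
        return (value == null || value.isEmpty()) ? defaultValue : value;
    }
}
```

In production you would call `AppConfig.get(System.getenv(), "GREETING", "hello")`.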
Logs: should be treated as event streams, that is a time-ordered sequence of events emitted from an app. You can’t log to a file in a cloud. You log to stdout / stderr and let the cloud provider or related tools deal with it
Disposability: in a cloud process is disposable – it can be destroyed and created at any time. Designing for this is important to ensure good uptime and to get the benefit of auto scaling, etc. If you have processes that take a while to start up or shut down they should be separated as a backing service and optimized to accelerate performance
Backing Services: a backing service is something your app depends on – like a database or some kind of REST service. The app should declare it needs a backing service via external config. The Cloud will bind your app to the service. And it should be possible to attach and reattach without restarting the app. This loose coupling has a LOT of advantages. It also allows you to use a circuit breaker (part of the Netflix OSS and SCS) to gracefully handle an outage scenario.
Environmental Parity: we’ve all probably worked in situations where a shared dev sandbox has a different scale and reliability profile than QA, which is also different than prod. Keeping environmental consistency is much easier in a Cloud environment like PCF. Doing this and automating as much of the SDLC as possible will help you confidently deploy smaller things more often. Spring Boot executable JARs embed the server, which helps keep environments consistent.
Administrative Process: these are things like timer jobs, one-off scripts and other things you might have done using a programming shell. These are fine in the monolithic world but get complicated when you scale horizontally with multiple instances of the same app trying to kick off a job. An alternate approach might be to break the job apart into its own microservice with a REST endpoint for controlled invocation.
Port Binding: in the non-cloud world it’s typical to see a bunch of apps running in the same container, separating each app by port number and then using DNS to provide a friendly name to access. In the Cloud you avoid this micro-management – the Cloud provider will manage port assignment along with routing, scaling, etc.
Process: the original 12-factor definition here says that apps must be stateless. But state needs to be somewhere! Our guidance is to move any long-running state into external, logical backing services that rely on Redis or Mongo or whatever to manage what they need.
Concurrency: PCF and other cloud platforms are built to scale horizontally. There are design considerations here – your app should be disposable, stateless and use share-nothing processes. This allows you to leverage features like auto-scale, blue-green deployment, etc.
When you deploy a Spring application to Cloud Foundry, Cloud Foundry automatically activates the cloud profile, no reboot / recompile / re-deploy required.
Things like service credentials and hostnames.
The Environment also brings the idea of profiles. It lets you ascribe labels (profiles) to groupings of beans. Use profiles to describe beans and bean graphs that change from one environment to another. You can activate one or more profiles at a time. Beans that do not have a profile assigned to them are always activated. Beans that have the profile default are activated only when no other profiles are active.
Profiles let you describe sets of beans that need to be created differently in one environment versus another. You might, for example, use an embedded H2 javax.sql.DataSource in your local dev profile, but then switch to a javax.sql.DataSource for PostgreSQL that’s resolved through a JNDI lookup or by reading the properties from an environment variable in Cloud Foundry when the prod profile is active. In both cases, your code works: you get a javax.sql.DataSource, but the decision about which specialized instance is used is decided by the active profile or profiles.
You should use this feature sparingly. Ideally, the object graph between one environment and another should remain fairly fixed.
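Outside of Spring, the profile idea boils down to choosing a different construction recipe per environment; a hypothetical plain-Java sketch (the profile names and JDBC URLs below are illustrative, and Spring's @Profile does this declaratively inside the application context):

```java
import java.util.Map;
import java.util.function.Supplier;

// One "bean" recipe per profile, with a default when no profile-specific
// recipe exists -- the essence of @Profile-based wiring.
class ProfileRegistry<T> {
    private final Map<String, Supplier<T>> recipes;
    private final Supplier<T> defaultRecipe;

    ProfileRegistry(Map<String, Supplier<T>> recipes, Supplier<T> defaultRecipe) {
        this.recipes = recipes;
        this.defaultRecipe = defaultRecipe;
    }

    // Build the object appropriate to the active profile.
    T create(String activeProfile) {
        return recipes.getOrDefault(activeProfile, defaultRecipe).get();
    }
}
```

Calling code still just asks for a T (say, a DataSource URL); which specialized instance it gets is decided by the active profile.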
https://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-profiles.html
https://spring.io/blog/2015/01/13/configuring-it-all-out-or-12-factor-app-style-configuration-with-spring
Spring Cloud Connectors provides a simple abstraction for JVM-based applications running on cloud platforms to discover bound services and deployment information at runtime, and provides support for registering discovered services as Spring beans.
This connector discovers services that are bound to an application running in Cloud Foundry. (Since Cloud Foundry enumerates each service in a consistent format, Spring Cloud Connectors does not care which service provider is providing it.)
https://www.openservicebrokerapi.org/ (OSB is a CNCF and CFF standard).
http://cloud.spring.io/spring-cloud-connectors/spring-cloud-cloud-foundry-connector.html
New Relic, Cassandra, DB2, MongoDB, MySQL, Oracle, PostgreSQL, RabbitMQ, Redis, SMTP, SQL Server
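To show the shape of the problem connectors solve, here is a toy sketch that scrapes credential uri fields out of a VCAP_SERVICES-style JSON string; real connectors parse the JSON properly and expose each service as a typed bean (e.g. a javax.sql.DataSource), so treat this as illustration only:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Toy extraction of service credential URIs from a VCAP_SERVICES-style
// JSON string, to illustrate what bound-service discovery works from.
class VcapServices {
    private static final Pattern URI = Pattern.compile("\"uri\"\\s*:\\s*\"([^\"]+)\"");

    static List<String> credentialUris(String vcapServicesJson) {
        List<String> uris = new ArrayList<>();
        Matcher m = URI.matcher(vcapServicesJson);
        while (m.find()) {
            uris.add(m.group(1));
        }
        return uris;
    }
}
```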
Self executable JAR / Java main()
Spring Boot CLI apps
Auto-reconfiguration for simple, single-datasource apps (prototyping)
Advanced memory calculator
JVM heap dump histograms
Many tunable runtime params
Apps pushed with this buildpack will have:
Improved JVM memory calculation, resulting in fewer app terminations
Improved JVM Out of Memory Behavior - JVM terminal failures now include useful troubleshooting data: a histogram of the heap to the logs
Memory calculator configuration is simplified, with the use of standard Java memory flags.
“Because buildpacks don't "contain" copies of binaries, they provide enterprises with a point of governance over what software is deployed with an application. Choosing to use container images as your deployable artifact means more responsibility falls on the platform operators and developers.” (3rd party software in buildpack)
“Another advantage of using buildpacks is tuning runtime parameters for the software in the buildpack stack. ”
Auto-reconfiguration
Cloud Foundry auto-reconfigures applications only if the following items are true for your application:
Only one service instance of a given service type is bound to the application. In this context, different relational databases services are considered the same service type. For example, if both a MySQL and a PostgreSQL service are bound to the application, auto-reconfiguration does not occur.
Only one bean of a matching type is in the Spring application context. For example, you can have only one bean of type javax.sql.DataSource.
With auto-reconfiguration, Cloud Foundry creates the DataSource or connection factory bean itself, using its own values for properties such as host, port, username and so on. For example, if you have a single javax.sql.DataSource bean in your application context that Cloud Foundry auto-reconfigures and binds to its own database service, Cloud Foundry does not use the username, password and driver URL you originally specified. Instead, it uses its own internal values. This is transparent to the application, which really only cares about having a DataSource where it can write data but does not really care what the specific properties are that created the database. Also, if you have customized the configuration of a service, such as the pool size or connection properties, Cloud Foundry auto-reconfiguration ignores the customizations.
For more information about auto-reconfiguration of specific services types, see the Service-Specific Details section.
AppDynamics Agent (Configuration)
Container Customizer (Configuration)
Debug (Configuration)
Dyadic EKM Security Provider (Configuration)
Dynatrace Appmon Agent (Configuration)
Dynatrace SaaS/Managed OneAgent (Configuration)
Google Stackdriver Debugger (Configuration)
Introscope Agent (Configuration)
Java Options (Configuration)
JRebel Agent (Configuration)
JMX (Configuration)
Luna Security Provider (Configuration)
MariaDB JDBC (Configuration) (also supports MySQL)
Metric Writer (Configuration)
New Relic Agent (Configuration)
Play Framework Auto Reconfiguration (Configuration)
Play Framework JPA Plugin (Configuration)
PostgreSQL JDBC (Configuration)
ProtectApp Security Provider (Configuration)
Spring Auto Reconfiguration (Configuration)
Spring Insight
YourKit Profiler (Configuration)
Eureka, Hystrix, and Configuration servers are critical underpinning elements of a microservices architecture. Spring Cloud Services for Pivotal Cloud Foundry (PCF) packages server-side components of Spring Cloud projects, including Spring Cloud Netflix and Spring Cloud Config, and makes them available as services in the PCF Marketplace. This frees you from having to implement and maintain your own managed services in order to use the included projects. You can create a Config Server, Service Registry, or Circuit Breaker Dashboard service instance on-demand, bind to it and consume its functionality, and return to focusing on the value added by your own microservices.
Spring Cloud is great for working with Eureka, Hystrix, and Configuration servers (and much more) on the local developer desktop or in unit testing environments.
When you need to go to production – just swap out your maven / gradle dependencies for the SCS versions.
On the security side SCS offers
- End to end TLS / SSL communication enforcement for inbound and outbound requests
Full integration with PAS Org/Space permission model (RBAC) and UAA identity zones
OAuth2 support
On the Ops side, Cloud Foundry goes way beyond automatic provisioning / de-provisioning. Since SCS is a complete BOSH release, it’s fully Cloud Foundry managed, as is the underpinning infrastructure required for SCS to achieve its capabilities (MySQL for PCF and RabbitMQ for PCF are required). These capabilities are unmatched by our competitors.
Lightweight daemons are the way to go in supporting microservice architecture. When dealing with stateful data, like configuration as a service, naming registries, etc. – having a lightweight process that is quick to boot and shut down, and has the smallest possible scope, is a big advantage. The main reason that lightweight daemons are preferable to similar capabilities that might come as part of a larger product is that daemons boot and shut down faster and are easier to containerize, cluster, and operate. Developing a leader election algorithm, or a data sync / change notification algorithm, is significantly easier with a server that has a small scope of function.
Config Server for Pivotal Cloud Foundry (PCF) is an externalized application configuration service, which gives you a central place to manage an application’s external properties across all environments. As an application moves through the deployment pipeline from development to test and into production, you can use Config Server to manage the configuration between environments and be certain that the application has everything it needs to run when you migrate it. Config Server easily supports labelled versions of environment-specific configurations and is accessible to a wide range of tooling for managing the content.
The concepts on both client and server map identically to the Spring Environment and PropertySource abstractions. They work very well with Spring applications, but can be applied to applications written in any language. The default implementation of the server storage backend uses Git.
Spring Boot Actuator also adds a refresh endpoint to the application. This endpoint is mapped to /refresh, and a POST request to the refresh endpoint refreshes any beans which are annotated with @RefreshScope. You can thus use @RefreshScope to refresh properties which were initialized with values provided by the Config Server.
http://docs.pivotal.io/spring-cloud-services/1-5/common/config-server/writing-client-applications.html
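The refresh semantics can be sketched in plain Java: the bean caches its value and only re-reads the source when refreshed, much as @RefreshScope beans pick up new Config Server values after a POST to /refresh. The greeting property mirrors the slide example; the class itself is hypothetical:

```java
import java.util.function.Supplier;

// Sketch of @RefreshScope semantics: cache a config value and re-read it
// from the (possibly updated) source only when refresh() is called.
class RefreshableGreeting {
    private final Supplier<String> source;
    private String cached;

    RefreshableGreeting(Supplier<String> source) {
        this.source = source;
        this.cached = source.get();
    }

    String greeting() { return cached; }

    // Analogous to POSTing to the /refresh actuator endpoint.
    void refresh() { cached = source.get(); }
}
```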
Service Registry for Pivotal Cloud Foundry (PCF) provides your applications with an implementation of the Service Discovery pattern, one of the key tenets of a microservice-based architecture. Trying to hand-configure each client of a service or adopt some form of access convention can be difficult and prove to be brittle in production. Instead, your applications can use the Service Registry to dynamically discover and call registered services.
When a client registers with the Service Registry, it provides metadata about itself, such as its host and port. The Registry expects a regular heartbeat message from each service instance. If an instance begins to consistently fail to send the heartbeat, the Service Registry will remove the instance from its registry.
Service Registry for Pivotal Cloud Foundry is based on Eureka, Netflix’s Service Discovery server and client.
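A plain-Java sketch of the register / heartbeat / evict cycle described above; the lease duration is illustrative, and timestamps are injected as parameters so the eviction behavior is testable (Eureka handles all of this, plus replication, for you):

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Eureka-style registry sketch: instances register with address metadata,
// renew with heartbeats, and are evicted once their lease goes stale.
class ServiceRegistry {
    private final long leaseMillis;
    private final Map<String, Long> lastHeartbeat = new ConcurrentHashMap<>();
    private final Map<String, String> addresses = new ConcurrentHashMap<>();

    ServiceRegistry(long leaseMillis) { this.leaseMillis = leaseMillis; }

    void register(String name, String address, long now) {
        addresses.put(name, address);
        lastHeartbeat.put(name, now);
    }

    void heartbeat(String name, long now) {
        lastHeartbeat.computeIfPresent(name, (k, v) -> now);
    }

    // Discover an instance, evicting it if its lease has expired.
    Optional<String> discover(String name, long now) {
        Long beat = lastHeartbeat.get(name);
        if (beat == null || now - beat > leaseMillis) {
            addresses.remove(name);
            lastHeartbeat.remove(name);
            return Optional.empty();
        }
        return Optional.of(addresses.get(name));
    }
}
```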
Circuit Breaker Dashboard for Pivotal Cloud Foundry (PCF) provides Spring applications with an implementation of the Circuit Breaker pattern. Cloud-native architectures are typically composed of multiple layers of distributed services. End-user requests may comprise multiple calls to these services, and if a lower-level service fails, the failure can cascade up to the end user and spread to other dependent services. Heavy traffic to a failing service can also make it difficult to repair. Using Circuit Breaker Dashboard, you can prevent failures from cascading and provide fallback behavior until a failing service is restored to normal operation.
When applied to a service, a circuit breaker watches for failing calls to the service. If failures reach a certain threshold, it “opens” the circuit and automatically redirects calls to the specified fallback mechanism. This gives the failing service time to recover.
Circuit Breaker Dashboard for Pivotal Cloud Foundry is based on Hystrix, Netflix’s latency and fault-tolerance library.
The powerfully simple cloud foundry CLI tool makes interacting with PAS easy for developers. This plugin extends the CF CLI for SCS by adding management, and lifecycle commands.
List Applications registered with Service Registry
`cf service-registry-list service-registry`
Service Registry Info
`cf service-registry-info service-registry`
Enable/Disable Application registration on Service Registry
`cf service-registry-enable service-registry app_name`
`cf service-registry-disable service-registry app_name`
Deregister Application from Service Registry
`cf service-registry-deregister service-registry appname [-i #instance]`
Encrypt value via Config Server
`cf config-server-encrypt-value config-server mysecret`
Value generated can be used in Config Server configuration files with the `{cipher}` prefix
Manage SCS service instance backing applications’ state
View status - `cf scs-status scs-si-name`
Stop - `cf scs-stop scs-si-name`
Start - `cf scs-start scs-si-name`
Restart - `cf scs-restart scs-si-name`
Restage - `cf scs-restage scs-si-name`
Every company sets up, from scratch, a pipeline to take code from source control, through unit testing and integration testing, to production. Every company creates some sort of automation to deploy its applications to servers. Enough is enough – time to automate that and focus on delivering business value. Remove manual, error-prone steps, enable rollback and blue/green deployments, and much more. CI / CD pipelines are a critical part of achieving the development and deployment velocity you want – in a safe and repeatable fashion.
“I have stopped counting how many times I’ve done this from scratch” - was one of the responses to the tweet about starting the project called Spring Cloud Pipelines.
ASG == Application Security Groups, the V1 implementation that predated C2C networking.
This enables microservice discovery, client LB
Description
Operators enable/disable Container to Container communication as a global policy
Developers specify which Apps and on which ports direct communication is permitted
Application traffic tagged with VXLAN group policy tag, allowing tag-based access control
Key Use Case(s)
Applications composed of many interconnected microservices
Efficient network traversal when using Spring Cloud Services, no gorouter “hairpinning”
Notes
Containers communicate using a single, system-wide routed (L3) IP network
Changing policy does not require an application restart
Policy is configured through use of a cf CLI plugin or directly via the CF Networking API
Apps Manager is a GUI that developers use to control applications and their lifecycle.
The Apps Manager UI supports several production-ready endpoints from Spring Boot Actuator, among other useful Spring Boot security integration points and auto-detection capabilities.
https://docs.pivotal.io/pivotalcf/2-0/console/using-actuators.html
Apps Manager is a web-based tool to help manage organizations, spaces, applications, services, and users. Apps Manager provides a visual interface for performing the following subset of functions available through the Cloud Foundry Command Line Interface (cf CLI):
Orgs: You can create and manage orgs.
Spaces: You can create, manage, and delete spaces.
Apps: You can scale apps, bind apps to services, manage environment variables and routes, view logs and usage information, start and stop apps, and delete apps.
Services: You can bind services to apps, unbind services from apps, choose and edit service plans, and rename and delete service instances.
Users: You can invite new users, manage user roles, and delete users.
Simply add Spring Cloud Sleuth distributed tracing to your application’s Maven or Gradle dependencies, then attach it to a binder (say, RabbitMQ). PCF Metrics helps you understand and troubleshoot the health and performance of your apps by displaying a dependency graph that traces a request as it flows through your apps and their endpoints, along with the corresponding logs.
PCF Metrics supports out-of-the-box Spring Boot Actuator metrics, custom app metrics, and instance-level metrics visualization.
Spring Cloud Sleuth is a tracer for Java / Spring. These systems are for Collecting, indexing, viewing the span/trace data. Sleuth / Zipkin aggregates all the info.
Send data from your app via logs, RabbitMQ, HTTP…
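A toy sketch of the propagation idea: keep the incoming trace id so spans correlate across services, and mint a new span id per hop. The X-B3-* header names follow the Zipkin convention; everything else here (class and method names, id format) is illustrative, and Sleuth does this transparently:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Sleuth-style trace propagation sketch: reuse the incoming trace id
// (or start a new trace) and generate a fresh span id for each hop.
class Tracer {
    static Map<String, String> nextHopHeaders(Map<String, String> incoming) {
        Map<String, String> out = new HashMap<>();
        out.put("X-B3-TraceId",
                incoming.getOrDefault("X-B3-TraceId", newId())); // keep or start a trace
        out.put("X-B3-SpanId", newId());                         // new span per hop
        return out;
    }

    private static String newId() {
        return UUID.randomUUID().toString().replace("-", "").substring(0, 16);
    }
}
```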
Demo notes:
MySQL stores spans (a span is a single API call; a trace is the end-to-end request)
PCF Metrics also gives you basic, “1st responder” troubleshooting tools to locate where errors may be occurring.
Container Metrics: Three graphs measuring CPU, memory, and disk usage percentages
Network Metrics: Three graphs measuring requests, HTTP errors, and response times
Custom Metrics: User-customizable graphs for measuring app performance, such as Spring Boot Actuator metrics
App Events: A graph of update, start, stop, crash, SSH, and staging failure events
Logs: A list of app logs that you can search, filter, and download
The SCDF for PCF is designed to work with SCS and of course, the RabbitMQ / mySQL, and Redis service broker technology already in the platform.
MySQL for apps, pipelines and task history
RabbitMQ for event messaging
Redis for capturing analytics data
Skipper is for CI of boot apps
CUPS is for user provided services off platform
Metrics Collector is used for throughput rates in Dashboard. It is a REST server that also listens (also a stream consumer) to a common destination (queue or topic) for data. It also performs in-memory metrics aggregation to reconstitute “stream level” throughput rates. SCDF UI hits the REST endpoints via regular polls to get the aggregated metrics to display in the dashboard.
Spring Cloud Data Flow for PCF. This PCF tile auto-provisions all the components (Data Flow server, Redis, RabbitMQ, MySQL) into a managed, cloud-native integration service on PCF.
PCF Scheduler. Extends existing support for one-off tasks with a component that initiates batch jobs on a schedule. Supports Spring Cloud Data Flow task execution. Currently a separate install
What’s at the center of every complex organization? Correction – what should be? Answer: a solid foundation! Pivotal Cloud Foundry has an architecture that lets virtually any vendor, partner, service, or product integrate with the platform, flowing data both into and out of it. PCF exposes simple and flexible APIs for interacting with “service brokers”, an industry-standard concept implemented at the core of the platform. By flowing transactions and metrics through a reliable, secure, and scalable core, businesses can ensure that anything or anyone they communicate with can be managed.
https://pivotal.io/platform/services-marketplace