This is your one-stop-shop introduction to the world of reactive programming. There are lots of such intros out there, even manifestos; we hope this is the one where you don't get lost and it all makes sense. Get a definition of what "reactive" means and why it matters. Learn about Reactive Streams and Reactive Extensions and the emerging ecosystem around them. Get a sense of what going reactive means for the programming model. See lots of hands-on demos introducing the basic concepts of composition libraries using RxJava and Reactor.
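The central contract behind Reactive Streams, a subscriber signalling demand so the publisher never overwhelms it (back-pressure), can be sketched with the JDK's own copy of those interfaces in java.util.concurrent.Flow, without an RxJava or Reactor dependency. A minimal illustrative sketch (the class and method names here are ours, not from any library):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class BackpressureDemo {
    // Publish the integers 1..n and consume them with explicit one-at-a-time demand.
    static List<Integer> collect(int n) throws InterruptedException {
        List<Integer> received = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(1);
        try (SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<Integer>() {
                private Flow.Subscription subscription;
                @Override public void onSubscribe(Flow.Subscription s) {
                    subscription = s;
                    s.request(1);            // back-pressure: ask for exactly one item
                }
                @Override public void onNext(Integer item) {
                    received.add(item);
                    subscription.request(1); // pull the next item only when ready for it
                }
                @Override public void onError(Throwable t) { done.countDown(); }
                @Override public void onComplete() { done.countDown(); }
            });
            for (int i = 1; i <= n; i++) publisher.submit(i); // submit blocks when buffers fill
        }                                                     // close() signals onComplete
        done.await();
        return received;
    }
    public static void main(String[] args) throws InterruptedException {
        System.out.println(collect(3)); // [1, 2, 3]
    }
}
```

RxJava and Reactor implement exactly this contract, adding the rich composition operators the talk demos on top of it.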
As the complexity of web and mobile apps increases, so does the importance of ensuring that your client-side resources load and execute in an optimal and efficient manner. Differences in resource loading, transforming, and fingerprinting techniques can have a dramatic impact on performance and caching. These techniques can dictate whether your users have a joyful or frustrating experience. Attend this talk to learn the Spring MVC performance techniques aimed at keeping your users happy.
A video of this presentation is available from InfoQ:
http://www.infoq.com/presentations/resource-spring-mvc-4-1
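One of the caching techniques the abstract alludes to, content fingerprinting, inserts a hash of a resource's bytes into its filename so the file can be cached indefinitely while the URL changes whenever the content does; Spring MVC's VersionResourceResolver applies this idea. A minimal plain-Java sketch of the hashing step (the class and method names are illustrative, not Spring's API):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Fingerprint {
    // Turn "app.css" into something like "app-1a2b3c4d.css": hash goes before the extension.
    static String fingerprint(String filename, byte[] content) throws NoSuchAlgorithmException {
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        StringBuilder hex = new StringBuilder();
        for (byte b : md5.digest(content)) hex.append(String.format("%02x", b));
        String hash = hex.substring(0, 8);        // a short prefix is enough for cache busting
        int dot = filename.lastIndexOf('.');
        if (dot < 0) return filename + "-" + hash; // no extension: just append the hash
        return filename.substring(0, dot) + "-" + hash + filename.substring(dot);
    }
    public static void main(String[] args) throws Exception {
        System.out.println(fingerprint("app.css", "body{}".getBytes()));
    }
}
```

Because the fingerprinted URL is immutable, the server can send far-future cache headers for it; a changed file produces a new URL automatically.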
Our previous talk "Intro to Reactive Programming" defined reactive programming and provided details around key initiatives such as Reactive Streams and ReactiveX. In this talk we'll focus on where we are today with building reactive web applications. We'll take a look at the choice of runtimes, how Reactive Streams may be applied to network I/O, and what the programming model may look like. While this is a forward-looking talk, we'll spend plenty of time demoing code built with back-pressure-ready libraries available today.
This talk provides a practical overview of new features for web applications in Spring Framework 4.2, including the addition of HTTP streaming, Server-Sent Events, a fine-grained model for cross-origin requests, comprehensive HTTP caching updates, and more. There are also plenty of updates for WebSocket-style messaging, which this talk will cover.
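Server-Sent Events, mentioned above, are ultimately just a plain-text framing sent over a long-lived text/event-stream HTTP response, which is worth seeing once independent of any Spring API. A minimal sketch of that framing (the class and method names are ours):

```java
public class SseFrame {
    // Frame one event: optional "event:" name, one "data:" line per payload line,
    // and a blank line terminating the event.
    static String frame(String event, String data) {
        StringBuilder sb = new StringBuilder();
        if (event != null) sb.append("event: ").append(event).append('\n');
        for (String line : data.split("\n")) sb.append("data: ").append(line).append('\n');
        return sb.append('\n').toString();
    }
    public static void main(String[] args) {
        System.out.print(frame("price", "42"));
        // event: price
        // data: 42
    }
}
```

Spring's SseEmitter (and later WebFlux's Flux-based endpoints) produce this wire format for you; the browser's EventSource API parses it on the other end.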
The many benefits of a RESTful architecture have made it the standard way to design web-based APIs. For example, the principles of REST state that we should leverage standard HTTP verbs, which helps to keep our APIs simple. Server components that are considered RESTful should be stateless, which helps to ensure that they can easily scale. We can leverage caching to gain further performance and scalability benefits.
However, the best practices of REST and security often seem to clash. How should a user be authenticated in a stateless application? How can a secured resource also support caching? Securing RESTful endpoints is further complicated by the fact that security best practices evolve so rapidly.
In this talk Rob will discuss how to properly secure your RESTful endpoints. Along the way we will explore some common pitfalls when applying security to RESTful APIs. Finally, we will see how the new features in Spring Security can greatly simplify securing your RESTful APIs.
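One common answer to the stateless-authentication question raised above is a self-contained signed token that any server holding the key can verify without session state, the idea underlying JWTs. A minimal HMAC sketch of the principle (illustrative only, not Spring Security's API):

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class StatelessToken {
    // token = payload + "." + Base64url(HMAC-SHA256(payload)); no server-side session needed.
    static String sign(String payload, byte[] key) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        byte[] sig = mac.doFinal(payload.getBytes(StandardCharsets.UTF_8));
        return payload + "." + Base64.getUrlEncoder().withoutPadding().encodeToString(sig);
    }
    // Verify by recomputing the signature: any node holding the key can do this statelessly.
    static boolean verify(String token, byte[] key) throws Exception {
        int dot = token.lastIndexOf('.');
        if (dot < 0) return false;
        return sign(token.substring(0, dot), key).equals(token);
    }
    public static void main(String[] args) throws Exception {
        byte[] key = "demo-secret".getBytes(StandardCharsets.UTF_8);
        String token = sign("user=alice;exp=1700000000", key);
        System.out.println(verify(token, key));       // true
        System.out.println(verify(token + "x", key)); // false: tampering breaks the signature
    }
}
```

A production implementation would compare signatures with a constant-time check such as MessageDigest.isEqual and actually enforce the expiry claim; the sketch only shows why no server-side session is required.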
Migrating to Angular 5 for Spring Developers - Gunnar Hillert
Your goal is to migrate your project from AngularJS 1.x to Angular 4 and then Angular 5. This should be straightforward, except you realize that your 3-year-old technology stack is totally outdated (Grunt, RequireJS, Bower et al.). Furthermore, you are using an older AngularJS 1.x version and your architecture does not conform to the latest 1.x architectural recommendations. At this point things start to look daunting. In this talk we discuss the challenges, experiences and reasons for migrating the Spring Cloud Data Flow Dashboard from AngularJS 1.x to Angular 5. We also show how we effectively integrate our Angular front-end with Spring Boot.
High Performance Cloud Native APIs Using Apache Geode - VMware Tanzu
SpringOne Platform 2017
Anna Jung, HCSC; Paul Vermeulen, Pivotal
Traditionally, cloud native APIs contain the logic to convert data from repositories into information. As the dataset grows, it is difficult to scale traditional databases to meet increasing transaction volume. Apache Geode provides high-speed, zero-downtime data access that allows you to build fast, highly available APIs.
In this session, Anna and Paul will cover how to seamlessly integrate Apache Geode's high-performance functions with cloud native APIs. In addition, they will showcase how to test-drive the development of Apache Geode-backed solutions (Test-Driven Development).
Migrating to Angular 4 for Spring Developers - VMware Tanzu
SpringOne Platform 2017
Gunnar Hillert, Pivotal
Your goal is to migrate your project from AngularJS 1.x to Angular 4. This should be straightforward, except you realize that your 3-year-old technology stack is totally outdated (Grunt, RequireJS, Bower et al.). Furthermore, you are using an older AngularJS 1.x version and your architecture does not conform to the latest 1.x architectural recommendations. At this point things start to look daunting. In this talk we discuss the challenges, experiences and reasons for migrating the Spring Cloud Data Flow Dashboard from AngularJS 1.x to Angular 4. We also show how we effectively integrate our Angular front-end with Spring Boot.
Under the Hood of Reactive Data Access (1/2) - VMware Tanzu
SpringOne Platform 2017
Christoph Strobl, Pivotal; Mark Paluch, Pivotal
A huge theme in Spring Framework 5.0 and its ecosystem projects is the native reactive support that empowers you to build end-to-end reactive applications. Reactive data access especially requires a reactive infrastructure. But how is this one different from the ones used before? How does it deal with I/O?
In this session, we will demystify what happens inside the drivers and give you a better understanding of their capabilities. You will learn about the inner mechanics of reactive data access by walking through the reactive drivers that are used in Spring Data.
Expect the unexpected: Anticipate and prepare for failures in microservices b... - Bhakti Mehta
This session covers best practices for building resilient, stable RESTful services that can survive failures in distributed environments, such as transient impulses, random load, stress, or failures from some dependent components. It focuses on various techniques such as circuit breakers, bulkheads, and fail fast to ensure that services stay up and keep running despite failures.
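The circuit-breaker and fail-fast patterns named above can be reduced to a few lines of plain Java: after a threshold of consecutive failures the breaker opens and subsequent calls return a fallback immediately instead of hammering the sick dependency. A deliberately simplified sketch (production libraries such as Hystrix or Resilience4j add a half-open state and recovery timeouts):

```java
public class CircuitBreaker {
    enum State { CLOSED, OPEN }
    private final int threshold;
    private int failures = 0;
    private State state = State.CLOSED;

    CircuitBreaker(int threshold) { this.threshold = threshold; }

    // Fail fast while OPEN; otherwise run the call and count consecutive failures.
    <T> T call(java.util.function.Supplier<T> action, T fallback) {
        if (state == State.OPEN) return fallback;     // fail fast: skip the sick dependency
        try {
            T result = action.get();
            failures = 0;                             // a success resets the failure counter
            return result;
        } catch (RuntimeException e) {
            if (++failures >= threshold) state = State.OPEN;
            return fallback;
        }
    }
    State state() { return state; }

    public static void main(String[] args) {
        CircuitBreaker breaker = new CircuitBreaker(2);
        for (int i = 0; i < 3; i++)
            breaker.call(() -> { throw new RuntimeException("dependency down"); }, "fallback");
        System.out.println(breaker.state()); // OPEN
    }
}
```

The fallback keeps the caller responsive while the open breaker gives the failing dependency time to recover, which is exactly the stability property the session is about.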
Refactor your Java EE application using Microservices and Containers - Arun G... - Codemotion
Codemotion Rome 2015 - This talk will provide a quick introduction to Docker images (build time), containers (run time), and registry (distribution). It shows how to take an existing Java EE application and package it as a monolithic application in a single Docker image. The application will then be refactored into multiple microservices and assembled together using orchestration. Unit and integration testing of such applications will be discussed and shown as well. Design patterns and anti-patterns that show how to create a cluster of such applications will be demonstrated and discussed.
Down-to-Earth Microservices with Java EE - Reza Rahman
Microservices have become the new kid on the buzzword block in our ever colorful industry. In this session we will explore what microservices really mean within the relatively well-established context of distributed computing/SOA, when they make sense, and how to develop them using the lightweight, simple, productive Java EE programming model.
We'll explore microservices using a simple but representative example in Java EE. You'll see how the Java EE programming model and APIs like JAX-RS, WebSocket, JSON-P, JSON-B, Bean Validation, CDI, JPA, EJB 3, JMS 2 and JTA align with the concept of microservices.
It may or may not surprise you to learn in the end that you already know more about microservices than you realize and that it is an architectural style that does not really require you to learn an entirely new tool set beyond the ones you already have. You might even see that Java EE is a particularly powerful and elegant tool set for developing microservices.
Developing rich multimedia applications with Kurento: a tutorial for Java Dev... - Luis Lopez
This presentation contains a tutorial showing how Java developers can create rich multimedia applications with Kurento. Java developers will find the Kurento development model natural: it is based on standard Java EE technologies and is inspired by the WWW Servlet model.
If you have ever developed a Web application, you may be familiar with this scheme. At the browser, HTML and JavaScript code is in charge of user interaction and generates HTTP requests to the server. This code is usually programmed with the help of APIs such as jQuery, DOM, XHR or others. Upon reception, HTTP requests are processed by some kind of server-side technology (e.g. PHP, Java, Ruby, etc.) using service APIs providing features such as DB access, communications, transactions, XML parsing, and others. As a result, an HTTP response is issued and sent back to the client. Following this scheme, both server- and client-side APIs are just capabilities simplifying developer work and providing abstractions for programming faster and more efficiently.
Kurento technologies adapt to the Web development model so that, from a programmer's perspective, Kurento can be seen as just an additional set of APIs. Developers do not need to learn novel programming schemes and can reuse all their knowledge and previous background in WWW application development. When you need multimedia, just use the Kurento APIs. For the rest, use your preferred APIs or reuse previous code. The Kurento APIs have been designed for simplicity, and Web developers will find them familiar and intuitive. Most of the low-level details related to codecs, formats, protocols, profiles and containers are abstracted by the framework. Programmers just concentrate on specifying the sequence of processing steps that they want to execute on the media flows.
HTTP/2 comes to Java. What Servlet 4.0 means to you. DevNexus 2015 - Edward Burns
It’s hard to overstate how much has changed in the world since HTTP 1.1 went final in June of 1999. There were no smartphones, Google had not yet IPO’d, Java Swing was less than a year old… you get the idea. Yet for all that change, HTTP remains at version 1.1.
Change is finally coming. HTTP 2.0 should be complete by 2015, and with that comes the need for a new version of Servlet. It will embrace HTTP 2.0 and expose its key features to Java EE 8 applications. This session gives a peek into the progress of the Servlet spec and shares some ideas about how developers can take advantage of this exciting update to the world's most successful application protocol on the world's most popular programming language.
Reactive Java EE - Let Me Count the Ways! - Reza Rahman
As our industry matures, there are pockets of increased demand for high-throughput, low-latency systems heavily utilizing event-driven programming and asynchronous processing. This trend is gradually converging on the somewhat well-established but so far not well-understood term "Reactive".
This session explores how vanilla Java SE and Java EE align with this movement via features and APIs like JMS, MDB, EJB @Asynchronous, JAX-RS/Servlet/WebSocket async, CDI events, Java EE concurrency utilities and so on. We will also see how these robust facilities can be made digestible even in the most complex cases for mere mortal developers through Java SE 8 lambdas and CompletableFuture.
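The Java SE 8 facilities the abstract refers to, lambdas and CompletableFuture, let asynchronous steps be composed without nested callbacks; a minimal self-contained illustration:

```java
import java.util.concurrent.CompletableFuture;

public class AsyncCompose {
    // Compose two async computations and a combining step without callback nesting.
    static String fetchAndCombine() {
        return CompletableFuture
                .supplyAsync(() -> "order-42")                    // fetch asynchronously
                .thenApply(String::toUpperCase)                   // transform when it arrives
                .thenCombine(CompletableFuture.supplyAsync(() -> "shipped"),
                             (id, status) -> id + ": " + status)  // join two futures
                .join();                                          // block only at the edge
    }
    public static void main(String[] args) {
        System.out.println(fetchAndCombine()); // ORDER-42: shipped
    }
}
```

Each stage runs when its input completes, so the pipeline reads top to bottom even though the work is asynchronous, which is what makes the Java EE async facilities "digestible" when driven through these types.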
WebRTC infrastructures in the large (with experiences on real cloud deployments) - Luis Lopez
WebRTC technologies are currently showing their potential for providing peer-to-peer real-time communications in a seamless and scalable way. However, most relevant use cases demanded by users require further features such as group communications, media recording and media interoperability. Providing them requires the presence of WebRTC media infrastructures that are sometimes complex to manage and to scale.
In this talk, we present the experiences of the Kurento.org team creating auto-scalable WebRTC infrastructures in the large. Following results generated by the NUBOMEDIA and FIWARE research projects, we introduce stateless and stateful scalability models, which provide different scalability definitions and properties. Stateless models are suitable for services requiring a large number of WebRTC sessions with few participants each. Such models are commonly deployed today and are compatible with the current state of the art in RTP topologies (e.g. following SFU or MCU architectures). On the other hand, stateful models are capable of scaling to very large sessions (with thousands or hundreds of thousands of participants) but require new types of RTP topologies beyond plain SFU and MCU models.
During the talk, we also show how to deploy such stateful and stateless infrastructures on top of IaaS clouds such as Amazon or OpenStack so that their scalability can be automatically managed. We also present the different KPIs that auto-scaling algorithms may use, as well as our experiences on their accuracy and appropriateness. To conclude, we introduce some real-world problems in such deployments related to infrastructure monitoring and instrumentation, fault-tolerance and fault-resilience mechanisms, and security issues.
Cloud-Native Streaming and Event-Driven Microservices - VMware Tanzu
Marius Bogoevici, Spring Cloud Stream Lead
Join us for an introduction to Spring Cloud Stream, a framework for creating event-driven microservices that builds on the ease of development and execution of Spring Boot, the cloud-native capabilities of Spring Cloud, and the message-driven programming model of Spring Integration. See how Spring Cloud Stream's abstractions and opinionated primitives allow you to easily build applications that can interchangeably use RabbitMQ, Kafka or Google Pub/Sub without changing the application logic. Finally, we will show how these applications can be orchestrated and deployed on different modern runtimes such as Cloud Foundry, Kubernetes or Mesos using Spring Cloud Data Flow.
We look at how core features introduced in Spring Framework 5, such as Reactive support, have been woven into the Spring Data, Spring Security, and Spring WebFlux projects, drawing on content presented at the SpringOne Platform event held in San Francisco in December 2017. We also examine how these features make your system more responsive and efficient.
SpringOne Platform 2017
Miranda LeBlanc, Liberty Mutual
For early adopters, CI/CD and DevOps are obvious choices for driving software innovation at lightning speed, but how do you go about motivating the entire IT organization? At Liberty Mutual Insurance, we've been on a DevOps, Agile and CI/CD journey for at least the last 10 years. Come hear how we've organically grown a culture supporting CI/CD practices and what our current struggles are in transforming a 100-year-old insurance company to run like a start-up.
IO State In Distributed API Architecture - Owen Rubel
The API pattern binds IO functionality to business functionality by binding IO state either through annotations (i.e. JAX) or by extending a RestfulController. As a result, the data associated with IO state cannot be shared with other architectural instances because it is bound to the controller. This creates architectural cross-cutting concerns not only with the functionality but also with the data. By abstracting the functionality, we can create a versioned data object for IO state that can be shared, cached, synced, and reloaded on the fly across all architectural instances without having to restart any instance. This greatly improves the automation, performance and flow of API applications and architecture.
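The idea can be sketched in plain Java: IO rules live in a versioned value object held in a registry that any instance can consult and hot-swap, rather than in annotations on a controller. All names below are illustrative, not from any framework:

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class IoStateRegistry {
    // IO rules for one endpoint, decoupled from any controller class.
    record IoState(long version, Set<String> allowedRoles, Set<String> responseFields) {}

    private final Map<String, IoState> states = new ConcurrentHashMap<>();
    private final AtomicLong versions = new AtomicLong();

    // Hot-reload: swap the rules for an endpoint without restarting any instance.
    void reload(String endpoint, Set<String> roles, Set<String> fields) {
        states.put(endpoint, new IoState(versions.incrementAndGet(), roles, fields));
    }
    IoState lookup(String endpoint) { return states.get(endpoint); }

    public static void main(String[] args) {
        IoStateRegistry registry = new IoStateRegistry();
        registry.reload("/user/show", Set.of("ROLE_ADMIN"), Set.of("id", "name"));
        registry.reload("/user/show", Set.of("ROLE_ADMIN", "ROLE_USER"), Set.of("id", "name"));
        System.out.println(registry.lookup("/user/show").version()); // 2
    }
}
```

Because the IoState value is plain data with a version, it can be cached, synced across instances, or replaced at runtime, which is exactly the decoupling from controller-bound annotations the talk argues for.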
Documenting RESTful APIs with Spring REST Docs - VMware Tanzu
SpringOne Platform 2017
Jennifer Strater, Zenjob
RESTful APIs are eating the world, yet all too often the documentation can cause indigestion for the APIs' developers and their users. Developers have to deal with annotation overload, repetition, and an unpleasant writing environment. Users are then left with documentation that's inaccurate and difficult to use. It doesn't have to be this way.
This talk will introduce Spring REST Docs and its test-driven approach to RESTful API documentation. We'll look at how it combines the power of Asciidoctor and your integration tests to produce documentation that's accurate and easy to read, while keeping your code DRY and free from annotation overload. We'll look at features that are new in Spring REST Docs, focusing on support for documenting APIs that have been implemented using Spring Framework 5's WebFlux.
SpringOne Platform 2017
Ryan Baxter, Pivotal
You have heard and seen great things about Spring Cloud and decide it is time to dive in and try it out yourself. You fire up your browser, head to Google, and land on the Spring Cloud homepage. Then it hits you: where do you begin? What does each of these projects do? Do you need to use all of them, or can you be selective? The number of projects under the Spring Cloud umbrella has grown immensely over the past couple of years, and if you are a newcomer to the Spring Cloud ecosystem it can be quite daunting to sift through the projects to find what you need. By the end of this talk you will leave with a solid understanding of the Spring Cloud projects, how to use them to build cloud native apps, and the confidence to get started!
Cloud Configuration Ecosystem at Intuit - VMware Tanzu
SpringOne Platform 2017
Marcello de Sales, Intuit
"Configuration management at Intuit has been reshaped over the last 18 months since the adoption of Spring Cloud Config Server. This work represents a breakthrough in configuration management practices, changing how Intuit has implemented configuration management since the company’s inception more than 20 years ago. In essence, applications ranging from desktop software to service monoliths started their migration to the cloud without breaking their own DNA: configuration was still part of the binary built on Continuous Integration to be deployed in different data centers. As a consequence, we were still facing the same old challenges: what happens when a new configuration change is required for the entire fleet across multiple private data centers and the cloud? The new answer lies in the adoption of Spring Cloud Config Server as our One Intuit Configuration Service using the SaaS model, which represents a shift from manual operational changes to simple Pull Requests on the related GitHub Enterprise repositories.
Needless to say, with adopters ranging from small internal services to giants like TurboTax and QuickBooks, used by millions of users worldwide, this configuration practice and service has delivered impressive results, such as cutting the time to change configuration from hours to minutes without involving the Operations team, while keeping configuration consistent across a fleet of services. On the other hand, the strong adoption rate brought a set of new challenges in supporting this approach in the enterprise: How do we properly architect Spring Cloud Config to be deployed as a SaaS application in the enterprise? How can we guarantee that users are pushing valid configuration properties to their repos? How can we help them debug their properties consistently, without relying solely on GitHub Pull Requests? Finally, what if we need to replicate this solution for mobile clients? Do we need to deploy hundreds of configuration servers in the cloud and, consequently, take the hit on cost?
Overall, the solutions to the questions above comprise a SaaS deployment of Spring Cloud Config with some enterprise tweaks for security and performance. We then created a GitHub pre-receive hook called Spring Cloud Config Validator to validate users' config repositories, and a web application called Spring Cloud Config Inspector that helps users debug their config keys and associated values, secrets, etc. Lastly, our Spring Cloud Config Publisher solution allows users' applications to consume a subset of their config properties from an Amazon S3 bucket that the publisher publishes to at every new valid commit."
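As a sketch of the pattern described above, a Spring Cloud Config client names itself and points at the central Config Server, while the server points at a Git-backed config repository. All names and URLs below are illustrative, not Intuit's actual setup:

```yaml
# bootstrap.yml in a client service (illustrative names)
spring:
  application:
    name: payments-service            # resolves to payments-service.yml in the config repo
  cloud:
    config:
      uri: https://config.example.com # the central Config Server
      fail-fast: true                 # refuse to start if config cannot be fetched

---
# application.yml on the Config Server itself
spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.example.com/org/config-repo
```

With this in place, a configuration change is a Pull Request against the config repo rather than a rebuild and redeploy of the service binary.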
Building a Secure App with Google Polymer and Java / Spring - sdeeg
Polymer is the latest web framework out of Google. Designed completely around the emerging Web Components standards, it has the lofty goal of making it easy to build apps based on these low level primitives. Along with Polymer comes a new set of Elements (buttons, dialog boxes and such) based on the ideas of "Material Design". These technologies together make it easy to build responsive, componentized "Single Page" web applications that work for browsers on PCs or mobile devices. But what about the backend, and how do we make these apps secure? In this talk Scott Deeg will take you through an introduction to Polymer and its related technologies, and then through the build-out of a full-blown cloud-based app with a secure, ReSTful backend based on Spring ReST, Spring Cloud, and Spring Security, using Thymeleaf for backend rendering jobs. At the end he will show the principles applied in a tool he's currently building. The talk will be mainly code walkthrough and demo, and assumes familiarity with Java/Spring and JavaScript.
12 Factor, or Cloud Native Apps – What EXACTLY Does that Mean for Spring Deve... - cornelia davis
Talk given at SpringOne 2015
The third platform, characterized by a fluid infrastructure where virtualized servers come into and out of existence, and workloads are constantly being moved about and scaled up and down to meet variable demand, calls for new design patterns, processes and even culture. One of the most well known descriptions of these new paradigms is the Twelve Factor App (12factor.net), which describes elements of cloud native applications. Many of these needs are squarely met through the Spring Framework, others require support from other systems. In this session we will examine each of the twelve factors and present how Spring, and platforms such as Cloud Foundry satisfy them, and in some cases we’ll even suggest that responsibility should shift from Spring to platforms. At the conclusion you will understand what is needed for cloud-native applications, why and how to deliver on those requirements.
Designing, Implementing, and Using Reactive APIs - VMware Tanzu
SpringOne Platform 2017
Paul Harris, Pivotal; Ben Hale, Pivotal
The Java community is on the cusp of a major change in programming model. As the industry moves towards high-performance micro-service architectures, the need for a reactive programming model becomes clear. In this session, the lead developers of the Cloud Foundry Java Client will talk about what led them to choose a reactive API. Using that project as a lens, they'll explore how they designed and implemented this API using Project Reactor and what users will expect when using a reactive API. If you are a developer looking to provide reactive APIs, this is your chance to gain the experience of a team building a large, production-ready reactive library.
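The back-pressure contract at the heart of reactive APIs can be illustrated with nothing but the JDK: since Java 9, `java.util.concurrent.Flow` carries the Reactive Streams interfaces, and `SubmissionPublisher` is a ready-made publisher. This is a minimal sketch of the underlying contract (not Project Reactor itself): a subscriber that requests one item at a time, so the publisher can never outrun it.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class BackpressureDemo {
    // A subscriber that requests items one at a time: the publisher may only
    // emit what has been demanded. This is back-pressure in the Reactive
    // Streams sense.
    static class OneAtATime implements Flow.Subscriber<Integer> {
        final List<Integer> received = new ArrayList<>();
        final CountDownLatch done = new CountDownLatch(1);
        private Flow.Subscription subscription;

        public void onSubscribe(Flow.Subscription s) {
            subscription = s;
            s.request(1);              // demand exactly one item
        }
        public void onNext(Integer item) {
            received.add(item);
            subscription.request(1);   // signal readiness for the next one
        }
        public void onError(Throwable t) { done.countDown(); }
        public void onComplete()         { done.countDown(); }
    }

    public static List<Integer> run() throws InterruptedException {
        OneAtATime sub = new OneAtATime();
        try (SubmissionPublisher<Integer> pub = new SubmissionPublisher<>()) {
            pub.subscribe(sub);
            for (int i = 1; i <= 5; i++) pub.submit(i); // blocks if demand is exhausted
        } // close() signals onComplete once buffered items are delivered
        sub.done.await();
        return sub.received;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // [1, 2, 3, 4, 5]
    }
}
```

Reactor's `Flux` and `Mono` implement the same `request(n)` protocol; the difference is the rich operator vocabulary layered on top of it.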
Lattice: A Cloud-Native Platform for Your Spring Applications - Matt Stine
As presented at SpringOne2GX 2015 in Washington, DC.
Lattice is a cloud-native application platform that enables you to run your applications in containers like Docker, on your local machine via Vagrant. Lattice includes features like:
Cluster scheduling
HTTP load balancing
Log aggregation
Health management
Lattice does this by packaging a subset of the components found in the Cloud Foundry elastic runtime. The result is an open, single-tenant environment suitable for rapid application development, similar to Kubernetes and Mesos. Applications developed using Lattice should migrate unchanged to full Cloud Foundry deployments.
Lattice can be used by Spring developers to spin up powerful micro-cloud environments on their desktops, and can be useful for developing and testing cloud-native application architectures. Lattice already has deep integration with Spring Cloud and Spring XD, and you’ll have the opportunity to see deep dives into both at this year’s SpringOne 2GX. This session will introduce the basics:
Installing Lattice
Lattice’s Architecture
How Lattice Differs from Cloud Foundry
How to Package and Run Your Spring Apps on Lattice
SpringOne Platform 2016
Speakers: Kevin Hoffman; Advisory Solutions Architect, Pivotal & Chris Umbel; Advisory Architect, Pivotal
With the advent of ASP.NET Core, developers can now build cross-platform microservices in .NET. We can build services on the Mac, Windows, or Linux and deploy anywhere--most importantly to the cloud.
In this session we'll talk about Cloud Native .NET, building .NET microservices, and deploying them to the cloud. We'll build services that participate in a robust ecosystem by consuming OSS servers such as Spring Cloud Configuration Server and Eureka. We'll also show how these .NET microservices can take advantage of circuit breakers and be automatically deployed to the cloud via CI/CD pipelines.
12 Factor, or Cloud Native Apps - What EXACTLY Does that Mean for Spring Deve... - VMware Tanzu
SpringOne Platform 2016
Speaker: Thomas Gamble; Director, Development, Home Depot
Your team is excited about getting started with Spring Boot and Cloud Native, but you're not entirely sure you're ready to have the team continuously delivering to prod using cf push from their local desktops. The freedom of cloud native development can be very empowering for developers, but it shouldn't be something that terrifies the operations and security teams. We'll discuss how you can set up a fast and reliable deployment process, as well as some interesting things to think about in the future.
Developing Real-Time Data Pipelines with Apache Kafka - Joe Stein
Developing Real-Time Data Pipelines with Apache Kafka (http://kafka.apache.org/) is an introduction for developers to why and how to use Apache Kafka. Apache Kafka is a publish-subscribe messaging system rethought as a distributed commit log. Kafka is designed to allow a single cluster to serve as the central data backbone. A single Kafka broker can handle hundreds of megabytes of reads and writes per second from thousands of clients. It can be elastically and transparently expanded without downtime. Data streams are partitioned and spread over a cluster of machines to allow data streams larger than the capability of any single machine and to allow clusters of coordinated consumers. Messages are persisted on disk and replicated within the cluster to prevent data loss. Each broker can handle terabytes of messages. For the Spring user, Spring Integration Kafka and Spring XD provide integration with Apache Kafka.
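The partitioning described above can be sketched in plain Java. Kafka's real default partitioner hashes the serialized record key (with murmur2) modulo the partition count; the toy version below uses a stdlib hash instead, but the invariant is the same: records with equal keys always land on the same partition, which is what preserves per-key ordering.

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class PartitionSketch {
    // Assign a record to a partition by hashing its key, so all records with
    // the same key map to the same partition (and thus stay ordered).
    static int partitionFor(String key, int numPartitions) {
        byte[] bytes = key.getBytes(StandardCharsets.UTF_8);
        int hash = Arrays.hashCode(bytes);
        return (hash & 0x7fffffff) % numPartitions; // strip sign bit, then mod
    }

    public static void main(String[] args) {
        int partitions = 6;
        for (String key : new String[] {"user-42", "user-42", "user-7"}) {
            System.out.println(key + " -> partition " + partitionFor(key, partitions));
        }
    }
}
```

In a real cluster each partition additionally has replicas on other brokers, so losing a machine loses neither data nor ordering guarantees.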
Fast 5 Things You Can Do Now to Get Ready for the Cloud - VMware Tanzu
SpringOne Platform 2019
Speaker: Robert Sirchia, Practice Lead, Magenic Technologies
YouTube: https://youtu.be/WLw82cV0Lwk
Building Highly Scalable Spring Applications using In-Memory Data Grids - John Blum
Slides from the presentation Luke Shannon and I gave at SpringOne2GX 2015 in Washington, D.C., on Tuesday, September 15th, from 10:30 AM to 12:00 PM EDT.
Session details @ https://2015.event.springone2gx.com/schedule/sessions/building_highly_scalable_spring_applications_with_in_memory_distributed_data_grids.html.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0! - SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
Unlock the Future of Search with MongoDB Atlas: Vector Search Unleashed - Malak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
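Under the hood, vector search ranks documents by a similarity metric between embedding vectors, most commonly cosine similarity; at scale, an index such as Atlas Vector Search approximates the brute-force nearest-neighbour scan below with ANN structures. A minimal, self-contained sketch (Java for consistency with the rest of this collection; the three-dimensional "embeddings" are made up for illustration):

```java
public class VectorSearchSketch {
    // Cosine similarity: dot(a,b) / (|a| * |b|). Higher means the vectors
    // point in more similar directions, i.e. the documents are "closer".
    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na  += a[i] * a[i];
            nb  += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    // Brute-force nearest neighbour over a small corpus: returns the index
    // of the document whose embedding is most similar to the query.
    static int nearest(double[] query, double[][] corpus) {
        int best = -1;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (int i = 0; i < corpus.length; i++) {
            double score = cosine(query, corpus[i]);
            if (score > bestScore) { bestScore = score; best = i; }
        }
        return best;
    }

    public static void main(String[] args) {
        double[][] docs = { {1, 0, 0}, {0.9, 0.1, 0}, {0, 0, 1} };
        System.out.println(nearest(new double[] {1, 0.05, 0}, docs));
    }
}
```

Real embeddings have hundreds or thousands of dimensions and come from a model; the ranking logic, however, is exactly this.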
UiPath Test Automation using UiPath Test Suite series, part 6 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
The UiPath Test Automation with generative AI and OpenAI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with OpenAI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Enhancing adoption of Open Source Libraries: A case study on Albumentations.AI - Vladimir Iglovikov, Ph.D.
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster, ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work, along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Essentials of Automations: The Art of Triggers and Actions in FME - Safe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Communications Mining Series - Zero to Hero - Session 1 - DianaGray10
This session provides an introduction to UiPath Communication Mining, its importance, and a platform overview. You will acquire a good understanding of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
20 Comprehensive Checklist of Designing and Developing a Website - Pixlogix Infotech
Dive into the world of Website Designing and Developing with Pixlogix! Looking to create a stunning online presence? Look no further! Our comprehensive checklist covers everything you need to know to craft a website that stands out. From user-friendly design to seamless functionality, we've got you covered. Don't miss out on this invaluable resource! Check out our checklist now at Pixlogix and start your journey towards a captivating online presence today.
A tale of scale & speed: How the US Navy is enabling software delivery from l... - sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Removing Uninteresting Bytes in Software Fuzzing - Aftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speed up fuzzing campaigns by pinpointing and eliminating uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing XML documents, and Binutils' readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
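The core idea can be sketched as greedy seed trimming, in the spirit of AFL's own trim stage (DIAR's actual analysis is more involved than this): drop any byte whose removal leaves an abstracted coverage fingerprint unchanged. The coverage function below is a toy stand-in for a real instrumented target.

```java
import java.util.function.Function;

public class SeedTrimSketch {
    // Greedily drop each byte whose removal leaves the program's observed
    // behaviour (abstracted here as a coverage fingerprint) unchanged.
    static byte[] trim(byte[] seed, Function<byte[], Integer> coverage) {
        int baseline = coverage.apply(seed);
        byte[] current = seed;
        for (int i = current.length - 1; i >= 0; i--) {
            byte[] candidate = new byte[current.length - 1];
            System.arraycopy(current, 0, candidate, 0, i);
            System.arraycopy(current, i + 1, candidate, i, current.length - i - 1);
            if (coverage.apply(candidate) == baseline) {
                current = candidate; // byte i was uninteresting: keep the smaller seed
            }
        }
        return current;
    }

    public static void main(String[] args) {
        // Toy "target": coverage only depends on whether the seed starts with "AB",
        // so everything after that prefix is uninteresting padding.
        Function<byte[], Integer> cov = s ->
            (s.length >= 2 && s[0] == 'A' && s[1] == 'B') ? 2 : 1;
        byte[] trimmed = trim("ABxxxx".getBytes(), cov);
        System.out.println(new String(trimmed)); // prints "AB"
    }
}
```

Every mutation cycle then spends its budget on the two bytes that actually influence behaviour instead of the four that never did.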
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
GridMate - End to end testing is a critical piece to ensure quality and avoid... - ThomasParaiso2
End to end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Generative AI Deep Dive: Advancing from Proof of Concept to Production - Aggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.