THEFT-PROOF JAVA EE - SECURING YOUR JAVA EE APPLICATIONS | Markus Eisele
Security in applications is a never-ending story. Most of the knowledge about how to build secure applications is derived from experience, and we've all made the same mistakes every Java EE developer makes over and over again. But how do you solve the real business requirements behind access and authorization with Java EE? Can I have a 15k rights matrix? Does that perform? How do you secure the transport layer? How does session binding work? Can I implement two-factor authentication? And what about social integrations? This talk outlines the key capabilities of the Java EE platform and introduces the audience to additional frameworks and concepts that help with implementing all kinds of security requirements in Java EE-based applications.
Generic Objects - Bill Wei - ManageIQ Design Summit 2016 | ManageIQ
Generic objects allow for defining and creating new types of objects in ManageIQ that are not officially supported. This provides flexibility to model additional resources. Generic objects have attributes and relationships that are defined via generic object definitions. They can be accessed throughout ManageIQ and used in the UI, automation, and REST APIs. Current development focuses on CRUD operations for definitions and objects, with future work including improved querying, relationships, and other capabilities.
IPaaS 2.0: Fuse Integration Services (Robert Davies & Keith Babo) | Red Hat Developers
Red Hat JBoss Fuse Integration Services delivers cloud-based integration on OpenShift by Red Hat, enabling continuous delivery of tested, production-ready integration solutions. Combining a drag-and-drop, code-free UI with the integration power of Apache Camel, Fuse Integration Services is the next-generation iPaaS. In this session, we'll walk you through why iPaaS is important, the current Fuse Integration Services roadmap, and the innovation happening in open source community projects to make this a reality.
Testing Event Driven Architectures: How to Broker the Complexity | Frank Kilc... | HostedbyConfluent
This document discusses testing event-driven architectures. It begins by defining common event-driven architecture patterns like event notifications and event sourcing. It then discusses brokering the complexity of event-driven architectures by describing how events are communicated between producers and consumers via channels. The document outlines what information should be included in events like payloads and headers. It also discusses the difference between orchestration and choreography in event-driven systems. It provides an example of how events can be used to mediate changes within a system using order validation. Finally, it demonstrates how to test event-driven architectures using specifications and discusses accelerating API quality through testing tools that support multiple protocols and definitions.
GraphQL is an emerging API standard that provides a flexible alternative for data-intensive operations. It is particularly good at querying and retrieving data in optimized forms that make applications more efficient. While GraphQL focuses on what it does best, we still need to ensure that our GraphQL services are exposed in a secure, controlled, monitored, and sometimes even monetized environment. This is where an API gateway that understands GraphQL queries, mutations, and subscriptions can add significant value.
This deck explores the following:
- Introduction to GraphQL
- Exposing GraphQL services as managed APIs
- Authentication
- Authorization
- Rate limiting
- Invoking GraphQL APIs exposed via WSO2 API Manager
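To make the "managed API" idea above concrete, the sketch below assembles the pieces of a GraphQL call as a client would send it through a gateway such as WSO2 API Manager: an OAuth2 bearer token for authentication and a JSON body carrying the query and variables. The gateway URL, token, and query schema are hypothetical placeholders, not values from the webinar.

```python
import json

GATEWAY_URL = "https://gateway.example.com/graphql/1.0.0"  # hypothetical endpoint

def build_graphql_request(query: str, variables: dict, token: str):
    """Assemble the HTTP pieces of a GraphQL call routed through a gateway.

    The gateway typically validates the bearer token, applies rate-limiting
    policies, and only then forwards the query to the upstream service.
    """
    headers = {
        "Authorization": f"Bearer {token}",   # authentication at the gateway
        "Content-Type": "application/json",
    }
    body = json.dumps({"query": query, "variables": variables})
    return GATEWAY_URL, headers, body

url, headers, body = build_graphql_request(
    "query($id: ID!) { order(id: $id) { status total } }",
    {"id": "1001"},
    "example-access-token",
)
```

Any HTTP client could then POST `body` with `headers` to `url`; the upstream GraphQL service never sees unauthenticated traffic.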
Watch the webinar on-demand here - https://wso2.com/library/webinars/2019/11/exposing-graphqls-as-managed-apis/
End-to-End Security with Confluent Platform | Confluent
(Vahid Fereydouny, Confluent) Kafka Summit SF 2018
Security and compliance are key concerns for many organizations today and it is very important that we can meet these requirements in our platform. This is also extremely critical for customers who are adopting Confluent cloud offerings, since moving the streaming platform to cloud exposes new security and governance issues.
In this session, we will discuss how Confluent is providing control and visibility to address these concerns and enable secure streaming platforms. We will cover the main pillars of IT security in access control (authentication, authorization), data confidentiality (encryption) and auditing.
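The client-facing half of the pillars listed above can be sketched with librdkafka-style configuration keys, as accepted by the confluent-kafka Producer and Consumer. The hostname, principal, and password below are made-up placeholders, not values from the talk.

```python
# Authentication and encryption are configured on the client; authorization
# is enforced broker-side with ACLs (see the CLI sketch in the comment below).
secure_client_config = {
    "bootstrap.servers": "broker.example.com:9093",   # hypothetical host
    # Authentication: SASL credentials carried over TLS
    "security.protocol": "SASL_SSL",
    "sasl.mechanism": "PLAIN",
    "sasl.username": "svc-orders",                    # hypothetical principal
    "sasl.password": "change-me",
    # Data confidentiality: TLS with a pinned CA certificate
    "ssl.ca.location": "/etc/kafka/ca.pem",
}

# Authorization (broker side), e.g. with the kafka-acls tool:
#   kafka-acls --add --allow-principal User:svc-orders \
#              --operation Read --topic orders
```

A `confluent_kafka.Consumer(secure_client_config)` built from this dict would connect over TLS and authenticate before any data flows; auditing is then a matter of broker-side log configuration.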
This document provides an overview of Red Hat's middleware stack and how Spring Boot applications can be deployed on it. It discusses Red Hat middleware products like WildFly and KeyCloak, as well as OpenShift for Kubernetes-based application deployment. It also covers tools like Fabric8 for building and deploying Docker images to OpenShift and CE & Obsidian for integrating various products and generating quickstarts. Finally, it announces some demos of KeyCloak and Artemis integration with Spring Boot applications.
Kubernetes Connectivity to Cloud Native Kafka | Christina Lin and Evan Shorti... | HostedbyConfluent
If you want to build an ecosystem of streaming data on your Kafka platform, you will need a much easier way for your developers to quickly move what’s on the source to your cluster. Better yet, make the connector serverless so it does not waste any resources while idle, and have a trusted partner manage your Kafka infrastructure for you.
In this session, we will show you how easy we have made streaming data, with a great user experience and flexible resource management thanks to our new secret weapon in the Apache Camel project: Kamelet. We’ll also demonstrate how Red Hat OpenShift Streams for Apache Kafka simplifies provisioning Kafka deployments in a public cloud, managing the cluster and topics, and configuring secure access to the Kafka cluster for your developers.
Cloud native policy enforcement with Open Policy Agent | LibbySchulze
This document provides an introduction to Open Policy Agent (OPA), an open source general purpose policy engine. It discusses how OPA can help manage policy in increasingly distributed systems by providing a unified toolset for defining and enforcing policies across the stack. Key points include:
- OPA decouples policy from application logic and allows policies to be written and tested using the declarative Rego language.
- OPA has a vibrant community with many integrations and production users, and is commonly used for use cases like Kubernetes admission control and microservice authorization.
- The document provides examples of how OPA can be used to enforce policies for systems like Kubernetes through validating admission controllers.
- Options for deploying
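Real OPA policies are written in Rego; as a language-neutral illustration of the Kubernetes admission-control use case above, the Python function below mimics the decision an OPA validating admission webhook might return for a Pod: deny any container whose image does not come from an approved registry. The registry name and Pod are invented examples.

```python
APPROVED_REGISTRY = "registry.internal.example.com/"  # hypothetical registry

def admission_decision(pod: dict) -> dict:
    """Return an OPA-style allow/deny decision with denial reasons."""
    denials = [
        f"container {c['name']} uses unapproved image {c['image']}"
        for c in pod.get("spec", {}).get("containers", [])
        if not c["image"].startswith(APPROVED_REGISTRY)
    ]
    # Policy is pure data-in/decision-out, decoupled from application logic,
    # which is exactly what makes it easy to test in isolation.
    return {"allowed": not denials, "reasons": denials}

pod = {"spec": {"containers": [
    {"name": "app", "image": "registry.internal.example.com/app:1.2"},
    {"name": "sidecar", "image": "docker.io/library/busybox:latest"},
]}}
decision = admission_decision(pod)
```

In a real deployment the API server would send the Pod to OPA as an AdmissionReview and reject the request when `allowed` is false.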
Accelerating Innovation with Apache Kafka, Heikki Nousiainen | Heikki Nousiai... | HostedbyConfluent
Being a pioneer in the interactive gaming industry, Sony PlayStation has played a vital role in implementing technological advancements, helping bring the global video gaming community together. With the recent launch of the next-generation PS5 console, in partnership with thousands of game developers and millions of video gamers across the globe, the generation of humongous volumes of data on PlayStation servers is inevitable. This presentation talks about how we leveraged big data technologies along with Apache Kafka to solve some real-time data analytics problems. Two important case studies we carried out recently are: "Competitive pricing analysis of game titles across online video game marketplaces" and "Understanding gamer sentiment by streaming data from social feeds and performing NLP".
Along with Apache Kafka, the technologies that we have used to architect the solution are: REST API, ZooKeeper, D3.js visualization, DoMo, Python, SQL, NLP, AWS Cloud & JSON.
This document discusses Infrastructure as a Service (IaaS) and Software Defined Networking (SDN).
IaaS allows consumers to provision computing resources like servers, storage, and networking and deploy their own operating systems and applications. The consumer does not manage the underlying cloud infrastructure. SDN abstracts traditional network equipment by separating the control and data planes, using a centralized controller and open standards like OpenFlow. This allows network configuration through software instead of dedicated hardware. The document then provides examples of how IaaS resources and SDN architecture could be implemented in a cloud computing environment.
Exposing and Controlling Kafka Event Streaming with Kong Konnect Enterprise |... | HostedbyConfluent
Event streaming allows companies to build more scalable and loosely coupled real-time applications supporting massive concurrency demands and simplifying the construction of services.
At the same time, API management provides capabilities to securely control the consumption of upstream services, including the event processing infrastructure.
This session shows how Kong Konnect Enterprise can complement Kafka Event Streaming, exposing it to new and external consumers while applying specific and critical policies to control its consumption, including API key, OAuth/OIDC and others for authentication, rate limiting, caching, log processing, etc.
Server Sent Events using Reactive Kafka and Spring Web flux | Gagan Solur Ven... | HostedbyConfluent
Server-Sent Events (SSE) is a server push technology where clients receive automatic server updates through a secure HTTP connection. SSE can be used in apps like live stock updates that need one-way data communication, and it helps replace long polling by maintaining a single connection and keeping a continuous event stream going through it. We used a simple Kafka producer to publish messages onto Kafka topics and developed a reactive Kafka consumer by leveraging Spring WebFlux to read data from a Kafka topic in a non-blocking manner and send it to clients registered with the Kafka consumer without closing any HTTP connections. This implementation allows us to send data in a fully asynchronous, non-blocking manner and to handle a massive number of concurrent connections. We’ll cover:
•Pushing data to external or internal apps in near real time
•Pushing data into files and securely copying them to any cloud service
•Handling multiple third-party app integrations
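The SSE wire format itself is simple enough to sketch: each event is a block of "field: value" lines terminated by a blank line. A reactive Kafka consumer (for example via Spring WebFlux, as in the talk) streams frames like these over one long-lived HTTP response; the frame-building helper and the stock-tick payload below are illustrative.

```python
def sse_frame(data, event=None, event_id=None):
    """Serialize one Server-Sent Event frame per the SSE wire format."""
    lines = []
    if event_id is not None:
        lines.append(f"id: {event_id}")      # lets clients resume after reconnect
    if event is not None:
        lines.append(f"event: {event}")      # named event type
    lines.append(f"data: {data}")
    return "\n".join(lines) + "\n\n"         # blank line terminates the event

frame = sse_frame('{"symbol": "ACME", "price": 101.5}',
                  event="tick", event_id="42")
```

A browser `EventSource` subscribed to such a stream would dispatch a "tick" event per frame and, on reconnect, send the last seen `id` in the `Last-Event-ID` header.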
This document discusses using NGINX as an API gateway for microservices architectures. It describes how NGINX can provide essential API gateway functions like API routing, authentication, overload protection, and request tracing in a lightweight and efficient manner. The document advocates for separating the roles of a secure proxy and API gateway to handle north-south and east-west traffic respectively. Key API gateway capabilities of NGINX like API routing, authentication using API keys or JWT, and request tracing are demonstrated with code examples.
Real-Time ETL in Practice with WSO2 Enterprise Integrator | WSO2
The availability of timely information and data is critical for modern enterprises. Delays of minutes are not acceptable in many cases, and data needs to be available in real time. However, legacy systems that can’t generate data streams consumable in real time still exist. These legacy systems emit their output as static data stores such as files or DB tables. Integrating these static data sources in real time is crucial, and this is where real-time ETL comes to the rescue.
This deck explores how WSO2 Streaming Integrator can be used for real-time ETL with techniques such as change data capture and file streaming.
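The file-streaming idea can be boiled down to a small sketch: instead of re-reading a legacy output file on a schedule, remember the offset already processed and emit only newly appended lines. WSO2 Streaming Integrator implements this (plus change data capture for databases) natively; the code below only stands in for the concept, using an in-memory buffer in place of a real file.

```python
import io

def stream_new_lines(f, offset):
    """Yield lines appended after `offset`; callers save f.tell() afterwards."""
    f.seek(offset)
    for line in f:
        yield line.rstrip("\n")

# Simulate a legacy system writing rows to a static file.
buf = io.StringIO("row1\nrow2\n")

first_pass = list(stream_new_lines(buf, 0))   # initial load
offset = buf.tell()                           # remember how far we got

buf.write("row3\n")                           # legacy system appends a record
second_pass = list(stream_new_lines(buf, offset))  # incremental pass
```

Only `row3` flows downstream on the second pass, which is what turns a static file into something a streaming pipeline can consume continuously.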
Watch the webinar on-demand here - https://wso2.com/library/webinars/2020/03/real-time-etl-in-practice-with-wso2-enterprise-integrator/
During this talk, the speaker provided a detailed overview of the Elasticsearch search system, gave an insight into offline search tools, and suggested how to fine-tune Elasticsearch depending on specific goals.
This presentation by Mykhailo Brodskyi (Senior Software Engineer, Consultant, GlobalLogic, Kharkiv) was delivered at GlobalLogic Kharkiv Java Conference 2018 on June 10, 2018.
Azure Cosmos DB Kafka Connectors | Abinav Rameesh, Microsoft | HostedbyConfluent
The document discusses Kafka connectors for Cosmos DB that allow for seamless integration between the two services without requiring complex application code. It provides an overview of Kafka Connect and connectors, use cases for integrating Cosmos DB and Kafka, and the architecture of source and sink connectors that can read from and write to Cosmos DB and Kafka. It also previews a demo of the connectors and suggests ways to take integration further.
New Chargeback - Sergio Ocon - ManageIQ Design Summit 2016 | ManageIQ
This document proposes a new chargeback system in ManageIQ to accurately report the financial costs of virtualization and cloud environments. It would integrate with other systems like data warehouses, ERP systems, and billing to understand infrastructure costs and analyze the economic impact of different actions. Costs would be tracked and grouped by activity centers like instances, storage, networks etc. and modifiers like support levels. Expenses would track costs associated with users and services. The system would provide consumption intelligence, budgeting, rating, billing and external cost integration to help strategically manage costs.
Platforms-as-a-service provide a fantastic application developer experience, enabling large-scale zero-downtime deployments in a repeatable and scalable way. But data services are often left behind and require manual deployment and day-2 operations. The next evolution in PaaS provides a range of managed services, such as DataStax Cassandra, for developers to quickly utilise in their cloud native applications.
This talk describes the approach and challenges of building managed services such as DataStax Enterprise Cassandra with automated lifecycle management using BOSH & Pivotal Cloud Foundry including a detailed discussion of the ease of Day 2 operations such as software upgrades and backups that is supported in the offering.
The presentation includes a demonstration on the use of BOSH and Pivotal Cloud Foundry to build a managed DataStax Enterprise Cassandra service that allows operators to provide a comprehensive Cassandra offering that deploys production ready clusters.
About the Speakers
Ben Lackey Partner Architect, DataStax
I work in the Cloud Strategy group at DataStax where I concentrate on improving the integration between DataStax Enterprise and cloud platforms including Azure, GCP and Pivotal.
Damian O'connor Product Manager, Pivotal
I'm a Technical Product Manager working with Pivotal's Cloud Services team and based out of our Dublin office. My role is to provide Pivotal Cloud Foundry customers with an industry leading Cassandra service running on the Pivotal Cloud Native platform.
Using Redis Streams To Build Event Driven Microservices And User Interface In... | Redis Labs
The document summarizes Bobby Calderwood's presentation on using Redis Streams to build event-driven microservices and user interfaces in Clojure(Script). The presentation covers how Redis Streams were used to facilitate asynchronous processing and distributed consistency for a customer project. It also discusses how Carmine, the Clojure Redis client, was updated to support Redis Streams shortly after their release. The presentation concludes with a demo of how Redis Streams can be used to retrofit an existing system with asynchronous integration.
Understanding Kafka Produce and Fetch api calls for high throughtput applicat... | HostedbyConfluent
The data team at Cloudflare uses Kafka to process tens of petabytes a day. All this data is moved using the two foundational Kafka API calls: Produce (API key 0) and Fetch (API key 1). Understanding the structure of these calls (and of the underlying RecordSet structure) is key to building high-throughput clients.
The talk describes the basics of the Kafka wire protocol (API keys, correlation ids) and the structure of the Produce and Fetch calls. It shows how the asynchronous nature of the wire protocol can combine with the structure of the Produce and Fetch calls to increase latency and reduce client throughput; a solution is offered through the use of synchronous single-partition calls.
The RecordSet structure, which is used to encode and store sets (batches) of records, is described, and its implications for Fetch requests are discussed. The relationship between Fetch API calls and "consume" operations is discussed, as is the impact of offset alignment to RecordSet boundaries.
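The header fields mentioned above can be seen directly in the wire encoding. As a sketch (not the talk's own code), the function below packs a Kafka request header in the v1 layout: int16 API key, int16 API version, int32 correlation id, then a length-prefixed client id; a client would prepend an int32 size frame before sending. The API version and client id values are illustrative.

```python
import struct

PRODUCE_API_KEY = 0   # as noted in the abstract
FETCH_API_KEY = 1

def encode_request_header(api_key, api_version, correlation_id, client_id):
    """Pack a Kafka request header (v1 layout) as big-endian bytes."""
    cid = client_id.encode("utf-8")
    # >h = int16, >i = int32; client_id is an int16-length-prefixed string
    return struct.pack(">hhih", api_key, api_version, correlation_id,
                       len(cid)) + cid

header = encode_request_header(FETCH_API_KEY, 11, 7, "demo-client")
# The broker echoes correlation_id in its response, which is how a client
# matches asynchronous replies to its in-flight requests.
```

Decoding the first ten bytes recovers the fixed fields, which is useful when inspecting captured traffic to reason about batching and latency.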
Navigating the Ecosystem of Pivotal Cloud Foundry Tiles | Altoros
For application developers, PCF tiles are arguably the easiest way to run Redis, Elasticsearch, Cassandra, or any other backing service with applications in the cloud.
Azure Days 2019: Infrastructure as Code on Azure (Jonas Wanninger & Daniel H... | Trivadis
These days you don't just write applications in code. Thanks to the cloud, the configuration of infrastructure such as virtual machines or networks is also defined in code and delivered automatically. This is known as Infrastructure as Code (IaC). There are many tools for Infrastructure as Code on Azure, such as Ansible, Puppet, Chef, etc. Two solutions stand out for their differing approaches: Azure Resource Manager (ARM) templates, the Microsoft-native solution, always up to date but tied to Azure; and Terraform by HashiCorp, built on a descriptive language but with fewer features in the security area. We compared the two technologies for a large customer and present the results in this session with live demos.
SpringBoot and Spring Cloud Service for MSA | Oracle Korea
This session covers how to develop applications for MSA in a cloud environment using Service Discovery, Circuit Breaker, and similar patterns with Spring Boot and Spring Cloud Services, and examines how the container ecosystem in the cloud, led by Kubernetes, influences MSA.
0-330km/h: Porsche's Data Streaming Journey | Sridhar Mamella, Porsche | HostedbyConfluent
The auto industry is in the midst of a data revolution that is transforming how companies do business. Once a scarce resource, data has now become abundant and cheap. What are the new technologies that change the way we produce, collect, process, store, and analyze data? What new streams of data are being created with Industry 4.0 and the Internet of Things on the horizon, and is there significant value in taking a strategic approach to fast data? How is Porsche building its next-level data streaming platform with open source technologies, and how are we using CI/CD pipelines, among other things, to serve our use cases?
Monoliths to Microservices with Java EE and Spring Boot | Tiera Fann, MBA
This document summarizes a hands-on workshop about transforming monolithic applications into microservices using Java EE and Spring Boot frameworks on OpenShift. The workshop will demonstrate how to use the "strangling the monolith" approach to incrementally extract services from a monolithic codebase into independent microservices deployed on OpenShift. Attendees will learn how to implement microservices using WildFly Swarm and Spring Boot, add concerns like health checks and configuration, and discuss best practices for migrating applications to microservices.
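The "strangling the monolith" approach can be pictured as a routing layer in front of both systems: paths already extracted go to the new microservice, everything else still hits the monolith. The route table and service names below are made-up examples, not from the workshop.

```python
EXTRACTED_ROUTES = {
    "/catalog": "catalog-service",   # already migrated (e.g. to WildFly Swarm)
    "/cart": "cart-service",         # already migrated (e.g. to Spring Boot)
}

def route(path):
    """Pick the backend for a request path, defaulting to the monolith."""
    for prefix, service in EXTRACTED_ROUTES.items():
        if path == prefix or path.startswith(prefix + "/"):
            return service
    return "monolith"

# As more services are extracted, entries move into EXTRACTED_ROUTES until
# the monolith serves nothing and can be retired.
```

On OpenShift this routing layer would typically be the router/ingress itself, so migrating a path is a configuration change rather than a code change.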
This talk covers the principles and best practices for creating flexible Microsoft .NET Core-based applications in connection with Microsoft Azure services, as well as tools and libraries that greatly simplify the development, configuration, and deployment of applications. Attention is also paid to some pitfalls that may be encountered while using .NET Core.
This presentation by Andrii Antilikatorov, Consultant at GlobalLogic Kharkiv, was delivered at GlobalLogic Kharkiv MS TechTalk #2 on November 4, 2017.
Lessons from the field: Catalog of Kafka Deployments | Joseph Niemiec, Cloudera | HostedbyConfluent
Streaming architectures have been on the rise steadily and as a result, we have seen the adoption of Kafka go up too. With the diverse spread of use cases across multiple industries, we have seen a variety of Kafka deployments across our hundreds of Kafka customers. Along the way, we have learnt some best practices as well as what not to do in mission-critical architectures. Join Joe Niemiec, Sr. Product Manager at Cloudera, as he shares these insights in this session, which covers topics such as: the many ways that Kafka has been deployed in the field; standalone clusters, multiple clusters in a single data center, and multiple geographically distributed clusters performing replication; clusters of all sizes, small and large, from a few messages to hundreds of thousands per second; a discussion of architecture failure domains; and configurations tuned and used in specific deployments.
Security enforcement of Java Microservices with Apiman & Keycloak | Charles Moulliard
This document summarizes approaches for securing Java microservice applications at different levels:
1) The endpoint level using frameworks like Spring Security or interceptors to apply authentication and authorization.
2) The web container level by applying constraints to restrict access to resources based on roles.
3) An external API management layer that acts as a proxy, enforcing centralized policies before requests reach endpoints.
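As a toy sketch of level 1 (endpoint-level enforcement), the decorator below checks a required role before the endpoint body runs, similar in spirit to what Spring Security filters or JAX-RS interceptors do. The user model and endpoint are invented for illustration.

```python
import functools

class Forbidden(Exception):
    """Raised when the caller lacks the required role."""

def require_role(role):
    """Interceptor-style decorator enforcing authorization at the endpoint."""
    def decorator(endpoint):
        @functools.wraps(endpoint)
        def wrapper(user, *args, **kwargs):
            if role not in user.get("roles", ()):   # authorization check
                raise Forbidden(f"needs role {role!r}")
            return endpoint(user, *args, **kwargs)  # only then run the body
        return wrapper
    return decorator

@require_role("admin")
def delete_account(user, account_id):
    return f"deleted {account_id}"

result = delete_account({"name": "alice", "roles": ["admin"]}, "acct-1")
```

Levels 2 and 3 move the same check outward: the web container enforces it declaratively via role constraints, and an API management proxy such as Apiman enforces it centrally before the request ever reaches this code.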
Cloud native policy enforcement with Open Policy AgentLibbySchulze
This document provides an introduction to Open Policy Agent (OPA), an open source general purpose policy engine. It discusses how OPA can help manage policy in increasingly distributed systems by providing a unified toolset for defining and enforcing policies across the stack. Key points include:
- OPA decouples policy from application logic and allows policies to be written and tested using the declarative Rego language.
- OPA has a vibrant community with many integrations and production users, and is commonly used for use cases like Kubernetes admission control and microservice authorization.
- The document provides examples of how OPA can be used to enforce policies for systems like Kubernetes through validating admission controllers.
- Options for deploying
Accelerating Innovation with Apache Kafka, Heikki Nousiainen | Heikki Nousiai...HostedbyConfluent
Being a pioneer in the interactive gaming industry, SONY PlayStation has played a vital role in implementing technological advancements thus help bringing global video gaming community together. With the recent launch of next generation console PS-5 into the market by partnering with thousands of game developers and millions of video gamers across the globe, humongous volumes of data generation in playstation servers is quite inevitable. This presentation talks about how we leveraged big data technologies along with Apache Kafka to solve some of the realtime data analytical problems. Two important case studies we carryout recently are: ""Competitive pricing analysis of game titles across online video game marketplaces"" & ""understand the gamers sentiment by streaming data from social feeds and perform NLP""
Along with Apache Kafka, the technologies that we have used to architect the solution are: REST API, ZooKeeper, D3.js visualization, DoMo, Python, SQL, NLP, AWS Cloud & JSON.
This document discusses Infrastructure as a Service (IaaS) and Software Defined Networking (SDN).
IaaS allows consumers to provision computing resources like servers, storage, and networking and deploy their own operating systems and applications. The consumer does not manage the underlying cloud infrastructure. SDN abstracts traditional network equipment by separating the control and data planes, using a centralized controller and open standards like OpenFlow. This allows network configuration through software instead of dedicated hardware. The document then provides examples of how IaaS resources and SDN architecture could be implemented in a cloud computing environment.
Exposing and Controlling Kafka Event Streaming with Kong Konnect Enterprise |...HostedbyConfluent
Event streaming allows companies to build more scalable and loosely coupled real-time applications supporting massive concurrency demands and simplifying the construction of services.
At the same time, API management provides capabilities to securely control the upstream services consumption, including the event processing infrastructure.
This session shows how Kong Konnect Enterprise can complement Kafka Event Streaming, exposing it to new and external consumers while applying specific and critical policies to control its consumption, including API key, OAuth/OIDC and others for authentication, rate limiting, caching, log processing, etc.
Server Sent Events using Reactive Kafka and Spring Web flux | Gagan Solur Ven...HostedbyConfluent
Server-Sent Events (SSE) is a server push technology where clients receive automatic server updates through the secure http connection. SSE can be used in apps like live stock updates, that use one way data communications and also helps to replace long polling by maintaining a single connection and keeping a continuous event stream going through it. We used a simple Kafka producer to publish messages onto Kafka topics and developed a reactive Kafka consumer by leveraging Spring Webflux to read data from Kafka topic in non-blocking manner and send data to clients that are registered with Kafka consumer without closing any http connections. This implementation allows us to send data in a fully asynchronous & non-blocking manner and allows us to handle a massive number of concurrent connections. We’ll cover:
•Push data to external or internal apps in near real time
•Push data onto the files and securely copy them to any cloud services
•Handle multiple third-party apps integrations
This document discusses using NGINX as an API gateway for microservices architectures. It describes how NGINX can provide essential API gateway functions like API routing, authentication, overload protection, and request tracing in a lightweight and efficient manner. The document advocates for separating the roles of a secure proxy and API gateway to handle north-south and east-west traffic respectively. Key API gateway capabilities of NGINX like API routing, authentication using API keys or JWT, and request tracing are demonstrated with code examples.
Real-Time ETL in Practice with WSO2 Enterprise IntegratorWSO2
The availability of timely information and data is critical for modern enterprises. Delays of minutes are not acceptable in many cases, and data needs to be available in real time. However, legacy systems that can't generate streams consumable in real time still exist; they emit their output as static data stores such as files or database tables. Integrating these static data sources in real time is crucial, and this is where real-time ETL comes to the rescue.
This deck explores how WSO2 Streaming Integrator can be used for real-time ETL with techniques such as change data capture and file streaming.
Watch the webinar on-demand here - https://wso2.com/library/webinars/2020/03/real-time-etl-in-practice-with-wso2-enterprise-integrator/
During this talk, the speaker provided a detailed overview of the Elasticsearch search system, gave an insight into offline search tools, and suggested how to fine-tune Elasticsearch depending on specific goals.
This presentation by Mykhailo Brodskyi (Senior Software Engineer, Consultant, GlobalLogic, Kharkiv) was delivered at GlobalLogic Kharkiv Java Conference 2018 on June 10, 2018.
Azure Cosmos DB Kafka Connectors | Abinav Rameesh, MicrosoftHostedbyConfluent
The document discusses Kafka connectors for Cosmos DB that allow for seamless integration between the two services without requiring complex application code. It provides an overview of Kafka Connect and connectors, use cases for integrating Cosmos DB and Kafka, and the architecture of source and sink connectors that can read from and write to Cosmos DB and Kafka. It also previews a demo of the connectors and suggests ways to take integration further.
New Chargeback - Sergio Ocon - ManageIQ Design Summit 2016ManageIQ
This document proposes a new chargeback system in ManageIQ to accurately report the financial costs of virtualization and cloud environments. It would integrate with other systems like data warehouses, ERP systems, and billing to understand infrastructure costs and analyze the economic impact of different actions. Costs would be tracked and grouped by activity centers like instances, storage, networks etc. and modifiers like support levels. Expenses would track costs associated with users and services. The system would provide consumption intelligence, budgeting, rating, billing and external cost integration to help strategically manage costs.
Platforms-as-a-service provide a fantastic application developer experience, enabling large-scale zero-downtime deployments in a repeatable and scalable way. But data services are often left behind, requiring manual deployment and day 2 operations. The next evolution in PaaS provides a range of managed services, such as DataStax Cassandra, for developers to quickly utilise in their cloud native applications.
This talk describes the approach and challenges of building managed services such as DataStax Enterprise Cassandra with automated lifecycle management using BOSH & Pivotal Cloud Foundry including a detailed discussion of the ease of Day 2 operations such as software upgrades and backups that is supported in the offering.
The presentation includes a demonstration on the use of BOSH and Pivotal Cloud Foundry to build a managed DataStax Enterprise Cassandra service that allows operators to provide a comprehensive Cassandra offering that deploys production ready clusters.
About the Speakers
Ben Lackey Partner Architect, DataStax
I work in the Cloud Strategy group at DataStax where I concentrate on improving the integration between DataStax Enterprise and cloud platforms including Azure, GCP and Pivotal.
Damian O'Connor Product Manager, Pivotal
I'm a Technical Product Manager working with Pivotal's Cloud Services team and based out of our Dublin office. My role is to provide Pivotal Cloud Foundry customers with an industry leading Cassandra service running on the Pivotal Cloud Native platform.
Using Redis Streams To Build Event Driven Microservices And User Interface In...Redis Labs
The document summarizes Bobby Calderwood's presentation on using Redis Streams to build event-driven microservices and user interfaces in Clojure(Script). The presentation covers how Redis Streams were used to facilitate asynchronous processing and distributed consistency for a customer project. It also discusses how Carmine, the Clojure Redis client, was updated to support Redis Streams shortly after their release. The presentation concludes with a demo of how Redis Streams can be used to retrofit an existing system with asynchronous integration.
Understanding Kafka Produce and Fetch api calls for high throughtput applicat...HostedbyConfluent
The data team at Cloudflare uses Kafka to process tens of petabytes a day. All this data is moved using the two foundational Kafka API calls: Produce (API key 0) and Fetch (API key 1). Understanding the structure of these calls (and of the underlying RecordSet structure) is key to building high-throughput clients.
The talk describes the basics of the Kafka wire protocol (API keys, correlation IDs) and the structure of the Produce and Fetch calls. It shows how the asynchronous nature of the wire protocol can combine with the structure of these calls to increase latency and reduce client throughput; a solution is offered through the use of synchronous single-partition calls.
The RecordSet structure, which encodes and stores sets (batches) of records, is described, along with its implications for Fetch requests. The relationship between Fetch API calls and "consume" operations is discussed, as is the impact of offset alignment to RecordSet boundaries.
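The API keys and correlation IDs mentioned above live in the request header that prefixes every Kafka call. A minimal sketch of that header, assuming the standard protocol layout (on the wire the whole request is additionally prefixed with an int32 size; the constants and client id here are illustrative):

```python
import struct

PRODUCE, FETCH = 0, 1   # the two foundational API keys described in the talk

def request_header(api_key, api_version, correlation_id, client_id):
    """Kafka request header (v1): int16 api_key, int16 api_version,
    int32 correlation_id, then client_id as an int16-length-prefixed string."""
    cid = client_id.encode()
    return struct.pack(">hhih", api_key, api_version, correlation_id, len(cid)) + cid

hdr = request_header(FETCH, 11, 42, "demo")
```

The correlation ID is what makes the protocol's asynchrony workable: a client can have many requests in flight on one connection and match each response back to its request by this number.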
Navigating the Ecosystem of Pivotal Cloud Foundry TilesAltoros
For application developers, PCF tiles are arguably the easiest way to run Redis, Elasticsearch, Cassandra, or any other backing service with applications in the cloud.
Azure Days 2019: Infrastructure as Code auf Azure (Jonas Wanninger & Daniel H...Trivadis
Nowadays you don't just write applications as code. Thanks to the cloud, the configuration of infrastructure such as virtual machines or networks is also defined as code and delivered automatically. This is known as Infrastructure as Code, or IaC. There are many tools for Infrastructure as Code on Azure, such as Ansible, Puppet, Chef, etc. Two solutions stand out because of their different approaches: Azure Resource Manager (ARM) templates as the Microsoft-native solution, always up to date but tied to Azure; and on the other side Terraform from HashiCorp, built on a descriptive language but with fewer features in the security area. We compared the two technologies for a large customer and present the results in this session with live demos.
SpringBoot and Spring Cloud Service for MSAOracle Korea
Learn how to develop applications for MSA in a cloud environment using Service Discovery, Circuit Breaker, and similar patterns with Spring Boot and Spring Cloud Services, and how the container ecosystem in the cloud, led by Kubernetes, affects MSA.
0-330km/h: Porsche's Data Streaming Journey | Sridhar Mamella, PorscheHostedbyConfluent
The auto industry is in the midst of a data revolution that is transforming how companies do business. Once a scarce resource, data has now become abundant and cheap. What are the new technologies changing the way we produce, collect, process, store, and analyze data? What new streams of data are being created with Industry 4.0 and the Internet of Things on the horizon, and is there significant value in taking a strategic approach to fast data? How is Porsche building the next-level data streaming platform with open source technologies, and how are we using CI/CD pipelines, among other things, to serve our use cases?
Monoliths to Microservices with Jave EE and Spring BootTiera Fann, MBA
This document summarizes a hands-on workshop about transforming monolithic applications into microservices using Java EE and Spring Boot frameworks on OpenShift. The workshop will demonstrate how to use the "strangling the monolith" approach to incrementally extract services from a monolithic codebase into independent microservices deployed on OpenShift. Attendees will learn how to implement microservices using WildFly Swarm and Spring Boot, add concerns like health checks and configuration, and discuss best practices for migrating applications to microservices.
This talk covers the principles and best practices for creating flexible Microsoft .NET Core-based applications in connection with Microsoft Azure services, as well as tools and libraries that greatly simplify the development, configuration, and deployment of applications. Attention is also paid to some pitfalls that may be encountered while using .NET Core.
This presentation by Andrii Antilikatorov, Consultant at GlobalLogic Kharkiv, was delivered at GlobalLogic Kharkiv MS TechTalk #2 on November 4, 2017.
Lessons from the field: Catalog of Kafka Deployments | Joseph Niemiec, ClouderaHostedbyConfluent
Streaming architectures have been on the rise steadily, and as a result we have seen the adoption of Kafka go up too. With the diverse spread of use cases across multiple industries, we have seen a variety of Kafka deployments across our hundreds of Kafka customers. Along the way, we have learnt some best practices as well as what not to do in mission-critical architectures. Join Joe Niemiec, Sr. Product Manager at Cloudera, as he shares these insights in this session, which covers:
- The many ways Kafka has been deployed in the field: standalone clusters, multiple clusters in a single data center, and multiple geographically distributed clusters performing replication
- Clusters of all sizes, small and large, from a few messages to hundreds of thousands per second
- A discussion of architecture failure domains
- Configurations tuned and used in specific deployments
Security enforcement of Java Microservices with Apiman & KeycloakCharles Moulliard
This document summarizes approaches for securing Java microservice applications at different levels:
1) The endpoint level using frameworks like Spring Security or interceptors to apply authentication and authorization.
2) The web container level by applying constraints to restrict access to resources based on roles.
3) An external API management layer that acts as a proxy, enforcing centralized policies before requests reach endpoints.
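The endpoint-level approach (item 1) is typically an interceptor that enforces roles before the handler runs. A language-neutral sketch of that idea as a decorator (the handler and role names are hypothetical; in Java this is what a Spring Security annotation or servlet filter does):

```python
import functools

class Forbidden(Exception):
    """Raised when the caller lacks the required role."""

@functools.lru_cache(maxsize=None)
def _noop():  # placeholder to keep functools imported explicitly
    return None

def requires_role(role):
    """Endpoint-level check in the spirit of a Spring Security interceptor."""
    def decorator(handler):
        @functools.wraps(handler)
        def wrapper(user, *args, **kwargs):
            if role not in user.get("roles", ()):
                raise Forbidden(f"missing role: {role}")
            return handler(user, *args, **kwargs)
        return wrapper
    return decorator

@requires_role("admin")
def delete_order(user, order_id):
    return f"deleted {order_id}"

result = delete_order({"name": "alice", "roles": ["admin"]}, 7)
try:
    delete_order({"name": "bob", "roles": []}, 8)
    denied = False
except Forbidden:
    denied = True
```

The other two levels move the same decision elsewhere: the web container applies it via declarative constraints, and the API management layer applies it centrally before traffic ever reaches the endpoint.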
High Availability - Brett Thurber - ManageIQ Design Summit 2016ManageIQ
This document discusses high availability options for ManageIQ/CloudForms, including traditional active/passive HA clustering, pglogical for database replication, BDR for multi-master database replication across sites, and using containers and Kubernetes for service distribution and availability. While traditional HA is complex, newer technologies like pglogical, BDR and containers may provide simpler paths to high availability in the future.
Apache CXF 3.0 introduces major refactoring and improvements to deployment options, REST/JAX-RS support, security features, and out-of-the-box services. Key changes include implementing JAX-RS 2.0, upgrading OSGi and Blueprint support, adding security services like XKMS and improving the WS-Security implementation. The goal is to release Apache CXF 3.0 by the end of April 2013 after milestones and testing are completed.
Single Sign On (SSO) allows a user to authenticate once and gain access to multiple related systems without re-authenticating. SSO uses protocols like SAML and OAuth to issue authentication tokens after initial login. SAML is an XML-based standard that transfers user identity and attribute data between an identity provider and service provider using assertions. Metadata ensures secure transactions by allowing providers to look up authentication endpoints and validate digital signatures. The SSO workflow involves a user authenticating with an identity provider, which issues a token for the user to access a service provider. Major SSO providers include Microsoft, IBM, Red Hat, and ForgeRock.
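The issue-then-validate split described above can be sketched with a signed assertion. Note the heavy simplification: real SAML assertions are XML signed with XML-DSig using public keys published in metadata, not a shared HMAC key; this sketch only illustrates how the service provider trusts a subject without re-authenticating it:

```python
import hashlib
import hmac

def issue_assertion(subject, idp_key):
    """Identity-provider side: sign the subject after the user's initial login."""
    sig = hmac.new(idp_key, subject.encode(), hashlib.sha256).hexdigest()
    return subject + "|" + sig

def accept_assertion(assertion, idp_key):
    """Service-provider side: validate the signature before granting access."""
    subject, sig = assertion.rsplit("|", 1)
    expected = hmac.new(idp_key, subject.encode(), hashlib.sha256).hexdigest()
    return subject if hmac.compare_digest(expected, sig) else None

key = b"idp-signing-key"  # stand-in for the IdP's signing material
assertion = issue_assertion("alice@example.com", key)
subject = accept_assertion(assertion, key)
# A tampered subject reusing the original signature must be rejected
tampered = accept_assertion("mallory@example.com|" + assertion.rsplit("|", 1)[1], key)
```

The signature check is exactly the role metadata plays in real deployments: it gives the service provider the material needed to verify that an assertion genuinely came from the identity provider.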
SambaXP 2014: Trusting Active Directory with FreeIPA: a story beyond SambaAlexander Bokovoy
This document discusses integrating FreeIPA with Active Directory through cross-forest trusts. It describes how FreeIPA provides identity management similar to Active Directory and can be configured to trust an Active Directory domain. This allows FreeIPA and Active Directory users to access each other's services. The document also discusses how legacy systems without SSSD can still access user and group information by querying a compatibility LDAP tree on the FreeIPA server. It concludes by noting that FreeIPA passed over 100 compatibility tests with Windows Server 2012.
Enhancing The Role Of A Large Us Federal Agency As An Intermediary In The Fed...Wen Zhu
The document discusses enhancing the role of a large US federal agency as an intermediary in the federal supply chain through the use of a service registry and a JBI-based ESB. It provides context on the environment and challenges, defines key terms like service registry and JBI, and describes the agency's experience implementing a service registry and how it can better function as a broker of services between providers and consumers in the federal supply chain.
Authentication and authorization to the AWS management console using your on-premise Active Directory isn't all that straightforward, at first. This deck covers the easily adaptable and scalable methodology we created and have been following over the past year, leveraging our existing IdP and adhering to strict conventions.
OpenShift V3 is a container-based platform that uses Docker and Kubernetes to deploy and manage containers. It allows for various deployment types including rolling deployments and blue-green deployments. OpenShift V3 provides a number of services for developing, deploying and managing applications, and integrates with DevOps practices like CI/CD. It utilizes containerization, microservices and a PaaS model to provide a way for organizations to build and run scalable applications.
The document discusses concerns around designing microservices. It begins with an overview of microservices, noting they are an approach to building distributed applications that are independently developed and deployed, contain a single context or responsibility, and communicate simply using technologies like HTTP or message queues. The document then covers advantages like loose coupling, ability to use the right tool for each job, and facilitating continuous delivery. It also discusses challenges like the complexity of distributed systems and potential for services to fail. Finally, it outlines considerations for service design, including having each service represent a single bounded context and authority, being resilient, fast, and efficient.
The OSGi Service Platform in Integrated Management Environments - Cristina Di...mfrancis
Managing OSGi platforms and applications using JMX provides an integrated management environment. JMX can manage OSGi entities that are dynamically mapped and exposed as JMX MBeans. This allows leveraging existing JMX management tools and integrating with network protocols like SNMP. While JMX and OSGi overlap in some areas, they can be seen as complementary for managing OSGi in home gateway environments. Future work includes improving OSGi to JMX mappings and supporting application provisioning and reconfiguration.
Apache Camel Introduction & What's in the boxClaus Ibsen
Slides from JavaBin talk in Grimstad Norway, presented by Claus Ibsen in February 2016.
This slide deck is full up to date with latest Apache Camel 2.16.2 release and includes additional slides to present many of the features that Apache Camel provides out of the box.
The document summarizes Cloud Foundry roadmap highlights for 2016, including upcoming features like the use of Ceph storage and a rearchitected elastic runtime. It also outlines the Cloud Foundry Summit event in September 2016 with over 100 sessions and 63 foundation members. The document details plans to simplify BOSH deployment manifests and allow service brokers to provision on-demand BOSH services.
This presentation explains the new challenges to be resolved with a Microservices Architecture and how the WildFly Swarm container & OpenShift/Kubernetes can address some of the patterns like running a lightweight JavaEE container, discover and load balance the services, inject the configuration of the services.
NTNU Tech Talks : Smartening up a Pi Zero Security Camera with Amazon Web Ser...Mark West
1. The document discusses improving a Raspberry Pi Zero security camera system by addressing issues with false positives from the initial motion detection software.
2. An initial version used Motion software for activity detection and sending alerts but had problems with false alarms from things like moving trees.
3. A second version analyzes images using Amazon Rekognition to identify if a human is present before sending alerts, reducing false positives. It also implements the system using AWS Lambda functions and Step Functions for improved scalability and manageability.
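The false-positive filter in the second version boils down to one decision over the label list Rekognition returns (each label carries a `Name` and a `Confidence`; the sample data and threshold below are hypothetical):

```python
def should_alert(labels, threshold=80.0):
    """Alert only when a 'Person' label is present with high confidence,
    mirroring the second version's false-positive filter."""
    return any(l["Name"] == "Person" and l["Confidence"] >= threshold for l in labels)

# Hypothetical label sets shaped like Rekognition DetectLabels output
swaying_tree = [{"Name": "Tree", "Confidence": 97.1}]
intruder = [{"Name": "Person", "Confidence": 91.2}, {"Name": "Tree", "Confidence": 88.0}]

alert_tree = should_alert(swaying_tree)
alert_person = should_alert(intruder)
```

In the AWS version this predicate would run inside a Lambda function, with Step Functions sequencing image capture, label detection, and notification.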
Check out the talk to the slides:
http://bit.ly/1ReY8uJ
Talk Abstract:
Using Swarm, you can select “just enough app server” to support each of your microservices.
In this session, we’ll outline how WildFly Swarm works and get you started writing your first microservices using Java EE technologies you’re already familiar with.
You’ll learn how to setup your build system (Maven, Gradle, or your IDE of choice) to run and test WildFly Swarm-based services and produce runnable jars. We will walk from the simple case of wrapping a normal WAR application to the more advanced case of configuring the container using your own main(…) method.
WildFly Swarm: Criando Microservices com Java EE 7George Gastaldi
The document discusses creating microservices with Java EE 7. It presents microservices as decoupled, with independent release cycles. It discusses how Java EE can be used to build microservices through WildFly Swarm, which lets you choose only the services you need and produce self-contained jars. It demonstrates how fractions can declare dependencies for subsystems that are not included or are disabled in WildFly.
Apache ActiveMQ, Camel, CXF and ServiceMix OverviewMarcelo Jabali
The document provides an overview of several Apache open source integration and messaging projects: Apache ActiveMQ, Apache Camel, Apache CXF, and Apache ServiceMix. It outlines the agenda and introduces the presenter, Marcelo Jabali of FuseSource. It then proceeds to describe ActiveMQ as a high performance, reliable messaging fabric supporting JMS, C, .NET and other frameworks. It also summarizes the fundamentals of JMS, including its two messaging models: point-to-point and publish/subscribe.
This document introduces PagerDuty Process Automation using Rundeck. It discusses how Rundeck is a service orchestration and automation platform that PagerDuty acquired in 2020. It provides an overview of Rundeck's capabilities including 120+ plugins, event-driven workflows, auditing, and self-service access. The document discusses how Rundeck can be used to automate incident response, remediation, and other tasks to improve MTTR, support efficiency, and reduce manual work. Customer examples show how Rundeck standardizes workflows and allows non-experts to complete tasks previously requiring specialized knowledge.
Credential store using HashiCorp VaultMayank Patel
The document discusses HashiCorp Vault, which is a tool for securely managing secrets and sensitive data. It provides secure credential management and features like dynamic secrets, data encryption, leasing and key rotation, revocation, and audit controls. It integrates with databases, tools, and other systems. The presentation covers common challenges Vault aims to address, use cases, features, and includes a demo.
(HLS401) Architecting for HIPAA Compliance on AWS | AWS re:Invent 2014Amazon Web Services
- The presentation covered several important topics related to architecting systems for HIPAA compliance on AWS, including shared responsibility models, eligible services, configuration requirements, and case studies.
- Automating infrastructure deployment and change management was emphasized as important for maintaining compliance and auditability at scale. Emdeon's use of templates, CI/CD, and immutable infrastructure approaches were highlighted.
- A layered approach to responsibilities was discussed, with AWS and customers each accountable for different aspects. General technical safeguards like encryption are partly AWS responsibilities, while application-specific controls are customer responsibilities.
- Authentication, authorization, auditing and other controls need consideration at both the infrastructure
SQL Server 2017 will be available on Linux, providing customers choice in platforms. It will include the database engine, integration services and support for technologies like in-memory processing and always encrypted. The same SQL Server licenses can be used on Windows or Linux, with previews available free of charge. Early adopters can test SQL Server 2017 on Linux through a special program and provide feedback to Microsoft.
by Brad Dispensa, Sr. Solutions Architect, AWS
Operating a security practice on AWS brings many new challenges that haven't been faced in data center environments. The dynamic nature of infrastructure, the relationship between development team members and their applications, and the architecture paradigms have all changed as a result of building software on top of AWS. In this session we will cover how you can use secure configuration and automation to monitor, audit, and enforce your security policies within an AWS environment. Level 200
WSO2Con EU 2015: Case Study – Digital Transformation: To Monetise Business by...WSO2
WSO2Con EU 2015: Case Study – Digital Transformation: To Monetise Business by Building Elastic API Eco Systems
In a world where Google and Amazon are battling it out for same-day delivery service models, an (r)e-tailer can either get their delivery strategy right or lose customers. This session will present how WSO2 helped one of the largest same-day delivery companies expand and transform their existing delivery business by creating an API platform for the (r)e-tailer. The APIs opened a new channel enabling increased engagement with their customers. They were also able to provide a multi-tenanted capability to each (r)e-tailer, keeping their data separate along with custom security mechanisms tailored to each (r)e-tailer's needs. To top things off, the entire WSO2 platform is hosted in the cloud, enabling easy scaling for spikes during sales seasons, so that you as a customer get your delivery in time for the special occasion.
Presenter:
Ashish Mital
Principal Architect
Aditi Technologies
This document discusses serverless API management on AWS. It begins with an overview of serverless API management and describes a sample timelapse service use case. It then covers the basics of API management on AWS including validation, transformation, throttling, caching, security and monetization. It also discusses DevOps practices for serverless APIs such as CI/CD pipelines and infrastructure as code. Finally, it briefly mentions event-driven "AsyncAPI" management and concludes.
Login information and group memberships (identity) are often centrally managed in enterprises. Many systems use this information to, for example, achieve Single Sign On (SSO) functionality. Surprisingly, access to the WebLogic Server Console and applications is often not centrally managed. I will explain why centralizing management of these identities, in addition to increasing security, quickly starts reducing operational cost and even increases developer productivity. During a demonstration, I will introduce several methods for debugging authentication using an external authentication provider in order to lower the bar to applying this pattern. This technically oriented presentation is especially useful for people working in operations managing WebLogic Servers.
Cloud native applications offer scalability, flexibility, and optimal use of compute resources. Serverless functions interacting through events, leveraging cloud capabilities for persistent storage and automated operations take organization to the next level in IT. This session demonstrates polyglot Functions interacting with native cloud services for events and persistence (Object Storage and NoSQL Database) and leveraging the Key and Secrets Vault, Monitoring and Notifications services for operational control. A lightweight API Gateway is used to expose APIs to external consumers. Infrastructure as Code is the guiding principle in deploying both cloud resources and application components, through OCI CLI and Terraform. This session leverages many cloud native (enabling) services in Oracle Cloud Infrastructure. The session will introduce concepts, then spend most of the time on live demonstrations. All sources are shared with the audience, to allow participants to create the same application in their own cloud tenancy.
What is so great about cloud native applications? How do you create one? I will explain the first and demonstrate the second. On Oracle Cloud Infrastructure, using services that anyone can use for free, I will live-create a cloud native application that streams, persists, notifies, scales, and monitors.
Microservices, DevOps and IoT- Bob FamiliarWithTheBest
This document discusses microservices, DevOps, and IoT. It describes how lean engineering, DevOps, microservices, and cloud platforms form the foundation of modern software methodology and architecture. It provides an overview of key concepts like DevOps, microservices, and how various Azure services can be used to support an IoT solution involving devices, data processing, analytics, and APIs. Diagrams depict components of an example IoT solution architecture including devices, data handling, microservices, security, and DevOps tools.
An overview of how electronic signature objects are generated and used within PDF documents, including an overview of Adobe LiveCycle ES's ability to programmatically work with them server-side.
[AzureCamp 24 Juin 2014] Des services en frontal par Benjamin Guinebertière e...Microsoft Technet France
This document discusses an API management platform and portal that provides tools and services for both developers and administrators. It includes features such as self-registration, subscriptions, documentation, issue tracking, analytics reporting, security controls, caching, throttling, and transformations. The platform uses technologies such as Nginx, Varnish, and Azure API apps to proxy and manage APIs. It encourages attendees of a Microsoft Azure event to sign up for a hands-on session to learn more.
Pragmatic Security Automation for CloudPriyanka Aash
Everything in cloud computing is automated and API-enabled, giving security teams a big opportunity to build and embed security into infrastructures. From continuous guardrails to automated "afterburners" to speed up complex processes, this advanced session leverages the latest software-defined security techniques and shows how to integrate automation. Be prepared for demos, design patterns and a little code.
(Source: RSA Conference USA 2018)
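A continuous guardrail, as described above, is often just a policy predicate run against the API-described state of the environment. A minimal sketch of one such check (the data shape is hypothetical; a real guardrail would read live configuration from the cloud API and trigger automated remediation):

```python
def open_ssh_violations(security_groups):
    """Guardrail check: flag groups exposing SSH (port 22) to 0.0.0.0/0."""
    flagged = []
    for sg in security_groups:
        for rule in sg.get("rules", []):
            if rule.get("port") == 22 and rule.get("cidr") == "0.0.0.0/0":
                flagged.append(sg["id"])
    return flagged

# Hypothetical inventory snapshot
groups = [
    {"id": "sg-web", "rules": [{"port": 443, "cidr": "0.0.0.0/0"}]},
    {"id": "sg-admin", "rules": [{"port": 22, "cidr": "0.0.0.0/0"}]},  # violation
]
violations = open_ssh_violations(groups)
```

Because everything in the cloud is API-enabled, the same predicate can run on a schedule, on every configuration-change event, or as a gate in a deployment pipeline.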
Alabama CyberNow 2018: Cloud Hardening and Digital Forensics ReadinessToni de la Fuente
This document provides an overview of digital forensics and security in the cloud. It discusses common attacks such as access key compromise and misconfigured services. It also outlines an incident response workflow and tools that can be used to acquire evidence from AWS resources like EC2 instances, S3 buckets, and RDS databases. Finally, it discusses hardening strategies like using immutable infrastructure and auditing tools like Prowler to assess security configurations.
HashiCorp Vault configuration as code via HashiCorp Terraform- stories from t...Andrey Devyatkin
- Vault configuration as code via Terraform was discussed, including deployment, authentication, secrets engines, and integration considerations
- Key topics included deploying Vault in AWS using Terraform, configuring LDAP and AWS IAM authentication backends, and using the KV secrets engine for database credentials and temporary AWS credentials
- Challenges with keeping Terraform and Vault in sync were noted, such as state issues when Vault values are added outside of Terraform
WebSSO and Access Management with LemonLDAP::NGClément OUDOT
LemonLDAP::NG is a free and open-source web single sign-on (SSO) project that provides single sign-on and access management functionality using a standard Apache installation. It allows for a single authentication point, dynamic application access lists, and delegation of SSO, and uses LDAP for authentication, authorization, password management, and more. LemonLDAP::NG is compatible with LDAP password policies and can authenticate users against various backends, including LDAP, Kerberos, CAS, and SQL, for centralized access control and management.
Similar to Authentication - Alberto Bellotti - ManageIQ Design Summit 2016
This document summarizes the Sprint 235 review meeting for the ManageIQ project. The meeting covered bug fixes and enhancements to the UI, providers, and platform. Key items discussed included fixing various tests, adding provider details to screens, updating container base images, and removing Gemfile locks from shipped gems. The sprint review wrapped up with questions and confirmation of the next sprint review meeting.
3. Early Days
Authentication implemented within Appliance
Local Database (Default)
MiqLDAP
LDAP Directories (OpenLDAP, RedHat Directory Server, etc.)
Active Directory
Amazon SDK
AWS IAM (Identity and Access Management)
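The local-database option boils down to checking a salted password hash stored on the appliance. A minimal sketch of that check (using stdlib PBKDF2 purely for illustration; the actual appliance implementation differs):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted hash for storage (PBKDF2-HMAC-SHA256 here)."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored):
    """Re-derive the digest and compare in constant time."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored)

# Simulate creating a local user, then authenticating.
salt, stored = hash_password("smartvm")
print(verify_password("smartvm", salt, stored))  # True
print(verify_password("wrong", salt, stored))    # False
```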
4. Technology Shift
Local Authentication Implementations
Limited authentication types
Frequent fixes due to limitations
Longer enhancement implementation times
External Authentication
Industry-proven Apache authentication stack
Wider availability of Authentication modules
Leveraging RHEL Security Services
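The external approach moves authentication out of the application and into the Apache stack. A hypothetical sketch of that pattern, where Apache intercepts the login form POST, authenticates via PAM/SSSD, and hands the identity back in request variables (module names are real Apache modules; the PAM service name and location are assumptions):

```apache
# Apache handles authentication; the app only consumes the result.
LoadModule intercept_form_submit_module modules/mod_intercept_form_submit.so
LoadModule authnz_pam_module modules/mod_authnz_pam.so
LoadModule lookup_identity_module modules/mod_lookup_identity.so

<Location /dashboard/authenticate>
  # Intercept the posted credentials and validate them through PAM.
  InterceptFormPAMService httpd-auth      # PAM service name is an assumption
  InterceptFormLogin user_name
  InterceptFormPassword user_password
  # Resolve the user's groups (via SSSD) into an environment variable.
  LookupUserGroups REMOTE_USER_GROUPS ":"
</Location>
```

With this in place, new authentication types (Kerberos, SAML, two-factor) become a matter of swapping Apache modules rather than changing application code.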
12. Future Enhancements
Adding Support for:
Enhanced Client & Proxy
Enabling REST API authentication using SAML provider
Allowing authentication using SAML credentials in SSUI
Verification with additional SAML providers
Active Directory Federated Services
Script:
Login to KC
Show Realm
Show Client added
Show groups
Show users
Show in Client the links in MIQ
Show in Client the Assertions added
Login to MIQ as admin/smartvm
Change Authentication to Enable SAML
Logout
Show New login screen
Click on Login using corporate system
Login on keycloak using abellotti
Show user in Miq with groups.
Change Authentication to enable SSO
Logout
Talk about being able to log in via admin/, click on Login using corporate system for KC
Talk about going to the page directly (hitting Reload) auto-redirects to KC
Login on keycloak using abellotti
Turn off Enable SSO, but turn on Disable Local Login
Logout
Show that there is no MIQ login screen, just the KC login
Talk about use for environment where admins are centrally managed
Login as miqadmin
Talk about the ability to re-enable local login, e.g. if the IdP is down
Demo Appliance Console
Change setting
Back to Miq, reload page
Logout
Show the Miq login screen
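The assertions added to the Keycloak client carry the user and group attributes that MIQ maps to its own users and groups. A minimal sketch of pulling those fields out of an assertion with Python's stdlib (the assertion below is entirely fabricated for illustration; real Keycloak assertions are signed and carry many more attributes):

```python
import xml.etree.ElementTree as ET

# Fabricated, minimal SAML-style assertion for illustration only.
ASSERTION = """
<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
  <saml:Subject><saml:NameID>abellotti</saml:NameID></saml:Subject>
  <saml:AttributeStatement>
    <saml:Attribute Name="groups">
      <saml:AttributeValue>EvmGroup-super_administrator</saml:AttributeValue>
    </saml:Attribute>
  </saml:AttributeStatement>
</saml:Assertion>
"""

NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}

def extract_identity(xml_text):
    """Pull the subject NameID and group attribute values out of an assertion."""
    root = ET.fromstring(xml_text)
    user = root.findtext("saml:Subject/saml:NameID", namespaces=NS)
    groups = [v.text for v in root.findall(
        "saml:AttributeStatement/saml:Attribute[@Name='groups']"
        "/saml:AttributeValue", NS)]
    return user, groups

user, groups = extract_identity(ASSERTION)
print(user, groups)  # abellotti ['EvmGroup-super_administrator']
```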