Extracting Value from IOT using Azure Cosmos DB, Azure Synapse Analytics and ... | HostedbyConfluent
Due to the explosion of IoT, we have streaming data that needs to be processed in real time. This data needs to be made available to applications as well as to analytics scenarios such as anomaly detection. This workshop presents a solution using Confluent Cloud on Azure, Azure Cosmos DB and Azure Synapse Analytics, which can be connected securely within an Azure VNET using Azure Private Link configured on the Kafka clusters.
Accelerating Innovation with Apache Kafka, Heikki Nousiainen | Heikki Nousiai... | HostedbyConfluent
As a pioneer in the interactive gaming industry, Sony PlayStation has played a vital role in implementing technological advancements, helping bring the global video gaming community together. With the recent launch of the next-generation PS5 console, and partnerships with thousands of game developers and millions of video gamers across the globe, huge volumes of data are inevitably generated on PlayStation servers. This presentation talks about how we leveraged big data technologies along with Apache Kafka to solve some of our real-time data analytics problems. Two important case studies we carried out recently are: "Competitive pricing analysis of game titles across online video game marketplaces" and "Understanding gamer sentiment by streaming data from social feeds and performing NLP".
Along with Apache Kafka, the technologies we used to architect the solution are: REST APIs, ZooKeeper, D3.js visualization, Domo, Python, SQL, NLP, AWS Cloud and JSON.
Exposing and Controlling Kafka Event Streaming with Kong Konnect Enterprise | ... | HostedbyConfluent
Event streaming allows companies to build more scalable and loosely coupled real-time applications supporting massive concurrency demands and simplifying the construction of services.
At the same time, API management provides capabilities to securely control the upstream services consumption, including the event processing infrastructure.
This session shows how Kong Konnect Enterprise can complement Kafka Event Streaming, exposing it to new and external consumers while applying specific and critical policies to control its consumption, including API key, OAuth/OIDC and others for authentication, rate limiting, caching, log processing, etc.
Kubernetes connectivity to Cloud Native Kafka | Evan Shortiss and Hugo Guerre... | HostedbyConfluent
If you want to build an ecosystem of streaming data around your Kafka platform, you will need a much easier way for your developers to quickly move data from the source to your cluster. Better yet, make the connector serverless so it does not waste any resources while idle, and have a trusted partner manage your Kafka infrastructure for you. In this session, we will show you how easy we have made streaming data, with a great user experience and flexible resource management, thanks to our new secret weapon in the Apache Camel project: the Kamelet. We'll also demonstrate how Red Hat OpenShift Streams for Apache Kafka simplifies provisioning Kafka deployments in a public cloud, managing the cluster and topics, and configuring secure access to the Kafka cluster for your developers.
Evolving the Engineering Culture to Manage Kafka as a Service | Kate Agnew, O... | HostedbyConfluent
Embracing open source software for critical platform operations is a tough organizational evolution for a company of any size. This is particularly daunting for technology teams accustomed to a fully supported managed service. Come learn about how we are using OSS to modernize Health Care at UnitedHealth Group as a roadmap to adopt and offer OSS in your own organization!
Over the last three years, Kafka as a Service within UnitedHealth Group has gone from non-existent to being centrally managed and utilized by over 200 internal application teams as an essential component of our ecosystem. In this session, I will share how to tactically implement a Kafka as a Service platform offering within any organization with a very lean team, and how to get broad adoption from engineers and leadership.
I'll discuss the engineering cultural changes needed, both on the DevOps team as well as more broadly, to adopt OSS. Spoiler: Documentation is the key to success. I will talk about some of our "aha" moments, including the importance of internal Terms of Service and how to encourage teams to "Google first." I will include things that haven't worked as well, such as requiring manual review of all topic creation PRs (this doesn't scale!).
Attendees will learn how to both stand up their own OSS offering as well as how to be a good internal consumer of other such offerings. Come ready to learn and laugh about my journey to offering OSS to thousands of people!
Testing Event Driven Architectures: How to Broker the Complexity | Frank Kilc... | HostedbyConfluent
Quality Matters ... and as event-driven architectures (EDA) become increasingly popular in the microservices space, ensuring the delivery and performance of your EDA increases in importance. But while it's a powerful architecture, it does come with its challenges, especially from a testing perspective. For example, most organizations are not reliant on Kafka alone, but on a multitude of interconnected APIs like REST, GraphQL and gRPC. One of the questions that arises from this challenge: how do you build end-to-end tests when the APIs are completely different technologies, without relying on fragile scripts? In our talk, we'll tackle this question and many more when it comes to the testing of Apache Kafka endpoints and your services architecture. We'll cover what makes testing in EDA difficult; technologies that can help you; and how we at SmartBear are thinking about these testing problems and, most importantly, how we are trying to solve them.
5 lessons learned for successful migration to Confluent cloud | Natan Silinit... | HostedbyConfluent
Confluent Cloud makes DevOps engineers' lives a lot easier.
Yet moving 1,500 microservices, 10K topics and 100K partitions to multi-cluster Confluent Cloud can be a challenge.
In this talk, you will hear about 5 lessons that Wix learned in order to successfully meet this challenge.
These lessons include:
1. Automation, automation, automation - the entire process has to be completely automated at such a scale
2. Prefer a gradual approach - e.g. migrate topics in small chunks, not all at once; this reduces risk if things go wrong
3. Clean up first - avoid migrating unused topics or topics with too many unnecessary partitions
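Lesson 2 above can be sketched in a few lines. The topic names, chunk size, and `migrate_topic` helper here are hypothetical stand-ins for illustration, not Wix's actual tooling:

```python
# Sketch of lesson 2: migrate topics in small chunks rather than all at once,
# so a failure affects only one small batch. All names here are illustrative.

def chunks(items, size):
    """Yield successive fixed-size chunks from a list."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def migrate_topic(topic):
    # Placeholder for the real migration work (mirror data, switch clients).
    print(f"migrated {topic}")
    return True

def migrate_in_chunks(topics, chunk_size=100):
    """Migrate topics batch by batch, stopping early on the first failed batch."""
    migrated = []
    for batch in chunks(topics, chunk_size):
        if not all(migrate_topic(t) for t in batch):
            break  # stop before touching the next batch
        migrated.extend(batch)
    return migrated
```

Stopping at the first failed batch is what makes the gradual approach pay off: the blast radius is one chunk, not the whole estate.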
Building Scalable Real-Time Data Pipelines with the Couchbase Kafka Connector... | HostedbyConfluent
Many organizations use Apache Kafka to facilitate the flow of data between multiple applications or data sources. Thanks to Kafka’s distributed architecture, it is easy to set up a scalable and reliable broker, but doing the same with producers or consumers is quite often a fine art. This session provides a quick overview of Couchbase, describes the Couchbase Kafka Connector, and showcases a demo of how it can be used as both a source and a sink for building real-time data processing pipelines for mission-critical applications.
Low-latency real-time data processing at giga-scale with Kafka | John DesJard... | HostedbyConfluent
Data volumes continue to grow, demanding new, more scalable solutions for low-latency data processing. Previously, the default approach to deploying such systems was to throw a ton of hardware at the problem. That is no longer necessary, as newer technologies showcase a level of efficiency that enables smaller, more manageable clusters while handling extreme workloads. Processing billions of events per second on Kafka can now be done with a modest investment in compute resources. In this session, you will learn how to architect and build the fastest data processing applications that scale linearly, and combine streaming data and reference data (data in motion and data at rest) with machine learning. We will take you through the end-to-end framework and an example application, built on the Hazelcast Platform, an open source software engine designed for ultra-fast performance. We will also show how you can leverage SQL to further explore the operational data in the solution, including querying Kafka topics and key-value data in the in-memory data store. Attendees will also get access to the GitHub sample application shown.
Creating a Kafka Topic. Super easy? | Andrew Stevenson and Marios Andreopoulo... | HostedbyConfluent
Making developers productive on Kafka requires giving self-service access. But even something as seemingly straightforward as Topic creation is not so easy, and in some cases can lead to catastrophe.
In this talk, we’ll share and demonstrate different approaches for developers to safely create Kafka Topics whilst sharing a few war stories of what can go wrong along the way.
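As a rough illustration of the kind of guardrails self-service tooling can apply, here is a minimal pre-flight validator. The naming convention and limits are assumptions made for the sketch, not the speakers' actual policy:

```python
# Illustrative pre-flight checks a self-service layer might run before
# creating a Kafka topic. The naming rule and limits are assumptions.
import re

NAME_PATTERN = re.compile(r"^[a-z0-9]+(\.[a-z0-9-]+)+$")  # e.g. team.domain.event
MAX_PARTITIONS = 48

def validate_topic_request(name, partitions, replication_factor, broker_count):
    """Return a list of policy violations; an empty list means the request is OK."""
    errors = []
    if not NAME_PATTERN.match(name):
        errors.append(f"name '{name}' does not match <team>.<domain>.<event>")
    if not 1 <= partitions <= MAX_PARTITIONS:
        errors.append(f"partitions must be between 1 and {MAX_PARTITIONS}")
    if replication_factor > broker_count:
        errors.append("replication factor cannot exceed broker count")
    if replication_factor < 2:
        errors.append("replication factor below 2 risks data loss")
    return errors
```

Checks like these catch the common catastrophes (a runaway partition count, a replication factor the cluster cannot honor) before the topic ever exists, without a human reviewing each request.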
Kafka Summit SF 2017 - Providing Reliability Guarantees in Kafka at One Trill... | confluent
In this presentation, I will talk about my firsthand experience dealing with the unique challenges of running Kafka at a massive scale. If you ever thought that running Kafka is difficult, this talk may change your mind and provide you with valuable insights into how to configure a Kafka cluster efficiently, how to manage Kafka for enterprise customers, and how to measure, monitor and maintain the quality of the Kafka service. Our production Kafka cluster runs on over 1,500 VMs and serves over 10 GBps of data spread across hundreds of topics for multiple teams across Microsoft. We built a self-serve Kafka management service to make the process manageable and scalable across many teams. In this talk, I will also share insights about running Kafka in private vs. multi-tenant mode, supporting failover and disaster recovery requirements, and how to make Kafka compliant with regulatory certifications such as ISO, SOC, FedRAMP, etc.
Presented by Nitin Kumar, Microsoft
Systems Track
Developing custom transformation in the Kafka connect to minimize data redund... | HostedbyConfluent
Compacted topics grow over time and often sit on high-performance, low-latency and relatively expensive storage. Reducing duplicated data plays a critical role in the size of compacted topics: with less data on the topics, the Kafka cluster consumes less disk space, which in turn leads to lower operating cost.
In this use-case-driven talk, we are going to demonstrate how our team at UnitedHealth Group leveraged existing transformers to extract data from the message metadata in the topic, as well as how we developed our own customized transformers to minimize the amount of duplicated data in each message in the topic.
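The deduplication idea can be sketched outside Kafka Connect. This is a hypothetical illustration in Python (real Connect transformers are written in Java against the SMT API), dropping any record whose value is identical to the last value emitted for the same key:

```python
# Illustrative sketch (not UnitedHealth Group's actual transformer) of one way
# to cut duplicated data before it reaches a compacted topic: drop a record
# whose value is identical to the last value emitted for the same key.
import hashlib
import json

def value_fingerprint(value):
    """Stable hash of a JSON-serializable record value."""
    canonical = json.dumps(value, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

class DedupeTransform:
    def __init__(self):
        self._last_seen = {}  # key -> fingerprint of the last emitted value

    def apply(self, key, value):
        """Return the record if its value is new for this key, else None (drop)."""
        fp = value_fingerprint(value)
        if self._last_seen.get(key) == fp:
            return None
        self._last_seen[key] = fp
        return (key, value)
```

Because compaction already keeps only the latest record per key, dropping byte-identical repeats upstream shrinks the topic without changing what a consumer rebuilding state would see.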
0-330km/h: Porsche's Data Streaming Journey | Sridhar Mamella, Porsche | HostedbyConfluent
The auto industry is in the midst of a data revolution that is transforming how companies do business. Once a scarce resource, data has now become abundant and cheap. What are the new technologies that change the way we produce, collect, process, store, and analyze data? What new streams of data are being created? With Industry 4.0 and the Internet of Things on the horizon, is there significant value in taking a strategic approach to Fast Data? How is Porsche building its next-level Data Streaming Platform with open source technologies, and how are we using CI/CD pipelines, among other practices, to serve our use cases?
How to Discover, Visualize, Catalog, Share and Reuse your Kafka Streams (Jona... | HostedbyConfluent
As Kafka deployments grow within your organization, so do the challenges around lifecycle management. For instance, do you really know what streams exist, who is producing and consuming them? What is the effect of upstream changes? How is this information kept up to date, so it is relevant and consistent to others looking to reuse these streams? Ever wish you had a way to view and visualize graphically the relationships between schemas, topics and applications? In this talk we will show you how to do that and get more value from your Kafka Streaming infrastructure using an event portal. It’s like an API portal but specialized for event streams and publish/subscribe patterns. Join us to see how you can automatically discover event streams from your Kafka clusters, import them to a catalog and then leverage code gen capabilities to ease development of new applications.
Serverless Architectures with AWS Lambda and MongoDB Atlas by Sig Narvaez | Data Con LA
Abstract: It's easier than ever to power serverless architectures with managed database services like MongoDB Atlas. In this session, we will explore the rise of serverless architectures and how they've rapidly integrated into public and private cloud offerings. We will demonstrate how to build a simple REST API using AWS Lambda functions, create a highly available cluster in MongoDB Atlas, and connect both via VPC Peering. We will then simulate load, use the monitoring and scaling features of MongoDB Atlas, and use MongoDB Compass to browse our database.
AWS re:Invent 2016: Fireside chat with Groupon, Intuit, and LifeLock on solvi... | Amazon Web Services
Redis Labs' CMO is hosting a fireside chat with leaders from multiple industries including Groupon (e-commerce), Intuit (finance), and LifeLock (identity protection). This conversation-style session will cover the big data challenges faced by these leading companies as they scale their applications, ensure high availability, serve the best user experience at the lowest latencies, and optimize between cloud and on-premises operations. The introductory-level session will appeal to both developer and DevOps functions. They will hear about diverse use cases such as recommendation engines, hybrid transaction and analytics operations, and time-series data analysis. The audience will learn how the Redis in-memory database platform addresses the above use cases with its multi-model capability and in a cost-effective manner to meet the needs of next-generation applications. Session sponsored by Redis Labs.
DataOps Automation for a Kafka Streaming Platform (Andrew Stevenson + Spiros ... | HostedbyConfluent
DataOps challenges us to build data experiences in a repeatable way. For those with Kafka, this means finding a means of deploying flows in an automated and consistent fashion.
The challenge is to make the deployment of Kafka flows consistent across different technologies and systems: the topics, the schemas, the monitoring rules, the credentials, the connectors, the stream processing apps. And ideally not coupled to a particular infrastructure stack.
In this talk we will discuss the different approaches and benefits/disadvantages to automating the deployment of Kafka flows including Git operators and Kubernetes operators. We will walk through and demo deploying a flow on AWS EKS with MSK and Kafka Connect using GitOps practices: including a stream processing application, S3 connector with credentials held in AWS Secrets Manager.
Server Sent Events using Reactive Kafka and Spring WebFlux | Gagan Solur Ven... | HostedbyConfluent
Server-Sent Events (SSE) is a server push technology where clients receive automatic server updates through a secure HTTP connection. SSE can be used in apps such as live stock updates that need one-way data communication, and it also helps replace long polling by maintaining a single connection and keeping a continuous event stream going through it. We used a simple Kafka producer to publish messages onto Kafka topics, and developed a reactive Kafka consumer by leveraging Spring WebFlux to read data from a Kafka topic in a non-blocking manner and send it to clients registered with the Kafka consumer without closing any HTTP connections. This implementation allows us to send data in a fully asynchronous, non-blocking manner and to handle a massive number of concurrent connections. We'll cover:
• Push data to external or internal apps in near real time
• Push data onto files and securely copy them to any cloud service
• Handle multiple third-party app integrations
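The SSE mechanism described above ultimately boils down to a simple text framing sent over an open HTTP response. A minimal sketch of that framing, independent of the Kafka/WebFlux stack used in the talk:

```python
# Minimal sketch of the Server-Sent Events wire format. A real deployment
# would stream these frames over a held-open HTTP response (e.g. fed by a
# reactive Kafka consumer); the framing itself is just text.

def sse_frame(data, event=None, event_id=None):
    """Format one SSE frame: optional id/event fields, a data field, then a blank line."""
    lines = []
    if event_id is not None:
        lines.append(f"id: {event_id}")
    if event is not None:
        lines.append(f"event: {event}")
    lines.append(f"data: {data}")
    return "\n".join(lines) + "\n\n"
```

The trailing blank line is what delimits events on the wire; clients that reconnect can send the last `id` they saw so the server can resume from there.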
Nordstrom's Event-Sourced Architecture and Kafka-as-a-Service | Adam Weyant a... | HostedbyConfluent
As a 120-year-old company, Nordstrom was facing numerous challenges as a result of an aging service-oriented architecture. Developers needing to implement reporting for analytics separately from core functionality resulted in questionable data quality for analytical purposes. Scaling dependent services in harmony so they do not overwhelm each other was a struggle faced by many, if not most, teams. Several years into a company-wide transition to an event-sourced architecture, Nordstrom has solved these and various other problems. By leveraging the capabilities of Apache Kafka and Confluent, combined with a deep organizational focus on well-defined business event schemas, a single event can be used for analytical, functional, operational, and model-building purposes. This session will describe this architecture and the lessons learned while building it, with a focus on the internally built, multi-tenant, multi-cluster Kafka-as-a-Service platform that enables it.
Cloud-Based Event Stream Processing Architectures and Patterns with Apache Ka... | HostedbyConfluent
The Apache Kafka ecosystem is very rich with components and pieces that make for designing and implementing secure, efficient, fault-tolerant and scalable event stream processing (ESP) systems. Using real-world examples, this talk covers why Apache Kafka is an excellent choice for cloud-native and hybrid architectures, how to go about designing, implementing and maintaining ESP systems, best practices and patterns for migrating to the cloud or hybrid configurations, when to go with PaaS or IaaS, what options are available for running Kafka in cloud or hybrid environments and what you need to build and maintain successful ESP systems that are secure, performant, reliable, highly-available and scalable.
Kafka & InfluxDB: BFFs for Enterprise Data Applications | Russ Savage, Influx... | HostedbyConfluent
Modern data processing applications built on Kafka and InfluxDB deliver the performance, reliability, and flexibility that customers need for robust real-time data pipeline solutions. As the saying goes, the pipeline is greater than the sum of its Kafka and InfluxDB parts. In this session, Russ Savage, Director of Product Management at InfluxData will discuss basic concepts of integrating Kafka and InfluxDB while highlighting how companies are creating fault-tolerant, scalable and fast data pipelines with the power of InfluxDB and Kafka.
Kafka Summit SF 2017 - Worldwide Scalable and Resilient Messaging Services wi... | confluent
ChatWork is a worldwide communication service with more than 110k customer organizations. In 2016, we developed a new scalable infrastructure based on the ideas of CQRS and Event Sourcing, using Kafka and Kafka Streams combined with Akka and HBase. In this session, we talk about the concepts behind this architecture and lessons learned from production use cases.
Presented at Kafka Summit SF 2017 by Masaru Dobashi and Shingo Omura
Event-driven Applications with Kafka, Micronaut, and AWS Lambda | Dave Klein,... | HostedbyConfluent
One of the great things about running applications in the cloud is that you only pay for the resources that you use. But that also makes it more important than ever for our applications to be resource-efficient. This becomes even more critical when we use serverless functions.
Micronaut is an application framework that provides dependency injection, developer productivity features, and excellent support for Apache Kafka. By performing dependency injection, AOP, and other productivity-enhancing magic at compile time, Micronaut allows us to build smaller, more efficient microservices and serverless functions.
In this session, we'll explore the ways that Apache Kafka and Micronaut work together to enable us to build fast, efficient, event-driven applications. Then we'll see it in action, using the AWS Lambda Sink Connector for Confluent Cloud.
Data in Motion: Building Stream-Based Architectures with Qlik Replicate & Kaf... | HostedbyConfluent
The challenge with today’s “data explosion” is finding the most appropriate answer to the question, “So where do I put my data?” while avoiding the longer-term problem: data warehouses, data lakes, cloud storage, NoSQL databases, … are often the places where “big” data goes to die.
Enter Physics 101, and my corollary to Newton’s First Law of Motion:
Data in motion tends to stay in motion until it comes to rest on disk. Similarly, if data is at rest, it will remain at rest until an external "force" puts it in motion again.
Data inevitably comes to rest at some point. Without “external forces”, data often gets lost or becomes stale where it lands. “Modern” architectures tend to involve data pipelines where downstream consumers of data make use of data generated upstream, often with built-for-purpose repositories at each stage. This session will explore how data that has come to rest can be put in motion again; how Kafka can keep it in motion longer; and how pipelined architectures might be created to make use of that data.
Building Scalable Real-Time Data Pipelines with the Couchbase Kafka Connector...HostedbyConfluent
Many organizations use Apache Kafka to facilitate the flow of data between multiple applications or data sources. Thanks to Kafka’s distributed architecture, it is easy to set up a scalable and reliable broker, but doing the same with producers or consumers is quite often a fine art. This session provides a quick overview of Couchbase, describes the Couchbase Kafka Connector, and showcases a demo of how it can be used as both a source and a sink for building real-time data processing pipelines for mission-critical applications.
Low-latency real-time data processing at giga-scale with Kafka | John DesJard...HostedbyConfluent
Data volumes continue to grow, demanding new, more scalable solutions for low-latency data processing. Previously, the default approach to deploying such systems was to throw a ton of hardware at the problem. However, that is no longer necessary, as newer technologies showcase a level of efficiency that enables smaller, more manageable clusters while handling extreme workloads. Processing billions of events per second on Kafka can now be done with a modest investment in compute resources. In this session, you will learn how to architect and build the fastest data processing applications that scale linearly, and combine streaming data and reference data data-in-motion and data-at-rest with machine learning. We will take you through the end-to-end framework and example application, built on the Hazelcast Platform, an open source software engine designed for ultra-fast performance. We will also show how you can leverage SQL to further explore the operational data in the solution including querying Kafka topics and key-value data on the in-memory data store. Attendees will also get access to the Github sample application shown.
Creating a Kafka Topic. Super easy? | Andrew Stevenson and Marios Andreopoulo...HostedbyConfluent
Making developers productive on Kafka requires giving self-service access. But even something as seemingly straightforward as Topic creation is not so easy, and in some cases can lead to catastrophe.
In this talk, we’ll share and demonstrate different approaches for developers to safely create Kafka Topics whilst sharing a few war stories of what can go wrong along the way.
Kafka Summit SF 2017 - Providing Reliability Guarantees in Kafka at One Trill...confluent
In this presentation, I will talk about my firsthand experience dealing with the unique challenges of running Kafka at a massive scale. If you ever thought that running Kafka is difficult, this talk may change your mind and provide you with valuable insights into how to configure a Kafka cluster efficiently, how to manage Kafka for enterprise customers and how to measure, monitor and maintain the Quality of Kafka Service. Our production Kafka cluster runs over 1500+ VMs, and serves over 10 GBPS data spread across hundreds of topics for multiple teams across Microsoft. We built a self-serve Kafka management service to make the process manageable and scalable across many teams. In this talk, I will also share insights about running Kafka in Private vs multi-tenant mode, supporting failover and disaster recovery requirements, and how to make Kafka Compliant with regulatory certifications such as ISO, SOC, FEDRAMP, etc.
Presented by Nitin Kumar, Microsoft
Systems Track
Developing custom transformation in the Kafka connect to minimize data redund...HostedbyConfluent
Compacted topics grow over time and are often utilizing high performance, low latency and relatively expensive storage solutions. Reducing duplicated data plays a critical role in the size of compacted topics. with less data on the topics, the Kafka cluster consumes less disk space which in turn it leads to lower operation cost.
in this use case-driven talk, we are going to demonstrate how our team at UnitedHealth Group leveraged existing transformers to extract data from the message metadata in the topic as well as how we developed our customized transformers to minimize the amount of duplicated data in each message in the topic.
0-330km/h: Porsche's Data Streaming Journey | Sridhar Mamella, PorscheHostedbyConfluent
The auto industry is in the midst of a data revolution that is transforming how companies do business. Once a scarce resource, data has become abundant and cheap. What new technologies are changing the way we produce, collect, process, store, and analyze data? What new streams of data are being created with Industry 4.0 and the Internet of Things on the horizon, and is there significant value in taking a strategic approach to Fast Data? How is Porsche building its next-level data streaming platform with open source technologies, and how are we using CI/CD pipelines, among other tools, to serve our use cases?
How to Discover, Visualize, Catalog, Share and Reuse your Kafka Streams (Jona...HostedbyConfluent
As Kafka deployments grow within your organization, so do the challenges around lifecycle management. For instance, do you really know what streams exist, who is producing and consuming them? What is the effect of upstream changes? How is this information kept up to date, so it is relevant and consistent to others looking to reuse these streams? Ever wish you had a way to view and visualize graphically the relationships between schemas, topics and applications? In this talk we will show you how to do that and get more value from your Kafka Streaming infrastructure using an event portal. It’s like an API portal but specialized for event streams and publish/subscribe patterns. Join us to see how you can automatically discover event streams from your Kafka clusters, import them to a catalog and then leverage code gen capabilities to ease development of new applications.
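The core of such a catalog is a queryable graph of apps, topics, and schemas. A minimal illustration — the `bindings` records and the `impacted_consumers` helper below are hypothetical; a real event portal would discover this metadata from the clusters automatically:

```python
# Hypothetical bindings describing who produces/consumes which topic
# under which schema; an event portal would harvest these by scanning
# the clusters and schema registry rather than hand-maintaining them.
bindings = [
    {"app": "checkout",  "role": "producer", "topic": "orders", "schema": "Order-v2"},
    {"app": "billing",   "role": "consumer", "topic": "orders", "schema": "Order-v2"},
    {"app": "analytics", "role": "consumer", "topic": "orders", "schema": "Order-v2"},
]

def impacted_consumers(bindings, topic):
    """Answer the 'effect of upstream changes' question: which apps
    consume this topic and would be affected by a schema change?"""
    return sorted(b["app"] for b in bindings
                  if b["topic"] == topic and b["role"] == "consumer")
```

Once the relationships are captured this way, visualization and code generation are straightforward walks over the same data.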
Serverless Architectures with AWS Lambda and MongoDB Atlas by Sig NarvaezData Con LA
Abstract: It's easier than ever to power serverless architectures with managed database services like MongoDB Atlas. In this session, we will explore the rise of serverless architectures and how they've rapidly integrated into public and private cloud offerings. We will demonstrate how to build a simple REST API using AWS Lambda functions, create a highly available cluster in MongoDB Atlas, and connect both via VPC Peering. We will then simulate load, use the monitoring and scale features of MongoDB Atlas, and use MongoDB Compass to browse our database.
AWS re:Invent 2016: Fireside chat with Groupon, Intuit, and LifeLock on solvi...Amazon Web Services
Redis Labs' CMO is hosting a fireside chat with leaders from multiple industries including Groupon (e-commerce ), Intuit (Finance ), and LifeLock (Identity Protection ). This conversation-style session will cover the Big Data related challenges faced by these leading companies as they scale their applications, ensure high availability, serve the best user experience at lowest latencies, and optimize between cloud and on-premises operations. The introductory level session will appeal to both developer and DevOps functions. They will hear about diverse use cases such as recommendations engine, hybrid transactions and analytics operations, and time-series data analysis. The audience will learn how the Redis in-memory database platform addresses the above use cases with its multi-model capability and in a cost effective manner to meet the needs of the next generation applications. Session sponsored by Redis Labs.
DataOps Automation for a Kafka Streaming Platform (Andrew Stevenson + Spiros ...HostedbyConfluent
DataOps challenges us to build data experiences in a repeatable way. For those with Kafka, this means finding a means of deploying flows in an automated and consistent fashion.
The challenge is to make the deployment of Kafka flows consistent across different technologies and systems: the topics, the schemas, the monitoring rules, the credentials, the connectors, the stream processing apps. And ideally not coupled to a particular infrastructure stack.
In this talk we will discuss the different approaches and benefits/disadvantages to automating the deployment of Kafka flows including Git operators and Kubernetes operators. We will walk through and demo deploying a flow on AWS EKS with MSK and Kafka Connect using GitOps practices: including a stream processing application, S3 connector with credentials held in AWS Secrets Manager.
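At the heart of both Git operators and Kubernetes operators is a reconcile loop: diff the declared state against the cluster and act on the difference. A minimal sketch of that loop for topics (the topic names and configs here are made up):

```python
def reconcile(desired, actual):
    """Diff the declared (Git) topic state against the cluster state
    and emit a plan -- the core loop of a GitOps/Kubernetes operator."""
    to_create = sorted(set(desired) - set(actual))
    to_delete = sorted(set(actual) - set(desired))
    to_update = sorted(
        t for t in set(desired) & set(actual) if desired[t] != actual[t]
    )
    return {"create": to_create, "update": to_update, "delete": to_delete}

# Hypothetical states: Git declares 'orders' (resized) and 'payments';
# the cluster still has old 'orders' plus an undeclared 'legacy' topic.
desired = {"orders": {"partitions": 12}, "payments": {"partitions": 6}}
actual  = {"orders": {"partitions": 6},  "legacy":   {"partitions": 1}}
```

The same diff-and-apply shape extends to schemas, connectors, credentials, and monitoring rules, which is what makes the approach portable across infrastructure stacks.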
Server Sent Events using Reactive Kafka and Spring Web flux | Gagan Solur Ven...HostedbyConfluent
Server-Sent Events (SSE) is a server push technology where clients receive automatic server updates over a secure HTTP connection. SSE suits apps, such as live stock tickers, that use one-way data communication, and it helps replace long polling by maintaining a single connection with a continuous event stream over it. We used a simple Kafka producer to publish messages onto Kafka topics, and developed a reactive Kafka consumer by leveraging Spring WebFlux to read data from the Kafka topic in a non-blocking manner and send it to clients registered with the consumer without closing any HTTP connections. This implementation lets us send data in a fully asynchronous, non-blocking manner and handle a massive number of concurrent connections. We’ll cover:
•Push data to external or internal apps in near real time
•Push data onto the files and securely copy them to any cloud services
•Handle multiple third-party apps integrations
Nordstrom's Event-Sourced Architecture and Kafka-as-a-Service | Adam Weyant a...HostedbyConfluent
As a 120-year-old company, Nordstrom was facing numerous challenges as a result of an aging, service-oriented architecture. Developers needing to implement reporting for analytics separately from core functionality resulted in questionable data quality for analytical purposes. Scaling dependent services in harmony so as not to overwhelm each other was a struggle faced by many, if not most, teams. Several years into a company-wide transition to an event-sourced architecture, Nordstrom has solved these and various other problems. By leveraging the capabilities of Apache Kafka and Confluent, combined with a deep organizational focus on well-defined business event schemas, a single event can be used for analytical, functional, operational, and model-building purposes. This session will describe this architecture and the lessons learned while building it, with a focus on the internally built, multi-tenant, multi-cluster Kafka-as-a-Service platform that enables it.
Cloud-Based Event Stream Processing Architectures and Patterns with Apache Ka...HostedbyConfluent
The Apache Kafka ecosystem is very rich with components and pieces that make for designing and implementing secure, efficient, fault-tolerant and scalable event stream processing (ESP) systems. Using real-world examples, this talk covers why Apache Kafka is an excellent choice for cloud-native and hybrid architectures, how to go about designing, implementing and maintaining ESP systems, best practices and patterns for migrating to the cloud or hybrid configurations, when to go with PaaS or IaaS, what options are available for running Kafka in cloud or hybrid environments and what you need to build and maintain successful ESP systems that are secure, performant, reliable, highly-available and scalable.
Kafka & InfluxDB: BFFs for Enterprise Data Applications | Russ Savage, Influx...HostedbyConfluent
Modern data processing applications built on Kafka and InfluxDB deliver the performance, reliability, and flexibility that customers need for robust real-time data pipeline solutions. As the saying goes, the pipeline is greater than the sum of its Kafka and InfluxDB parts. In this session, Russ Savage, Director of Product Management at InfluxData will discuss basic concepts of integrating Kafka and InfluxDB while highlighting how companies are creating fault-tolerant, scalable and fast data pipelines with the power of InfluxDB and Kafka.
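A toy illustration of the Kafka-to-InfluxDB handoff: rendering a consumed record as InfluxDB line protocol. This is a simplified sketch — it omits the escaping rules and integer type suffixes of the full protocol:

```python
def to_line_protocol(measurement, tags, fields, timestamp_ns):
    """Render a record as simplified InfluxDB line protocol:
    measurement,tag=v field=v <timestamp>.
    (Real line protocol also escapes special characters and
    suffixes integer fields with 'i'.)"""
    tag_part = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_part = ",".join(
        f'{k}="{v}"' if isinstance(v, str) else f"{k}={v}"
        for k, v in sorted(fields.items())
    )
    return f"{measurement},{tag_part} {field_part} {timestamp_ns}"
```

In practice this conversion typically runs inside a sink connector or Telegraf's Kafka consumer input, so consumers never hand-roll it, but the mapping from record to time-series point is the essence of the pipeline.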
Kafka Summit SF 2017 - Worldwide Scalable and Resilient Messaging Services wi...confluent
ChatWork is a worldwide communication service serving more than 110,000 customer organizations. In 2016, we developed a new scalable infrastructure based on the ideas of CQRS and Event Sourcing, using Kafka and Kafka Streams combined with Akka and HBase. In this session, we talk about the concepts behind this architecture and lessons learned from production use cases.
Presented at Kafka Summit SF 2017 by Masaru Dobashi and Shingo Omura
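The event-sourcing core of such an architecture — current state as a left fold over an immutable event log — can be sketched in a few lines. The event types below are invented for illustration, not taken from ChatWork's system:

```python
def apply(state, event):
    """Fold one domain event into the current aggregate state."""
    kind = event["type"]
    if kind == "MessagePosted":
        return {**state, "messages": state["messages"] + 1}
    if kind == "MemberJoined":
        return {**state, "members": state["members"] + 1}
    return state  # unknown events are ignored

def replay(events):
    """Event sourcing in miniature: rebuild state by replaying the
    event log (e.g. a Kafka topic) from the beginning."""
    state = {"messages": 0, "members": 0}
    for event in events:
        state = apply(state, event)
    return state
```

Because state is derived rather than stored as the source of truth, the same log can feed the write-side aggregates and independently scaled read models, which is what CQRS buys.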
Event-driven Applications with Kafka, Micronaut, and AWS Lambda | Dave Klein,...HostedbyConfluent
One of the great things about running applications in the cloud is that you only pay for the resources that you use. But that also makes it more important than ever for our applications to be resource-efficient. This becomes even more critical when we use serverless functions.
Micronaut is an application framework that provides dependency injection, developer productivity features, and excellent support for Apache Kafka. By performing dependency injection, AOP, and other productivity-enhancing magic at compile time, Micronaut allows us to build smaller, more efficient microservices and serverless functions.
In this session, we'll explore the ways that Apache Kafka and Micronaut work together to enable us to build fast, efficient, event-driven applications. Then we'll see it in action, using the AWS Lambda Sink Connector for Confluent Cloud.
Data in Motion: Building Stream-Based Architectures with Qlik Replicate & Kaf...HostedbyConfluent
The challenge with today’s “data explosion” is finding the most appropriate answer to the question, “So where do I put my data?” while avoiding the longer-term problem: data warehouses, data lakes, cloud storage, NoSQL databases, … are often the places where “big” data goes to die.
Enter Physics 101, and my corollary to Newton’s First Law of Motion:
Data in motion tends to stay in motion until it comes to rest on disk. Similarly, if data is at rest, it will remain at rest until an external “force” puts it in motion again.
Data inevitably comes to rest at some point. Without “external forces”, data often gets lost or becomes stale where it lands. “Modern” architectures tend to involve data pipelines where downstream consumers of data make use of data generated upstream, often with built-for-purpose repositories at each stage. This session will explore how data that has come to rest can be put in motion again; how Kafka can keep it in motion longer; and how pipelined architectures might be created to make use of that data.
Pitch Deck Educash - Equity Crowdfunding - Public Offering of CICS C
Pitch deck with information about the investment offering in Educar 3.0, the company that developed the educational game Educash, a financial-literacy learning tool that uses gamification to hold the player's attention. Offering details available at www.startmeup.com.br
About Tekmonks. Products and services we offer.
Founded to address the disparity of software access between small businesses and conglomerates, TekMonks’ vision has been to provide superior software to any size organization. Our focus and dedication (and love of AI) has since been poured into developing software in every category from enterprise integration, robotic process automation, AIOPs, unparalleled cybersecurity & even a plug and play chatbot. This unwavering passion for development has led to the creation of the amazing products of tomorrow, today.
Connect with the Pioneer in the IT Staffing industry - Avantha Business Solut...avanthabsl
It's important to build a workforce staffing strategy so that you hire the right people with the right skill sets. Facing problems hiring exceptional IT talent?
With cloud initiatives becoming mandates, many application services and systems are moving to the cloud. In this session, we will learn how Integration Cloud helps us integrate services with cloud-based applications without writing any code: simply connect and configure your integrations right in the graphical browser UI.
View to understand the cloud strategy for building integrations and see interesting demos on
- Hybrid Integration
- Cloud-to-cloud integration
www.kelltontech.com
Emtec hosted this webinar, which provides insight into why organizations currently using Hyperion Enterprise need to start assessing a migration to Hyperion Financial Management (HFM). This presentation will help you understand why it is important to consider migrating and the value of moving to Hyperion Financial Management.
Marlabs Capabilities Overview: Microsoft Dynamics Marlabs
Marlabs has extensive customization and implementation expertise with Dynamics CRM. Our services include Microsoft Dynamics CRM evaluation, implementation, development and migration.
Similar to Optimized Solutions - Corporate Overview
1. Strategic Sourcing | IT Optimization | Innovation | Business Analytics | Technology Adoption | Business Transformation | IT Innovation | Solutions & Service Outsourcing
Innovation. Partnership. Value… Multiplied
2. Highlights
• Founded in 1997
• Global presence
• Strong relationships with Government Entities
• Centres of Excellence – Enterprise Solutions
• Strategic growth through Organic and Inorganic Channels
3. Journey
Create Vision, Build Scale, Grow Rapidly
• 1980 – Hyderabad Power Electrical Engineering Corporation
• 1994 – IT Journey begins (Cybertech)
• 1997 – Established Cyberworld Solutions
• 1999 – CWS Global presence
• 2007 – Cyberworld / Prosoft Merger
• 2010 – Established Optimized Solutions
Milestone details from the timeline:
• Pubsec customers; 300 Engineers; Chicago based; IT Services; Consulting Expertise
• 10 Employees; Contingent Staffing; SAP ERP; $1.5 Mil
• Cyberworld – 40 Employees; SAP ERP; $4 Mil
• BI, Engineering services; ANZ, APAC, MEA
• CWS – 200 Emp, $14 Mil; Prosoft – 100 Emp, $17 Mil; USA, Middle East, India
5. Business Drivers
Vision: To be a globally respected strategic partner for the customer and a strong employer fostering innovation, creativity and passion for success, leveraging technology delivered by the best expertise.
Mission: To be a full-service provider driving success for our customers, caring for our employees and serving the community we operate in.
Values:
• Innovative Solutions
• Intensity to Succeed
• Building Trust
• Integrity in all we do
7. Value Proposition
Flexible Relationship Model:
• Partnership approach
• Responsiveness
• Transparency
Business Advantage:
• Business responsiveness
• On-site support worldwide
• Alliance partnerships with leading technology providers
• Optimal cost at committed quality
A Global Service Delivery Model:
• Global talent with local presence
• Efficiency-oriented delivery framework
8. Services
Professional Services:
• IT Consulting
• Transition & Transformation Services
• Comprehensive End-to-End IT Staffing Services
• Onsite, Near-Shore or Off-Shore capabilities
• Resources selected based on the client’s industry, technology knowledge and the customization level needed for the project
Infrastructure Management Services:
• Network Service Management
• Proactive monitoring of corporate networks
• Online monitoring of servers and remote troubleshooting
• Incident handling and reporting system
• Problem escalation and avoidance by predictive analytics
• Bandwidth management
• Network security services
• Apache Spark based predictive analytical systems
Enterprise Solutions & Services:
• Business Enterprise Solutions
• Cloud Implementation & Integration
• Business Intelligence & Analytics
• Enterprise UX & Mobility Solutions
• Application Integration
9. Business Enterprise Solutions
• SAP Implementation Services
• SAP Phase 2 Services
• Hybris Services
• Maintenance & Support
• Application Lifecycle Management
11. Business Intelligence & Analytics
• Consulting & Implementation services – SAP HANA, Big Data, Apache Hadoop
• Business Intelligence and Data Warehouse – SAP BW / Business Objects
• Predictive Analytics
• Governance, Risk and Compliance (GRC)
• Reporting
12. Enterprise UX & Mobility Solutions
• Cross-platform capabilities (SAP Mobile, Android, iOS, Windows)
• Developing native solutions
• Unified and cross-platform applications
• Integration of enterprise web applications with handheld devices
• Applications based on location-based services
• Social networking for mobile
• Mobile commerce
• Mobile security
• Bar codes
• Verification and testing
14. Infrastructure Management Services
• Network Service Management
• Proactive monitoring of corporate networks
• Online monitoring of servers and remote troubleshooting
• Incident handling and reporting system
• Problem escalation and avoidance by predictive analytics
• Bandwidth management
• Network security services
• Apache Spark based predictive analytical systems