Life doesn't happen in batches. To capture the time-value of information, you have to process data as it happens.
Apart from a general introduction to stream processing and Apache Flink, this presentation shows how we as a successful data-driven gaming company apply this concept with our data architecture and how we use Apache Flink to run several streaming applications.
Learn what the Movement is all about with this Meevo 101 class! You will learn the key features of Meevo, and how to navigate the software quickly and easily.
Learn how B2B outreach can be transformed with new technologies.
This talk will include how intent and engagement data can be collated on individuals and companies, how personalisation can be used to tailor the buying journey from cold outreach, and how technology can be used to improve sales communication.
Hear how tracking prospect activity and intent can improve sales performance and cut costs. Hear how in the not too distant future every visitor coming from a trackable source will see personalised content and discover the impact that accelerated research will have on sales performance.
Let's Play Flink – Fun with Streaming in a Gaming Company (DataWorks Summit)
Chocolate, ice cream and games are perhaps three of the most popular and universally understood words that can bring joy to anyone between 5 and 60 years of age!
InnoGames is one of the world's leading developers and providers of online games. At InnoGames we not only have all three of those things; we have also built a powerful data infrastructure, because it is expensive to run your business blind. Being able to evaluate key performance indicators quickly, make good decisions, and deliver personalized, relevant content to each and every gamer is essential to success: it is how a customer becomes a fan.
Our data infrastructure mainly consists of a data pipeline that covers the streaming part and a data platform that performs batch processing. The latter is based on the Hadoop ecosystem, using technologies such as Hive, Spark, Hue, R and more to give our data scientists a high degree of flexibility. The data pipeline has gone through several evolutions, starting with Kestrel and custom streaming applications. Later on we switched the base technologies to Apache Kafka and Apache Storm. Last year we recreated our streaming infrastructure on Apache Flink, an open-source stream processing framework for distributed, high-performing, always-available, and accurate data streaming applications.
Because having fun is the best way to learn, after a quick introduction to Flink and the Flink ecosystem this talk focuses on real-world use cases and turns the ideas behind those projects into live examples. This way, the audience becomes part of a Flink-based experiment and internalizes the experience we have gained with Flink.
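To make the streaming idea concrete: the bread-and-butter operation of a stream processor like Flink is a keyed, windowed aggregation. The following is a toy Python sketch of a tumbling event-time window count; it is illustrative only, not InnoGames' or Flink's actual code.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_size_ms):
    """Group (timestamp_ms, key) events into fixed, non-overlapping
    event-time windows and count occurrences per key.

    A toy model of a keyed tumbling-window aggregation in a stream
    processor; counts is keyed by (window_start, event key).
    """
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_size_ms) * window_size_ms
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(1000, "login"), (1500, "login"), (2500, "purchase"), (3100, "login")]
result = tumbling_window_counts(events, 1000)
# {(1000, 'login'): 2, (2000, 'purchase'): 1, (3000, 'login'): 1}
```

In Flink's DataStream API this corresponds roughly to `keyBy(...).window(TumblingEventTimeWindows.of(...))` with an aggregate function; the sketch ignores watermarks and late data, which a production job must handle.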
On September 21st, we had the pleasure of hosting at our offices a Meetup given by our colleague Paco Guerrero on the Apache Flink platform.
"Apache Flink is an open-source real-time processing platform that is on the rise because it offers features that competing technologies lack, without any impact on performance. In this training session we introduce the philosophy and the processing engine that make Flink so special and powerful. We also walk through the basic pillars that confirm Flink as the most promising streaming platform available today."
Riot Games Scalable Data Warehouse Lecture at UCSB / UCLA (sean_seannery)
This is a talk that was given for the Scalable Internet Services Masters-level Computer Science class at UCLA and UCSB. It briefly discusses the server architecture for the game League of Legends before going into depth about how the data warehouse can hold petabytes of player data. Discussion of message-queue architecture and scalability occurs along the way.
The Fine Art of Time Travelling - Implementing Event Sourcing - Andrea Saltar... (ITCamp)
If there is a common practice in architecting software systems, it is to have them store the last known state of business entities in a relational database: though widely adopted and effectively supported by existing development tools, this practice trades ease of implementation for the cost of losing the history of those entities.
Event Sourcing provides a pivotal solution to this problem, giving systems the capability of restoring the state they had at any given point in time. Furthermore, injecting mock-up events and having them replayed by the business logic allows for an easy implementation of simulations and “what if” scenarios.
In this session, Andrea will demonstrate how to design time travelling systems by examining real-world, production-tested solutions.
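The core mechanic behind event sourcing, rebuilding state by replaying an append-only event log, can be sketched in a few lines of Python (the account and event names are illustrative, not taken from the talk):

```python
from dataclasses import dataclass

@dataclass
class Account:
    balance: int = 0

def apply_event(state, event):
    """Apply a single event to the current state (a pure function)."""
    kind, amount = event
    if kind == "deposited":
        return Account(state.balance + amount)
    if kind == "withdrawn":
        return Account(state.balance - amount)
    return state

def replay(events):
    """Rebuild state from the event log. 'Time travel' is simply
    replaying a prefix of the log."""
    state = Account()
    for e in events:
        state = apply_event(state, e)
    return state

log = [("deposited", 100), ("withdrawn", 30), ("deposited", 50)]
replay(log).balance       # 120 (current state)
replay(log[:2]).balance   # 70  (state as of the second event)
```

Injecting synthetic events into `replay` is exactly the "what if" simulation mechanism the abstract mentions: the business logic cannot tell a mock event from a real one.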
Fast Data: A Customer's Journey to Delivering a Compelling Real-Time Solution (Guido Schmutz)
This is my part of the Open World 2014 presentation on Fast Data and Oracle Event Processing (OEP) 12c.
It contains an architecture discussion with some architectural patterns for where events are useful. The second part is a demo showcase showing OEP 12c and BAM 12c in action, analyzing the live OOW 2014 Twitter feed.
Why and how to engage a Complex Event Processor from a Java Web Application (Lucas Jellema)
Complex Event Processors are capable of handling large volumes of events by filtering, aggregating, or detecting patterns. Java applications use a CEP to pre-process incoming signals. These applications can also generate the events themselves, for example the user's click and navigation behavior in the web application, and report them to the CEP. The web application can subsequently use the outcomes from the CEP to, for example, intelligently guide the user or present relevant details. This session will show various ways in which a CQL-based CEP can be integrated into a Java application to enhance the web application's behavior.
The intended audience for this presentation consists of experienced Java Web and Enterprise Java developers.
- introduction to Complex Event Processing
- demonstration of CQL event processing on events arriving on JMS
- discussion of how the web application can absorb the CEP results
- demonstration of a simple Web Shop application that publishes events to the CEP and utilizes the CEP results
- discussion on when and how CEP can add value to Java applications
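As a rough illustration of the pattern-detection step a CEP performs, here is a plain-Python stand-in (not Oracle CQL) that raises an alert when several events of the same kind arrive within a sliding time window:

```python
from collections import deque

def detect_bursts(events, threshold=3, window_ms=10_000):
    """Emit an alert whenever `threshold` events of the same kind occur
    within `window_ms` of each other -- a minimal stand-in for a CQL
    pattern query over a stream. Events are (timestamp_ms, kind) pairs
    in timestamp order."""
    recent = {}   # kind -> deque of timestamps inside the window
    alerts = []
    for ts, kind in events:
        q = recent.setdefault(kind, deque())
        q.append(ts)
        while q and ts - q[0] > window_ms:
            q.popleft()          # drop events that fell out of the window
        if len(q) >= threshold:
            alerts.append((ts, kind))
            q.clear()            # reset after alerting
    return alerts

stream = [(0, "login_failed"), (2000, "login_failed"),
          (4000, "login_failed"), (30000, "click")]
detect_bursts(stream)  # [(4000, 'login_failed')]
```

A real CEP expresses this declaratively (a CQL query over a stream) and handles out-of-order arrival; the sketch only conveys the windowed-pattern idea.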
2016 09 measurecamp - event data modeling (yalisassoon)
Presentation by Christophe Bogaert to Measurecamp London September 2016. Christophe discussed what makes consuming and analysing event-streams difficult, and outlined a number of techniques for overcoming those obstacles.
Expedia Affiliates Network (EAN) is the B2B partnership brand of Expedia, Inc. Our technology powers the hotel offering of thousands of partners around the world. Every day EAN produces, processes and consumes more than 10TB of data to drive applications, services and business through granular insights. EAN's new cloud data platform is an opportunity to be nimble, and to simplify and consolidate the authoring, scheduling, execution and monitoring of complex data workflows. The talk explores EAN's strategy for data processing in the cloud, moving from traditional workflow tools to a serverless approach leveraging Step Functions, Lambda, API Gateway, EMR and CloudWatch.
Streaming Analytics for Financial Enterprises (Databricks)
Streaming Analytics (or Fast Data processing) is becoming an increasingly popular subject in the financial sector. There are two main reasons for this development. First, more and more data has to be analyzed in real time to prevent fraud; all transactions processed by banks have to pass an ever-growing number of tests to make sure that the money is coming from and going to legitimate sources. Second, customers want frictionless mobile experiences while managing their money, such as immediate notifications and personalized advice based on their online behavior and other users' actions.
A typical streaming analytics solution follows a ‘pipes and filters’ pattern that consists of three main steps: detecting patterns on raw event data (Complex Event Processing), evaluating the outcomes with the aid of business rules and machine learning algorithms, and deciding on the next action. At the core of this architecture is the execution of predictive models that operate on enormous amounts of never-ending data streams.
In this talk, I'll present an architecture for streaming analytics solutions that covers many use cases that follow this pattern: actionable insights, fraud detection, log parsing, traffic analysis, factory data, the IoT, and others. I'll go through a few architectural challenges that arise when dealing with streaming data, such as latency issues, event time vs. server time, and exactly-once processing. The solution is built on the KISSS stack: Kafka, Ignite, and Spark Structured Streaming. The solution is open source and available on GitHub.
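The 'pipes and filters' pattern described above can be modeled as three composed generator stages. This is a schematic sketch with invented rules and field names, not the talk's actual KISSS code:

```python
def detect(events):
    """Stage 1: Complex Event Processing -- flag large transfers."""
    for e in events:
        if e["amount"] > 10_000:
            yield {**e, "flag": "large_transfer"}

def evaluate(flagged):
    """Stage 2: business rules / model scoring (here, one fixed rule:
    transfers from outside the customer's home countries score high)."""
    for e in flagged:
        score = 0.9 if e["country"] not in e["home_countries"] else 0.2
        yield {**e, "risk": score}

def decide(scored, threshold=0.5):
    """Stage 3: choose the next action per event."""
    for e in scored:
        yield ("block" if e["risk"] >= threshold else "allow", e["id"])

txs = [
    {"id": 1, "amount": 15_000, "country": "RU", "home_countries": {"NL"}},
    {"id": 2, "amount": 50, "country": "NL", "home_countries": {"NL"}},
    {"id": 3, "amount": 20_000, "country": "NL", "home_countries": {"NL"}},
]
actions = list(decide(evaluate(detect(txs))))
# [('block', 1), ('allow', 3)] -- tx 2 never passes the CEP stage
```

Because each stage is a generator, the chain processes one event at a time, which is the same back-pressure-friendly shape a streaming engine gives you at scale.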
Presentation to the Silverlight User Group in London on October 12th to provide a round-up of the recent BUILD conference in LA and an introduction to Windows 8 and the Windows Runtime.
Presentation given on July 3rd, introducing the Latch plugins for OS X, Latch for Windows [Personal/Enterprise] Edition, and Latch for Linux. The plugins are available at: https://latch.elevenpaths.com/www/plugins_sdks.html
Webinar - Order out of Chaos: Avoiding the Migration Migraine (Peak Hosting)
When your business has outgrown your current managed hosting provider, the logical thing is to search for something better. Change can be difficult and chaotic, but it doesn’t have to be.
This webinar focuses on best practices for making your migration from the cloud as pain free as possible, including a discussion on what you need to know and ask of your migration provider to ensure it goes smoothly. As an example of this, we will outline Peak Hosting’s migration process, as well as discuss one of our customer migrations and why they chose to undertake it.
EDA Meets Data Engineering – What's the Big Deal? (confluent)
Presenter: Guru Sattanathan, Systems Engineer, Confluent
Event-driven architectures have been around for many years, much like Apache Kafka®, which was first open-sourced in 2011. The reality is that the true potential of Kafka is only being realised now. Kafka is becoming the central nervous system of many of today's enterprises, bringing a profound paradigm shift to the way we think about enterprise IT. What has changed in Kafka to enable this paradigm shift? How is it more than just a message broker, and how are enterprises using it today? This session will explore these key questions.
Sydney: https://content.deloitte.com.au/20200221-tel-event-tech-community-syd-registration
Melbourne: https://content.deloitte.com.au/20200221-tel-event-tech-community-mel-registration
Overcome your fear of implementing offline mode in your apps (Marin Todorov)
Way too many apps on the App Store break completely when you lose connectivity. Have a look at some case studies, and hopefully by the end you will see that implementing offline mode in your app is not that difficult at all.
Big Data at Riot Games – Using Hadoop to Understand Player Experience - Stamp... (StampedeCon)
At the StampedeCon 2013 Big Data conference in St. Louis, Riot Games discussed Using Hadoop to Understand and Improve Player Experience. Riot Games aims to be the most player-focused game company in the world. To fulfill that mission, it’s vital we develop a deep, detailed understanding of players’ experiences. This is particularly challenging since our debut title, League of Legends, is one of the most played video games in the world, with more than 32 million active monthly players across the globe. In this presentation, we’ll discuss several use cases where we sought to understand and improve the player experience, the challenges we faced to solve those use cases, and the big data infrastructure that supports our capability to provide continued insight.
From Prototype to Production: How to take the leap in IoT... and stick the landing
A field-tested, production-ready IoT prototype is both an enormous milestone and the beginning of a brand-new challenge, one that requires new skills, new tools, new partners, and a keen eye for both danger and opportunity. As CTO of cloud-connectivity pioneer Soracom, Kenta Yasukawa has helped customers around the world manage the tricky transition from prototype to production. This session will examine real-world use cases across industries to show how to achieve success at scale. From managing certificates in Shenzhen to capping connectivity cost in California, today's cloud offers more opportunities than ever to break through the hardware, software and connectivity dependencies unique to IoT.
A beginner's introduction to Infrastructure as Code (IaC): why IaC, why VMs, and why containers.
Builds on the "cattle vs. pets" analogy to drive home how containers make IaC possible and, in turn, how to build resilient services.
Neuro-symbolic is not enough, we need neuro-*semantic* (Frank van Harmelen)
Neuro-symbolic (NeSy) AI is on the rise. However, simply running machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. Those gains only materialize when the symbolic structures have an actual semantics. I give an operational definition of semantics as "predictable inference".
All of this is illustrated with link prediction over knowledge graphs, but the argument is general.
Similar to Squirrels and Elephants - The InnoGames Big Data and Streaming Infrastructure
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality (Inflectra)
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Elevating Tactical DDD Patterns Through Object Calisthenics (Dorra BARTAGUIZ)
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
The Art of the Pitch: WordPress Relationships and Sales (Laura Byrne)
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
JMeter webinar - integration with InfluxDB and Grafana (RTTS)
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
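For a concrete sense of the JMeter-to-InfluxDB handoff, JMeter's backend listener ships metrics in InfluxDB's line protocol (`measurement,tags fields timestamp`). A minimal encoder might look like this; the measurement, tag, and field names are illustrative, not JMeter's exact output:

```python
def to_line_protocol(measurement, tags, fields, timestamp_ns):
    """Encode one data point in InfluxDB line protocol:
    measurement,tagK=tagV fieldK=fieldV timestamp
    Integer fields carry the line-protocol 'i' suffix."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(
        f"{k}={v}i" if isinstance(v, int) else f"{k}={v}"
        for k, v in sorted(fields.items())
    )
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"

line = to_line_protocol(
    "jmeter",
    {"application": "webapp", "transaction": "login"},
    {"count": 42, "avg": 123.5},
    1_700_000_000_000_000_000,
)
# jmeter,application=webapp,transaction=login avg=123.5,count=42i 1700000000000000000
```

Grafana then queries these points back out of InfluxDB to draw the live dashboards shown in the webinar; the sketch omits escaping of spaces and commas in values, which the real protocol requires.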
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Generating a custom Ruby SDK for your web service or Rails API using Smithyg2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
18. SIMILARITIES
THE FIRST IMPRESSION COUNTS
The moment the customer enters the shop or the player plays their first session is crucial.
HALO EFFECT
When one trait of a person or thing is used to make an overall judgment of that person or thing.
19. IN ORDER TO MAKE A POSITIVE IMPACT, A RESPONSE NEEDS TO HAPPEN QUICKLY
32. EVERYTHING IS A STREAM
UNBOUNDED STREAMS
BOUNDED STREAMS, AKA BATCH PROCESSING
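The distinction can be sketched with plain Java streams (a toy analogy, not Flink code): a bounded stream has all of its input up front, so a result can be final, while an unbounded stream keeps producing elements forever and any result is only a snapshot of the prefix seen so far.

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class BoundedVsUnbounded {

    // Bounded ("batch"): the whole input is known, so the result is final.
    public static List<Integer> bounded() {
        return Stream.of(1, 2, 3, 4)
                .map(i -> i + 2)
                .collect(Collectors.toList());
    }

    // Unbounded: Stream.iterate never ends; we can only ever take a
    // prefix of n elements, never "the whole input".
    public static List<Integer> unboundedPrefix(int n) {
        return Stream.iterate(1, i -> i + 1)
                .map(i -> i + 2)
                .limit(n)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(bounded());          // [3, 4, 5, 6]
        System.out.println(unboundedPrefix(4)); // [3, 4, 5, 6]
    }
}
```

In Flink the same idea holds: batch processing is simply the special case of a stream that happens to end.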
33. TIME IN STREAMING
[Diagram: the nine Star Wars episodes, ORDERED BY EVENT TIME (episode number); their PROCESSING TIME (release year) is out of order:]
Episode I: The Phantom Menace (1999)
Episode II: Attack of the Clones (2002)
Episode III: Revenge of the Sith (2005)
Episode IV: A New Hope (1977)
Episode V: The Empire Strikes Back (1980)
Episode VI: Return of the Jedi (1983)
Episode VII: The Force Awakens (2015)
Episode VIII: The Last Jedi (2017)
Episode IX: ? (2019)
34. TIME IN STREAMING
[Diagram: the same nine episodes, ORDERED BY PROCESSING TIME (release year); now the EVENT TIME (episode number) is out of order:]
Episode IV: A New Hope (1977)
Episode V: The Empire Strikes Back (1980)
Episode VI: Return of the Jedi (1983)
Episode I: The Phantom Menace (1999)
Episode II: Attack of the Clones (2002)
Episode III: Revenge of the Sith (2005)
Episode VII: The Force Awakens (2015)
Episode VIII: The Last Jedi (2017)
Episode IX: ? (2019)
39. BUILDING BLOCKS
Three layered APIs, from most concise to most expressive:
SQL / Table API (dynamic tables): HIGH LEVEL ANALYTICS API
DataStream API (streams, windows): STREAM AND BATCH DATA PROCESSING
ProcessFunction (events, state, time): STATEFUL EVENT-DRIVEN APPLICATIONS
CONCISENESS increases towards the top, EXPRESSIVENESS towards the bottom.
41. LET'S HAVE A CLOSER LOOK
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// DATA SOURCE: a bounded stream of four elements
final DataStreamSource<Integer> stream = env.fromElements(1, 2, 3, 4);
stream
    // TRANSFORMATION: add 2 to each element, then keep only the even ones
    .map((MapFunction<Integer, Integer>) i -> i + 2)
    .filter((FilterFunction<Integer>) i -> i % 2 == 0)
    // DATA SINK: print the results to stdout
    .print();
env.execute();
64. USE CASE NTCRM
[Diagram: the CLIENT sends an EVENT to the EVENT GATEWAY, which publishes it on the EVENT BUS; NTCRM consumes the event together with PLAYER DATA.]
React to events with interstitials in less than 10 seconds.
65. USE CASE NTCRM
Elvenar has a trading feature that sometimes causes confusion. With NTCRM we can react to this and show more details within interstitials exactly when the player needs it.
66. JUST DO IT
DEMO TIME
Check it out on Github: https://github.com/prenomenon/codetalks-flinkdemo
67. GET IN TOUCH
InnoGames GmbH
Friesenstrasse 13
20097 Hamburg
https://www.innogames.com
Volker Janz
Senior Software Developer
Corporate Systems - Analytics
73. BACKUP / DETAILS
The following slides are not part of my talk but might give the reader more insights later.
74. COMPANY SNAPSHOT
More than 400 employees
Founded 2007 in Germany
Headquartered in Hamburg
+160m EUR revenue made in 2017
7 live games, >30 language versions
75. I AM LEGEND
OUR PORTFOLIO: Simulation, Strategy, RPG; Browser, Multi-device, Mobile
81. RUNTIME
[Diagram: STREAMING DATAFLOW, PARALLELIZED VIEW. A SOURCE → MAP → FILTER → PRINT pipeline; SOURCE, MAP and FILTER are fused into one OPERATOR CHAIN, PRINT is a separate OPERATOR. Each operator runs as parallel SUBTASKS, grouped into TASKS and connected by STREAM PARTITIONS.]
A Flink cluster has a JOB MANAGER and multiple TASK MANAGERS. Each of those is a JVM.
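Operator chaining can be pictured in plain Java (a single-JVM toy, not Flink's runtime): in a chain, the map and filter run back-to-back on each element inside one subtask, with no thread hand-over or network shuffle between them. The pipeline below reuses the map/filter logic from slide 41.

```java
import java.util.List;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class OperatorChain {

    // Chaining fuses consecutive operators: every element passes through
    // map and then filter in a single call, as one fused step.
    public static List<Integer> runChain(List<Integer> source) {
        Function<Integer, Integer> map = i -> i + 2;
        Predicate<Integer> filter = i -> i % 2 == 0;
        return source.stream()
                .map(map)
                .filter(filter)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(runChain(List.of(1, 2, 3, 4))); // [4, 6]
    }
}
```

In the real runtime the benefit is avoiding serialization and buffer exchange between chained operators; only chain boundaries (such as repartitioning before PRINT) pay that cost.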
84. STATE
OPERATOR STATE: bound only to an operator
KEYED STATE: bound to an operator and a key
PLUGGABLE BACKEND
MULTIPLE PRIMITIVES SUPPORTED
GUARANTEED CONSISTENCY IN CASE OF A FAILURE
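A minimal plain-Java sketch of the keyed-state idea (a simulation, not Flink's state API): each key gets its own independent piece of state, so events for one player never see another player's counters.

```java
import java.util.HashMap;
import java.util.Map;

public class KeyedStateSketch {

    // Toy keyed state: one independent value per key, as if each key had
    // its own value state inside a keyed operator. A plain-Java
    // simulation only; the backend and fault tolerance are not modeled.
    private final Map<String, Long> countPerKey = new HashMap<>();

    // Process one event for the given key and return the updated count.
    public long onEvent(String key) {
        long updated = countPerKey.getOrDefault(key, 0L) + 1;
        countPerKey.put(key, updated);
        return updated;
    }

    public static void main(String[] args) {
        KeyedStateSketch op = new KeyedStateSketch();
        op.onEvent("player-1");          // count for player-1 is 1
        op.onEvent("player-2");          // count for player-2 is 1
        long c = op.onEvent("player-1"); // 2: state is per key
        System.out.println(c);
    }
}
```

In Flink, the pluggable backend decides where this map actually lives (heap or RocksDB, for example), and checkpointing gives the consistency guarantee on failure.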