At Hootsuite, we've been transitioning from a single monolithic PHP application to a set of scalable Scala-based microservices. To avoid excessive coupling between services, we've implemented an event system using Apache Kafka that allows events to be reliably produced and consumed asynchronously by services as well as data stores.
In this presentation, I talk about:
- Why we chose Kafka
- How we set up our Kafka clusters to be scalable, highly available, and multi-data-center aware
- How we produce + consume events
- How we ensure that events can be understood by all parts of our system (some of which are implemented in other languages, such as PHP and Python), and how we handle evolving event payload data
Reducing Microservice Complexity with Kafka and Reactive Streams (Jim Riecken)
My talk from ScalaDays 2016 in New York on May 11, 2016:
Transitioning from a monolithic application to a set of microservices can help increase performance and scalability, but it can also drastically increase complexity. Layers of inter-service network calls add latency and increase the risk of failure where previously only local function calls existed. In this talk, I'll speak about how to tame this complexity using Apache Kafka and Reactive Streams to:
- Extract non-critical processing from the critical path of your application to reduce request latency
- Provide back-pressure to handle both slow and fast producers/consumers (sketched below)
- Maintain high availability, high performance, and reliable messaging
- Evolve message payloads while maintaining backwards and forwards compatibility
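The back-pressure bullet is easiest to see in code. Below is a minimal, hypothetical sketch using Akka Streams on Akka 2.6+ as the Reactive Streams implementation (the abstract doesn't name a library, so that choice and all names here are assumptions): a fast source is slowed to the rate of a deliberately slow downstream stage, because demand flows upstream instead of messages piling up.

    import akka.actor.ActorSystem
    import akka.stream.ThrottleMode
    import akka.stream.scaladsl.{Sink, Source}
    import scala.concurrent.duration._

    object BackpressureSketch extends App {
      // Akka 2.6+ derives a Materializer from the implicit ActorSystem.
      implicit val system: ActorSystem = ActorSystem("backpressure-demo")

      Source(1 to 1000)                                   // a fast producer of events
        .throttle(10, 1.second, 10, ThrottleMode.Shaping) // a deliberately slow downstream stage
        .runWith(Sink.foreach(n => println(s"processed event $n")))
      // Demand propagates upstream, so the source only emits as fast as the
      // slowest downstream stage can absorb; nothing is buffered without bound.
    }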
4. The early days
• PHP monolith, horizontally scaled
• Single Database
• Any part of the system can easily interact with any other part of the system
• Local method calls
• Shared cache
• Shared database
[Diagram: load balancers in front of the monolith, backed by Memcache + DB]
5. Now
• Smaller PHP monolith
• Lots of Scala microservices
• Multiple databases
• Distributed Systems
• Not local anymore
• Latency
• Failures, partial failures
7. Coupling
• As the number of services increases, the coupling between them tends to increase as well
• More network calls end up in the critical path of the request
• Slows user experience
• More prone to failure
• Do all of them need to be?
[Diagram: a sendMessage() request fanning out to services 1 through 5 in its critical path]
9. Event Bus
• Decouple asynchronous consumption of data/events from the producer of that data
• New consumers easily added
• No longer in the critical path of the request, and fewer potential points for failure
• Faster requests + happier users!
[Diagram: sendMessage() publishes to the Event Bus, which downstream consumers read from outside the critical path]
10. Requirements
• High throughput
• High availability
• Durability
• Handle fast producers + slow consumers
• Multi-region/data center support
• Must have Scala and PHP clients
12. Candidates
• RabbitMQ (or some other flavour of AMQP)
• ØMQ
• Apache Kafka
13. Why not ØMQ or RabbitMQ?
• ØMQ
  • Too low level, would have to build a lot on top of it
• RabbitMQ (based on previous experience)
  • Doesn’t recover well from crashes
  • Doesn’t perform well when messages are persisted to disk
  • Slow consumers can affect performance of the system
14. Why Kafka?
• Simple - conceptually it’s just a log
• High performance - in use at large organizations (e.g. LinkedIn, Etsy, Netflix)
• Can scale up to millions of messages per second / terabytes of data per day
• Highly available - designed to be fault tolerant
• High durability - messages are replicated across the cluster
• Handles slow consumers
  • Pull model, not push
  • Configurable message retention
• Can work with multiple regions/data centers
• Written in Scala!
16. Kafka
• Distributed, partitioned, replicated commit log service
• Producers publish messages to Topics
• Consumers pull + process the feed of published messages
• Runs as a cluster of Brokers
• Requires ZooKeeper for coordination/leader election
[Diagram: producers and consumers connected to a cluster of brokers holding topic logs, coordinated by ZooKeeper]
17. Topics
• Split into Partitions (which are stored in log files)
• Each partition is an ordered, immutable sequence of messages that is only appended to
• Partitions are distributed and replicated across the cluster of Brokers
• Data is kept for a configurable retention period, after which it is either discarded or compacted
• Consumers keep track of their offset in the logs
18. Producers
• Push messages to partitions of topics
• Can send to
  • A random/round-robined partition
  • A specific partition
  • A partition based on a hash constructed from a key
    • Maintain per-key order
• Messages and Keys are just Array[Byte]
  • Responsible for your own serialization
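The deck stays at the concept level, so here is a small, hypothetical illustration (not from the slides) of publishing a keyed message with the Kafka Java producer from Scala. The broker address, topic, key, and payload are placeholders; the point is that keys and values are plain Array[Byte], and that supplying a key makes the default partitioner hash it to choose the partition, which is what preserves per-key order.

    import java.util.Properties
    import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}
    import org.apache.kafka.common.serialization.ByteArraySerializer

    object KeyedProduceSketch extends App {
      val props = new Properties()
      props.put("bootstrap.servers", "localhost:9092") // assumed broker address
      // Keys and values are just Array[Byte]; serialization is the caller's job.
      props.put("key.serializer", classOf[ByteArraySerializer].getName)
      props.put("value.serializer", classOf[ByteArraySerializer].getName)

      val producer = new KafkaProducer[Array[Byte], Array[Byte]](props)

      // With a key, the default partitioner hashes it to pick the partition,
      // so all messages for the same key land on one partition, in order.
      val record = new ProducerRecord[Array[Byte], Array[Byte]](
        "events",                               // hypothetical topic
        "member-123".getBytes("UTF-8"),         // key
        "something happened".getBytes("UTF-8")) // payload

      producer.send(record).get() // block for the ack, just to keep the example simple
      producer.close()
    }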
19. Consumers
• Pull messages from partitions of topics
• Can either
  • Manually manage offsets (“simple consumer”)
  • Have offsets/partition assignment automatically managed (“high-level consumer”)
• Consumer Groups
  • Offsets stored in ZooKeeper (or Kafka itself)
  • Partitions are distributed among consumers
  • # Consumers > # Partitions => some consume nothing
  • # Partitions > # Consumers => some consume several partitions
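Again as an illustration rather than the deck's own code, the sketch below shows the consumer-group behaviour from Scala using the current Kafka Java consumer (the deck used the older high-level consumer; the modern client is a stand-in here). Joining a group causes the topic's partitions to be balanced across the group's members, and with auto-commit enabled the consumed offsets are stored for you. The group id, topic, and broker address are assumptions.

    import java.time.Duration
    import java.util.Properties
    import scala.jdk.CollectionConverters._
    import org.apache.kafka.clients.consumer.KafkaConsumer
    import org.apache.kafka.common.serialization.ByteArrayDeserializer

    object GroupConsumeSketch extends App {
      val props = new Properties()
      props.put("bootstrap.servers", "localhost:9092") // assumed broker address
      props.put("group.id", "message-indexer")         // hypothetical consumer group
      props.put("enable.auto.commit", "true")          // offsets committed automatically
      props.put("key.deserializer", classOf[ByteArrayDeserializer].getName)
      props.put("value.deserializer", classOf[ByteArrayDeserializer].getName)

      val consumer = new KafkaConsumer[Array[Byte], Array[Byte]](props)
      // The topic's partitions are distributed across all consumers in the group.
      consumer.subscribe(java.util.Arrays.asList("events"))

      while (true) {
        val records = consumer.poll(Duration.ofMillis(500))
        records.asScala.foreach { r =>
          println(s"partition=${r.partition} offset=${r.offset} bytes=${r.value.length}")
        }
      }
    }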
21. Clusters
• Each cluster consists of a set of Kafka brokers and a ZooKeeper quorum
  • At least 3 brokers
  • At least 3 ZK nodes (preferably more)
• Brokers have large disks
• Standard topic retention - overridden per topic as necessary
• Topics are managed via Jenkins jobs
[Diagram: three brokers alongside a three-node ZooKeeper quorum]
22. Multi-Region
• MirrorMaker
  • Tool for consuming topics from one cluster + producing to another
• Aggregate + Local clusters
  • Producers produce to local cluster
  • Consumers consume from local + aggregate
  • MirrorMaker consumes from local + produces to aggregate
[Diagram: two regions, each with a Local and an Aggregate cluster plus MirrorMaker; producers write to Local, consumers read from Local + Aggregate]
24. Producing
• Wrote a thin Scala wrapper around the Kafka “New” Producer Java API
  • Effectively send(topic, message, [key])
• Use minimum “in-sync replicas” setting for Topics
  • We set it to ceil(N/2 + 1) where N is the size of the cluster
• Wait for acks from partition replicas before committing to leader
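The slides describe the wrapper but don't show it, so the following is only a guessed-at shape for a send(topic, message, [key]) method over the Java producer, configured with acks=all so a send completes only once the in-sync replicas have acknowledged it; the minimum number of in-sync replicas (min.insync.replicas) is a topic-level setting, not a producer one. Class and method names are invented.

    import java.util.Properties
    import scala.concurrent.{Future, Promise}
    import org.apache.kafka.clients.producer.{Callback, KafkaProducer, ProducerRecord, RecordMetadata}
    import org.apache.kafka.common.serialization.ByteArraySerializer

    // Hypothetical thin wrapper in the spirit of send(topic, message, [key]).
    class EventBusProducer(bootstrapServers: String) {
      private val props = new Properties()
      props.put("bootstrap.servers", bootstrapServers)
      // Wait for acknowledgement from all in-sync replicas before the send succeeds.
      // How many replicas must be in sync is configured per topic (min.insync.replicas).
      props.put("acks", "all")
      props.put("key.serializer", classOf[ByteArraySerializer].getName)
      props.put("value.serializer", classOf[ByteArraySerializer].getName)

      private val underlying = new KafkaProducer[Array[Byte], Array[Byte]](props)

      def send(topic: String, message: Array[Byte], key: Option[Array[Byte]] = None): Future[RecordMetadata] = {
        val record  = new ProducerRecord[Array[Byte], Array[Byte]](topic, key.orNull, message)
        val promise = Promise[RecordMetadata]()
        underlying.send(record, new Callback {
          override def onCompletion(metadata: RecordMetadata, exception: Exception): Unit =
            if (exception != null) promise.failure(exception) else promise.success(metadata)
        })
        promise.future
      }

      def close(): Unit = underlying.close()
    }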
25. Producing
• To produce from our PHP components, we use a Scala proxy service with a REST API
• We also produce directly from MySQL by using Tungsten Replicator and a filter that converts binlog changes to event bus messages and produces them
[Diagram: PHP components and MySQL (via Tungsten Replicator) producing into Kafka]
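As a toy version of the produce-over-REST idea (the real proxy's framework, routes, and payload format are not described in the deck), the sketch below accepts POSTed bytes on /events/<topic> using the JDK's built-in HTTP server and forwards them to Kafka via the hypothetical EventBusProducer wrapper sketched under slide 24.

    import java.net.InetSocketAddress
    import com.sun.net.httpserver.{HttpExchange, HttpServer}

    object RestProxySketch extends App {
      val producer = new EventBusProducer("localhost:9092") // assumed broker address

      val server = HttpServer.create(new InetSocketAddress(8080), 0)
      server.createContext("/events/", (exchange: HttpExchange) => {
        val topic = exchange.getRequestURI.getPath.stripPrefix("/events/")
        val body  = exchange.getRequestBody.readAllBytes() // JDK 9+
        producer.send(topic, body)                         // fire-and-forget for the sketch
        exchange.sendResponseHeaders(202, -1)               // 202 Accepted, no response body
        exchange.close()
      })
      server.start()
      println("Proxy listening on http://localhost:8080/events/<topic>")
    }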
26. Consuming
• Wrote a thin Scala wrapper on top of the High-Level Kafka Consumer Java API
  • Abstracts consuming from Local + Aggregate clusters
  • Register consumer function for a topic
  • Offsets auto-committed to ZooKeeper
• Consumer group for each logical consumer
  • Sometimes have more consumers than partitions (fault tolerance)
• Also have consumption mechanism for PHP/Python
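The consumer wrapper isn't shown either, so here is one plausible (invented) shape for "register a consumer function for a topic": a handler of Array[Byte] => Unit is attached to a topic, offsets are auto-committed, and each logical consumer gets its own consumer group. A real version would run the poll loop on its own thread, consume from both the local and aggregate clusters, and handle errors and shutdown.

    import java.time.Duration
    import java.util.Properties
    import scala.jdk.CollectionConverters._
    import org.apache.kafka.clients.consumer.KafkaConsumer
    import org.apache.kafka.common.serialization.ByteArrayDeserializer

    // Hypothetical handler-registration wrapper; names and shape are assumptions.
    class EventBusConsumer(bootstrapServers: String, groupId: String) {
      def consume(topic: String)(handler: Array[Byte] => Unit): Unit = {
        val props = new Properties()
        props.put("bootstrap.servers", bootstrapServers)
        props.put("group.id", groupId)          // one consumer group per logical consumer
        props.put("enable.auto.commit", "true") // offsets committed automatically
        props.put("key.deserializer", classOf[ByteArrayDeserializer].getName)
        props.put("value.deserializer", classOf[ByteArrayDeserializer].getName)

        val consumer = new KafkaConsumer[Array[Byte], Array[Byte]](props)
        consumer.subscribe(java.util.Arrays.asList(topic))

        while (true) {
          consumer.poll(Duration.ofMillis(500)).asScala.foreach(r => handler(r.value))
        }
      }
    }

    // Usage sketch:
    //   new EventBusConsumer("localhost:9092", "message-indexer")
    //     .consume("events")(bytes => println(s"got ${bytes.length} bytes"))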
28. Data -> Array[Byte] -> Data
• Need to be able to serialize/deserialize messages in an efficient, language-agnostic way that tolerates evolution in message data
• Options
  • JSON
    • Plain text, everything understands it, easy to add/change fields
    • Expensive to parse, large size, still have to convert parsed JSON into domain objects
  • Protocol Buffers (protobuf)
    • Binary, language-specific impls generated from an IDL
    • Fast to parse, small size, generated code, easy to make backwards/forwards compatible changes
29. Protobuf
• All of the messages we publish/consume from Kafka are serialized protobufs
• We use ScalaPB (https://github.com/trueaccord/ScalaPB)
  • Built on top of Google’s Java protobuf library
  • Generates Scala case class definitions from .proto files
• Use only “optional” fields
  • Helps forwards/backwards compatibility of messages
  • Can add/remove fields without breaking
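To make the ScalaPB workflow concrete, here is a hypothetical message and its serialize/parse round trip; the .proto definition, package, and field names are invented rather than taken from the deck.

    // A hypothetical proto2 definition compiled by ScalaPB:
    //
    //   message MessageSent {
    //     optional string member_id = 1;
    //     optional string text      = 2;
    //   }
    //
    // ScalaPB generates a case class with Option fields, a toByteArray method,
    // and a companion parseFrom.
    import com.example.events.MessageSent // assumed generated package

    object ProtobufRoundTripSketch extends App {
      val event = MessageSent(memberId = Some("member-123"), text = Some("hello"))

      // Serialize to the Array[Byte] that gets published to Kafka...
      val bytes: Array[Byte] = event.toByteArray

      // ...and parse it back on the consuming side. Because every field is
      // optional, readers simply ignore fields they don't know about, which is
      // what keeps old and new versions of a message compatible.
      val decoded = MessageSent.parseFrom(bytes)
      println(decoded.text) // Some(hello)
    }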
30. Small problem
• You have to know the type of the serialized protobuf data before you can deserialize it
• Potential solutions
  • Only publish one type of message per topic
  • Prepend a non-protobuf type tag in the payload
  • The previous, but with protobufs inside protobufs
31. Message wrapper
• Protobuf that contains a list of
  • UUID string
  • Payload bytes (serialized protobuf)
• Benefits
  • Multiple objects per logical event
  • Evolution of data in a topic
  • Automatic serialization and deserialization (maintain a mapping of UUID-to-Type in each language)
[Diagram: a wrapper entry pairing a UUID with serialized protobuf payload bytes]
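The deck describes the envelope but not its code, so the following is only a sketch of the idea on the consuming side: each payload carries the UUID of its schema next to the serialized bytes, and every language keeps the same UUID-to-type mapping so it can turn those bytes back into a typed message. The envelope itself is really a protobuf containing a list of such items; the case class, UUID, and generated type below are all made up.

    import scalapb.GeneratedMessage

    // One entry of the (hypothetical) wrapper: which schema, plus the raw bytes.
    final case class WrappedPayload(schemaUuid: String, payload: Array[Byte])

    object EventRegistry {
      // The same UUID-to-parser table is maintained in each language (Scala, PHP, Python).
      private val parsers: Map[String, Array[Byte] => GeneratedMessage] = Map(
        "5a2f7d1e-0000-0000-0000-hypothetical" ->
          (bytes => com.example.events.MessageSent.parseFrom(bytes)) // assumed generated type
      )

      // Unknown UUIDs yield None, so consumers can skip payload types they don't care about.
      def decode(item: WrappedPayload): Option[GeneratedMessage] =
        parsers.get(item.schemaUuid).map(parse => parse(item.payload))
    }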
32. Wrapping up
• We use Kafka as a high-performance, highly-available asynchronous event bus to decouple our services and reduce complexity.
• Kafka is awesome - it just works!
• We use Protocol Buffers for an efficient message format that is easy to use and evolve.
• Scala support for Kafka + Protobuf is great!