There is a trend in the industry today back toward transactions, built on next-generation databases that provide strong consistency at global scale. This is the highest level of consistency: not just read-your-own-writes, but also read-before-write, compare-and-swap, and similar guarantees.
The Reactive Principles: Design Principles For Cloud Native Applications - Jonas Bonér
Reactive Summit Keynote 2020: https://www.youtube.com/watch?v=e5kek8vx2ws
Abstract: Building applications for the cloud means embracing a radically different architecture than that of a traditional single-machine monolith, requiring new tools, practices, and design patterns. The cloud’s distributed nature brings its own set of concerns–building a Cloud Native, Edge Native, or Internet of Things (IoT) application means building and running a distributed system on unreliable hardware and across unreliable networks. In this keynote session by Jonas Bonér, creator of Akka, founder/CTO of Lightbend, and Chair of the Reactive Foundation, we’ll review a set of Reactive Principles that enable the design and implementation of Cloud Native applications–applications that are highly concurrent, distributed, performant, scalable, and resilient, while at the same time conserving resources when deploying, operating, and maintaining them.
Speakers: David Menninger, SVP and Research Director, Ventana Research + Joanna Schloss, Analytics, Data and Information Management Subject Matter Expert, Confluent
Can your organization react to customer events as they occur?
Can your organization detect anomalies before they cause problems?
Can your organization process streaming data in real time?
Real-time and event-driven architectures are emerging as key components in developing streaming applications. Nearly half of organizations consider it essential to process event data within seconds of its occurrence, yet less than one-third are satisfied with their ability to do so today. In this webinar featuring Dave Menninger of Ventana Research, learn from the firm’s benchmark research what streaming data is and why it is important. Joanna Schloss also joins to discuss how event-streaming platforms deliver real-time actionability on data as it arrives into the business. Join us to hear how other organizations are managing streaming data and how you can adopt and deploy real-time processing capabilities.
In this webinar you will:
-Get valuable market research data about how other organizations are managing streaming data
-Learn how real-time processing is a key component of a digital transformation strategy
-Hear real-world use cases of streaming data in action
-Review architectural approaches for adding real-time streaming data capabilities to your applications
Watch the recording: https://videos.confluent.io/watch/AoXiYayC1s23awqJBcQvPZ?
Slides for my architectural session at the event Docker From Zero To Hero.
We talked about the kinds of expertise needed to build a true Microservices solution; you'll need to understand some of the fundamentals on which Microservices are built: SOA, EDA, and DDD, to name a few. Then you can move on to the container world.
Original event link: https://www.eventbrite.it/e/biglietti-docker-from-zero-to-hero-83372825365#
Logicalis Cloud Briefing - get some "Cloud Clarity"!
Three perspectives on why and how to migrate to the Cloud on your terms: change leadership, legal, and technology considerations.
This session will address Cassandra's tunable consistency model and cover how developers and companies should adopt a more Optimistic Software Design model.
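The tunable-consistency trade-off mentioned above can be captured in one line of quorum arithmetic. A minimal sketch (illustrative only, not Cassandra's implementation): with N replicas, a write quorum W and a read quorum R are guaranteed to overlap, and therefore give read-your-writes behaviour, whenever R + W > N.

```python
# Illustrative sketch of Dynamo-style tunable consistency: a read
# quorum R and write quorum W against N replicas give strong
# (read-your-writes) consistency when R + W > N, because every
# read quorum then overlaps every write quorum in at least one node.

def is_strongly_consistent(n_replicas: int, write_quorum: int, read_quorum: int) -> bool:
    """True when every read quorum must overlap every write quorum."""
    return read_quorum + write_quorum > n_replicas

# Typical settings with replication factor 3:
assert is_strongly_consistent(3, 2, 2)      # QUORUM writes + QUORUM reads
assert not is_strongly_consistent(3, 1, 1)  # ONE + ONE: stale reads possible
```

Lowering R or W below the overlap threshold is exactly the "optimistic" end of the dial the session discusses: faster and more available, at the cost of possibly stale reads.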
The world of Microservices is a little different to standard service oriented architectures. They play to an opinionated score of decentralisation, isolation and automation. Stream processing comes from a different angle though. One where analytic function is melded to a firehose of in-flight events. Yet business applications increasingly need to be data intensive, nimble and fast to market. This isn’t as easy as it sounds.
This talk will look at the implications of mixing toolsets from the stream-processing world into real-time business applications: how to effectively handle infinite streams, how to leverage a high-throughput persistent log, and how to deploy dynamic, fault-tolerant streaming services. By mixing these disparate domains we’ll see how to build application architectures that can withstand the full force and immediacy of data generation.
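The "infinite streams" idea above can be sketched without any particular framework: process events incrementally and emit a result each time a window closes, never materializing the whole stream. A hedged, framework-free Python sketch of tumbling-window counts (names and event shapes are illustrative):

```python
# Sketch (not any specific framework's API): tumbling-window counts
# over a potentially infinite event stream, processed incrementally.
from collections import Counter
from typing import Iterable, Iterator, Tuple

def tumbling_window_counts(events: Iterable[Tuple[float, str]],
                           window_secs: float) -> Iterator[Tuple[int, Counter]]:
    """Yield (window_index, key_counts) each time a window closes."""
    current_window = None
    counts = Counter()
    for timestamp, key in events:
        window = int(timestamp // window_secs)
        if current_window is not None and window != current_window:
            yield current_window, counts  # window closed: emit and reset
            counts = Counter()
        current_window = window
        counts[key] += 1
    if current_window is not None:
        yield current_window, counts      # flush the final (partial) window

events = [(0.5, "click"), (1.2, "click"), (2.1, "buy"), (2.9, "click")]
windows = list(tumbling_window_counts(events, window_secs=2.0))
# windows[0] covers [0, 2): two clicks; windows[1] covers [2, 4): one buy, one click
```

Because the generator holds only the current window's state, the same code works whether `events` is a four-element list or an unbounded feed.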
This presentation, given on the Software Architecture course at Brunel University, discusses the interplay between architecture and design, and how designer and architect are really different roles that often have competing goals.
10 Tricks to Ensure Your Oracle Coherence Cluster is Not a "Black Box" in Pro... - SL Corporation
Configuration of Oracle Coherence can be tricky. While Coherence provides highly valuable in-memory caching and parallel processing features, things don’t always go as planned, and changes can be extremely difficult to make once you’re in production. SL’s Founder and CTO, Tom Lubinski covers 10 things you can do to ensure your Coherence cluster is easy to support in production.
A popular pattern today is the injection of declarative (or functional) mini-languages into general-purpose host languages. Years ago, this is what LINQ for C# was all about. Now there are many more examples, such as the Spark or Beam APIs for Java and Scala. The opposite embedding is also possible: start with a declarative (or functional) language as the outer host and then embed a general-purpose language. This is the path we took for Scope years ago (Scope is a Microsoft-internal big data analytics language) and have recently shipped as U-SQL. In this case, the host language is close to T-SQL (Transact-SQL is Microsoft’s SQL language for SQL Server and Azure SQL DB) and the embedded language is C#. By embedding the general-purpose language in a declarative language, we enable all-of-program (not just all-of-stage) optimization, parallelization, and scheduling. The resulting jobs can flexibly scale to leverage thousands of machines.
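The first embedding described above, a declarative mini-language inside a general-purpose host, can be sketched in a few lines. This is an illustrative toy, not LINQ or U-SQL: the query operators only build up a description of the computation, which an engine could optimize as a whole before anything runs.

```python
# Toy sketch of a LINQ-style embedded query language: operators are
# declarative (they record intent), and execution is deferred to run(),
# where a real engine could reorder/fuse the whole pipeline first.

class Query:
    def __init__(self, source, ops=()):
        self.source, self.ops = source, tuple(ops)

    def where(self, pred):
        return Query(self.source, self.ops + (("where", pred),))

    def select(self, fn):
        return Query(self.source, self.ops + (("select", fn),))

    def run(self):
        # A real engine would optimize/parallelize self.ops here;
        # this sketch just interprets the recorded pipeline lazily.
        items = iter(self.source)
        for kind, fn in self.ops:
            items = filter(fn, items) if kind == "where" else map(fn, items)
        return list(items)

squares_of_evens = Query(range(10)).where(lambda x: x % 2 == 0).select(lambda x: x * x).run()
# squares_of_evens == [0, 4, 16, 36, 64]
```

The key property is that `where` and `select` return data, not results, which is what makes whole-program optimization possible in systems like U-SQL.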
Unlocking the Power of Salesforce Integrations with Confluent - AaronLieberman5
Salesforce currently has 150,000 customers across the world who use Salesforce in some capacity. If you are one of those customers, you've likely had to work through how to integrate it with your other back office systems: ERP, Marketing Automation, BI systems, etc. Or perhaps you're a brand new Salesforce customer and are just now trying to understand what options exist for integration.
It is undeniable that integration with Salesforce is increasing, and extracting the valuable data in Salesforce is not always easy when you have to consider how best to do it in your own unique environment.
In this webinar, Big Compass and Confluent will talk about the various techniques for getting data out of Salesforce, and how Confluent and Kafka can play an integral role not only in brokering these messages at an incredibly fast and scalable rate, but also in making it very easy to exchange data with Salesforce.
YOU WILL LEARN:
What integration capabilities exist within Salesforce
How Confluent can be used to integrate with Salesforce
Techniques in Confluent for pub/sub, streaming, and building business logic using KSQL and Kafka Streams
Patterns of Salesforce integration in general and specifically with Confluent
Strengths and weaknesses of each pattern and scenarios where they work best
WHO SHOULD ATTEND:
IT leaders who are looking for the most efficient methods for integration with Salesforce
Developers/System Integrators who are interested in seeing Salesforce integration techniques
Anyone in the Salesforce ecosystem who is interested in integration
REASONS TO ATTEND:
Learn about methods of Salesforce integration and explore Confluent’s built-in capabilities if you're considering an off-the-shelf solution
An ode to the underrepresented and underused pattern of events and asynchrony in the design and development of Microservices.
Prepared by Saul Caganoff, and delivered by Saul at Melbourne Microservices, and by Yamen Sader at Sydney Microservices.
Data to Insight to Action
Today’s flat and wired world and increased competitiveness have shifted the focus more and more toward the customer. It is an absolute necessity that companies understand their customers well and address their needs proactively.
To download visit:
blog.cequitysolutions.com and www.cequitysolutions.com
These are the slides from the talk I gave at DevGeekWeek 2014.
Further details are on my blog: http://blogs.microsoft.co.il/iblogger/2014/06/25/devgeekweek-2014-slides-and-demos/
NATS was created by Derek Collison, founder and CEO
of Apcera, who has spent 20+ years designing, building, and using publish-subscribe messaging systems.
Unlike traditional enterprise messaging systems, NATS has an always-on dial tone that does whatever it takes to remain available. Learn how end users are building modern, reliable and scalable cloud and distributed systems with NATS.
Talk given by David Williams, Principal, Williams & Garcia
You can learn more about NATS at http://www.nats.io
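One concrete NATS concept worth illustrating is subject-based addressing: messages are routed by dot-separated subjects, where `*` matches exactly one token and `>` matches one or more remaining tokens. A small illustrative sketch of that matching rule (this is the routing idea only, not the NATS client API):

```python
# Illustrative sketch of NATS-style subject matching: subjects are
# dot-separated tokens; "*" matches exactly one token, ">" matches
# one or more trailing tokens.

def subject_matches(pattern: str, subject: str) -> bool:
    p_toks, s_toks = pattern.split("."), subject.split(".")
    for i, p in enumerate(p_toks):
        if p == ">":
            return len(s_toks) > i  # ">" must cover at least one token
        if i >= len(s_toks) or (p != "*" and p != s_toks[i]):
            return False
    return len(p_toks) == len(s_toks)

assert subject_matches("orders.*.created", "orders.eu.created")
assert subject_matches("orders.>", "orders.eu.created")
assert not subject_matches("orders.*", "orders.eu.created")  # "*" is one token only
```

This decoupled, topic-like addressing is a big part of why NATS suits loosely coupled distributed systems.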
To view a recording of this webinar, please use the URL below:
http://wso2.com/library/webinars/2015/09/event-driven-architecture/
Enterprise systems today are becoming dynamic: change is the norm rather than the exception. Such systems need to be loosely coupled, autonomous, versatile, and adaptive, and hence there arises a need to model them. Event-driven architecture (EDA) is how such systems can be modelled and explained.
This webinar will discuss:
The basics of EDA
How it can benefit your enterprise
How the WSO2 product stack complements this architectural pattern
Overcoming Data Gravity in Multi-Cloud Enterprise Architectures - VMware Tanzu
Enterprise architectures never sleep because cloud-first strategies must also become multi-cloud-first strategies. Public cloud providers such as Microsoft Azure are providing compelling services and pricing. And, most enterprises now consider their own datacenter a private cloud.
This is not a one-cloud playing field and enterprise architects must develop strategies, standards, and policies about how their data is being used, moved, and created across multiple cloud infrastructures.
Join Pivotal’s Jag Mirani and Mike Stolz, along with guest Forrester Vice President and Principal Analyst Mike Gualtieri, as they examine the trends driving multi-cloud adoption and, more importantly, how to architect technical solutions that let data roam among clouds safely.
Speakers:
Mike Gualtieri, VP, PRINCIPAL ANALYST, Forrester
Jag Mirani, Product Marketing, Data Services, Pivotal
Mike Stolz, Product Lead, GemFire, Pivotal
Cloud Migration headache? Ease the pain with Data Virtualization! (EMEA) - Denodo
Watch full webinar here: https://bit.ly/3CWIBzd
Moving data to the Cloud is a priority for many organizations. Benefits - in terms of flexibility, agility, and cost savings - are driving Cloud adoption. This journey to the Cloud is not easy: moving applications and data to the Cloud can be challenging and, when not carefully managed, entails disruption of the business.
When systems are being migrated, the resulting hybrid (or even multi-) Cloud architecture is, by definition, more complex, making it harder and more costly to retrieve the data we need.
Data Virtualization can help organizations at all stages of a Cloud journey - during migration as well as in our “new hybrid multi-Cloud reality”.
Watch on-demand this webinar to learn how Data Virtualization can:
- Help organizations manage risk and minimize the disruption caused as systems are moved to the Cloud
- Provide a single point of access for data that is both on-premise and in the Cloud, making it easier for users to find and access the data that they need
- Provide a secure layer to protect and manage data when it's distributed across hybrid or multi-Cloud architectures
… and watch a live demo of how to ease the migration.
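The "single point of access" idea above can be sketched abstractly. In this toy example (hypothetical data and function names, not Denodo's API), one virtual view joins an on-premise source and a cloud source at query time, without copying the data anywhere first:

```python
# Toy sketch of data virtualization: a single access layer federates
# an on-premise source and a cloud source on demand, so consumers
# never need to know (or care) where each piece of data lives.

ON_PREM_CUSTOMERS = {1: {"name": "Acme", "region": "EU"}}   # stand-in: on-premise DB
CLOUD_ORDERS = [{"customer_id": 1, "total": 120.0},          # stand-in: cloud store
                {"customer_id": 1, "total": 80.0}]

def customer_order_totals(customer_id: int) -> dict:
    """One virtual view joining both sources at query time."""
    customer = ON_PREM_CUSTOMERS[customer_id]
    total = sum(o["total"] for o in CLOUD_ORDERS
                if o["customer_id"] == customer_id)
    return {"name": customer["name"], "order_total": total}

# customer_order_totals(1) == {"name": "Acme", "order_total": 200.0}
```

During a migration, either source behind the view can move to the Cloud without changing the consumers, which is precisely the disruption-avoidance argument made above.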
Govern and Protect Your End User Information - Denodo
Watch this Fast Data Strategy session with speakers Clinton Cohagan, Chief Enterprise Data Architect, Lawrence Livermore National Lab & Nageswar Cherukupalli, Vice President & Group Manager, Infosys here: https://buff.ly/2k8f8M5
In its recent report “Predictions 2018: A year of reckoning”, Forrester predicts that 80% of firms affected by GDPR will not comply with the regulation by May 2018. Of those noncompliant firms, 50% will intentionally not comply.
Compliance doesn’t have to be this difficult! What if you have an opportunity to facilitate compliance with a mature technology and significant cost reduction? Data virtualization is a mature, cost-effective technology that enables privacy by design to facilitate compliance.
Attend this session to learn:
• How data virtualization provides a compliance foundation with data catalog, auditing, and data security.
• How you can enable single enterprise-wide data access layer with guardrails.
• Why data virtualization is a must-have capability for compliance use cases.
• How Denodo’s customers have facilitated compliance.
"Using Multi-Master data replication for the parallel-run refactoring", Myros... - Fwdays
It is the story of a brave team who decided to take on the parallel-run pattern, among others, and dared to introduce and operate a multi-master system as a temporary step in a refactoring process.
Expert IT analyst groups like Wikibon forecast that NoSQL database usage will grow at a compound rate of 60% per year over the next five years, and Gartner says NoSQL databases are one of the top trends impacting information management in 2013. But is NoSQL right for your business? How do you know which business applications will benefit from NoSQL and which won't? What questions do you need to ask in order to make such decisions?
If you're wondering what NoSQL is and whether your business can benefit from NoSQL technology, join DataStax for the webinar "How to Tell if Your Business Needs NoSQL". This to-the-point presentation will provide practical litmus tests to help you understand whether NoSQL is right for your use case, and supplies examples of NoSQL technology in action with leading businesses that demonstrate how and where NoSQL databases can have the greatest impact.
Speaker: Robin Schumacher, Vice President of Products at DataStax
Robin Schumacher has spent the last 20 years working with databases and big data. He comes to DataStax from EnterpriseDB, where he built and led a market-driven product management group. Previously, Robin started and led the product management team at MySQL for three years before they were bought by Sun (the largest open source acquisition in history), and then by Oracle. He also started and led the product management team at Embarcadero Technologies, which was the #1 IPO in 2000. Robin is the author of three database performance books and frequent speaker at industry events. Robin holds BS, MA, and Ph.D. degrees from various universities.
SpringPeople - Introduction to Cloud Computing
Cloud computing is no longer a passing fad. It is for real, and is perhaps the most talked-about subject. Various players in the cloud ecosystem have provided definitions closely aligned to their own sweet spot, be it infrastructure, platforms, or applications.
This presentation will expose participants to a variety of cloud computing techniques, architectures, and technology options, and will cover cloud fundamentals in a holistic manner spanning dimensions such as cost, operations, and technology.
Polyglot persistence for enterprise cloud applications - Lars Lemos
Presentation for a Master of Computer Applications thesis.
Describes the problems faced in developing an enterprise application, taking into consideration the CAP theorem and the resources provided by cloud computing.
Databases throughout and beyond the Big Data hype - Parinaz Ameri
NoSQL databases are one of the most successful technologies of the Big Data era. The database community has faced many challenges, even beyond Big Data, and presented several solutions. Despite all these achievements, distributed transactions with external consistency remain one of the hardest problems in computer science.
How to modernize an IT architecture with data virtualization? - Denodo
Watch: https://bit.ly/347ImDf
In the digital era, efficient data management is a fundamental factor in optimizing a company's competitiveness. However, most companies face data silos, which make data processing slow and costly. In addition, the speed, diversity, and volume of data can overwhelm traditional IT architectures.
How can data delivery be improved to extract its full value?
How can data be made available and usable in real time?
The experts at Vault IT and Denodo offer this webinar to show how data virtualization can modernize an IT architecture in a context of digital transformation.
Evolving From Monolithic to Distributed Architecture Patterns in the Cloud - Denodo
Watch full webinar here: https://goo.gl/rSfYKV
Gartner states in its report Predicts 2018: Data Management Strategies Continue to Shift Toward Distributed:
“As data management activities are becoming more widespread in both distributed processing use cases, like IoT, and demands for new types of data, emerging roles such as data scientists or data engineers are expected to be driving the new data management requirements in the coming two years. These trends indicate that both the collection of data as well as the need to connect to data are rapidly becoming the new normal, and that the days of a single data store with all the data of interest — the enterprise data warehouse — are long gone.”
Data management solutions are becoming distributed, heterogeneous and extremely diverse.
Attend this session to learn:
• How to evolve architecture patterns in the cloud using data virtualization.
• How data virtualization accelerates cloud migration and modernization.
• Successful cloud implementation case studies.
Presentation on general use cases of MongoDB in the Financial Services industry. In this presentation we discussed why MongoDB is well suited to large-dataset analytics, real-time processing, quant analysis, and other interesting aspects that make it ideal for FS projects.
Watch full webinar here: https://bit.ly/3puUCIc
What is Data Virtualization, and why should I care? In this webinar we help you understand not only what Data Virtualization is, but why it's a critical component of any organization's data fabric and how it fits in: how data virtualization liberates and empowers your business users, from data discovery and data wrangling to the generation of reusable reporting objects and data services. Digital transformation demands that we empower all consumers of data within the organization, and it demands agility too. Data Virtualization gives you meaningful access to information that can be shared by a myriad of consumers.
Watch on-demand this session to learn:
- What is Data Virtualization?
- Why do I need Data Virtualization in my organization?
- How do I implement Data Virtualization in my enterprise? Where does it fit?
Microservices as an evolutionary architecture: lessons learned - Luram Archanjo
Over the years the microservices architecture has been widely adopted, since it provides numerous advantages such as technological heterogeneity, scalability, and decoupling.
In this sense, the microservices architecture meets the definition of an evolutionary architecture: one designed for incremental changes, even changes of language.
In this lecture, we will discuss the decisions behind adopting frameworks and techniques such as Spring, Vert.x, gRPC, and event-driven architecture in a payment solution in which throughput and response time are crucial to the survival of the business.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 - Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
DevOps and Testing slides at DASA Connect - Kari Kakkonen
Slides from my and Rik Marselis' talk at the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We ended with a lovely workshop in which participants tried to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
Key Trends Shaping the Future of Infrastructure.pdf - Cheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud, and open source: how these areas are likely to mature and develop over the short and long term, and how organisations can position themselves to adapt and thrive.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology is pushing into IT I was wondering myself, as an “infrastructure container kubernetes guy”, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefit’s both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
2. Ryan Knight
● CEO / CTO of Grand Cloud - Boutique consulting company working
at the intersection of Distributed Systems and Data Engineering
● Experience ranges across traditional software development and
architecture to sales engineering, consulting, solution architecture
and developer advocacy.
● Worked across a wide range of companies, from small startups such as
Lightbend and DataStax to large corporations such as Starbucks
and Capital One.
● Consulting Experience spans over 50 companies and 10 Countries
● Currently Consulting at Brighthouse Financial
3. Distributed System Design
Heart of distributed system design is a requirement
for a consistent, performant, and reliable way of
managing data - Jonas Bonér
4. Cloud Native -> New Requirements
Users: 1 million+ Data volume: TB–PB–EB
Locality: Global
Performance: Milliseconds–microseconds
Request rate: Millions
Access: Web, mobile, IoT, devices
Scale: Up-down, Out-in
Economics: Pay for what you use
Developer access: No assembly required
9. Challenges with Application Tier Consistency
● Consistency problems are far harder to solve in the application tier
● Increased Corner Case Bugs
○ Consistency is really hard to get right in the Application Tier!
○ Consistency is really hard to test and verify
● Increased Complexity
10. Business Impact of Consistency
● Travel Booking of Flight, Hotel, etc. - Inconsistencies could either
lead to double bookings or lost bookings.
● Rewards Program - Very difficult to prevent fraudulent redemptions.
Potential for monetary loss.
● Physical Allocation of Resources vs. Digital Realm
● Inventory / Limited Sales
11. Direct Business Value of “Strong Consistency”
● Increases accuracy of sales and reduces lost business revenue
● Cost Savings with reduced operational complexity and increased
visibility into business operations.
● Weak Consistency is a Security Concern - Possible financial loss
from inconsistent views of data.
● ACIDRain Attack - Todd Warszawski, Peter Bailis
○ 22 critical ACIDRain attacks that allow attackers to corrupt store
inventory, over-spend gift cards, and steal inventory
○ Bankrupted a popular Bitcoin exchange
12. Eventual Consistency
● Internet of Things
● Media
● Retail
● Real-time Analytics
● Time-Series
● Monitoring
● Customer 360
Strong Consistency
● Financial Transactions
● Rewards Programs
● Inventory Management
● Global Meta-Data
● Travel Reservations
● Gaming
● Billing / Payments
● Ad Tech
14. Challenges with Understanding Consistency
● Lots of Definitions of Consistency
● Consistency in ACID is about enforcing invariants
○ Data must be valid according to all defined rules
○ Not the consistency we are looking for
● "Strong consistency" - term used to differentiate full
consistency from weaker levels of consistency such as
causal or session consistency.
15. Consistency Challenges
Dirty Reads - Read Uncommitted Write
Read Skew / Non-Repeatable Reads
Read your own Writes
Lost Updates
Write Skew
16. Write Skew
Two concurrent transactions each determine what they are writing
based on reading a data set which overlaps what the other is writing.
Credit: begriffs.com
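The write-skew anomaly can be sketched in a few lines of Python. This is a toy simulation, not tied to any particular database: two "transactions" read the same snapshot, each verifies the invariant ("at least one doctor on call") against that stale snapshot, and their combined commits violate it.

```python
# Toy simulation of write skew under snapshot isolation. Invariant:
# at least one doctor must remain on call. Each transaction checks
# the invariant against its own (stale) snapshot before writing.

db = {"alice": True, "bob": True}  # True = on call

def go_off_call(snapshot, doctor):
    """Decide on a write using only the stale snapshot."""
    others_on_call = sum(v for k, v in snapshot.items() if k != doctor)
    if others_on_call >= 1:          # invariant check passes...
        return {doctor: False}       # ...so request to go off call
    return {}

# Both transactions take their snapshot BEFORE either commits.
snap_t1 = dict(db)
snap_t2 = dict(db)

writes = {}
writes.update(go_off_call(snap_t1, "alice"))  # sees bob still on call -> OK
writes.update(go_off_call(snap_t2, "bob"))    # sees alice still on call -> OK

db.update(writes)  # both commit: nobody is on call any more
assert not any(db.values())  # invariant "someone on call" is broken
```

Each transaction was locally correct; only the overlap between what one read and the other wrote breaks the invariant, which is why write skew slips past weaker isolation levels.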
17. Consistency Models
Credit to Peter Bailis and Aphyr at jepsen.io
http://www.bailis.org/blog/linearizability-versus-serializability/
18. Linearizability
● Guarantees that the order of reads and writes to a single
register or row will always appear the same on all nodes.
● Appearance that there is only one copy of the data.
● It doesn’t group operations into transactions.
● Guarantees read-your-writes behavior.
19. Linearizable Consistency in CAP
● CAP Theorem is about “atomic consistency”
● Atomic consistency refers only to a property of a single
request/response operation sequence.
● Strong Consistency in CAP is Linearizability
20. Serializable Consistency
● Transaction Isolation
● Database guarantees that two transactions have the same
effect as if they were run serially.
● Multi-operation, multi-object, arbitrary total order
21. Strict Serializability
● Linearizability plus Serializability provides Strict
Serializability
● Highest level of Consistency
● Guarantee ordering and transaction isolation
22. Linearizable vs. Serializable Consistency
● Serializability - multi-operation, multi-object, arbitrary total order
● Linearizability - single-operation, single-object, real-time order
● Strict Serializability - Linearizability plus Serializability
Peter Bailis - Linearizability versus Serializability
23. No One Solution to Consistency
● Do you want your data right or right now? - Pat Helland
● PACELC Theorem -> More than CAP
○ In the absence of network partitions the trade-off is
between latency and consistency - Daniel Abadi
● Evaluate trade-offs in the differing approaches
25. From Monolith to Microservices to Serverless
● Data Consistency was easy in a monolithic application -
single source of truth w/ ACID transactions
● With the move to microservices, each service becomes a bounded
context that owns and manages its data.
● Data Consistency became very difficult w/ microservices
● Serverless increases the complexity even more
26. Consistency Challenges with Data in Microservices
● Traditional ACID transactions did not scale
● Data orchestration between multiple services - Number of
Microservices Increases Number of Interactions
● Stateful or Stateless
● Data rehydration for things like service failures and rolling
updates.
27. Popularity of Eventual Consistency
CAP Theorem
• Forces a choice between Global Scale and Strong Consistency
Eventual Consistency
• Sacrificed consistency for availability and partition tolerance.
• Really a Necessary Evil
• Write now and figure it out later
Pushed complexity of managing consistency to application tier
29. Value of Consistency in the Database
● Decrease Application Tier Complexity
● Reduce Cognitive Overhead
● Increased Developer Productivity
● Increased Focus on Business Value
● Most implementations also provide strong atomicity and isolation
● Push complexity of consistency back to the database
● Not a panacea for all data consistency challenges
30. Case Study - AdStage
● Recently migrated from Cassandra to Postgres
● Leverage Postgres DB Transactions
● Found Postgres to be extremely capable, with advanced
data model and query capabilities
● Significant decrease in application and operational
complexity
● Significantly reduced operational costs
31. Leveraging DB Consistency
● Ledger Pattern with Compare and Swap Like Operation
● Application reads latest ledger id from DB
● Application makes an update with what it thinks is the latest
ledger id plus one
● DB transaction / stored procedure to read the last ledger id and
make the update if the ledger id is greater than the last entry
● If update fails DB returns correct Ledger ID
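A minimal sketch of this compare-and-swap ledger pattern, using SQLite purely for illustration (the table and column names are hypothetical; a production version would run the read-check-insert inside a single database transaction or stored procedure, as the slide describes):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ledger (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO ledger VALUES (1, 100)")  # initial ledger entry
conn.commit()

def cas_append(conn, expected_id, new_balance):
    """Append the next ledger entry only if expected_id is still the latest.

    In a real database this read-check-insert would run inside one
    transaction or stored procedure so concurrent writers cannot interleave.
    """
    latest = conn.execute("SELECT MAX(id) FROM ledger").fetchone()[0]
    if latest != expected_id:
        return False, latest          # stale: report the correct ledger id
    conn.execute("INSERT INTO ledger VALUES (?, ?)",
                 (expected_id + 1, new_balance))
    conn.commit()
    return True, expected_id + 1

ok, new_id = cas_append(conn, 1, 90)  # succeeds: 1 was the latest id
assert ok and new_id == 2
ok, latest = cas_append(conn, 1, 80)  # fails: our read of "latest" is stale
assert not ok and latest == 2
```

The failed call returns the correct latest ledger id, so the application can re-read, re-derive its write, and retry, rather than silently overwriting a newer entry.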
32. Traditional / Hybrid NoSQL DB’s
● Cloud-operated relational DBs are a re-emerging trend.
● Cloud SQL w/ Postgres or MySQL
● AWS Aurora - Amazon re-designed MySQL as a
cloud-native relational database
● AWS DynamoDB w/ Transactions - multi-object transactions,
limited to a single region
33. Next Generation Databases
● Google Spanner - Horizontally scalable, globally consistent, relational database
service. Relies on proprietary atomic clocks and a low-latency network.
● CockroachDB & YugaByte - Open-source takes on Spanner, with two-phase
commits and hybrid logical clocks
● Fauna - Single Phase Commit with no hard dependency on clocks
● FoundationDB - Serializable Optimistic MVCC concurrency. Loosely based on
Google Percolator
● TiDB - Hybrid Transactional and Analytical Processing (HTAP) workloads.
Features “horizontal scalability, strong consistency, and high availability.”
● Microsoft Azure Cosmos DB - Configurable consistency guarantees
34. Transactions are hard. Distributed transactions are
harder. Distributed transactions over the WAN are
final boss hardness. I'm all for new DBMSs but
people should tread carefully. - Andy Pavlo
New Generation / Global Transactional Databases
35. Not All Global Databases are the Same
● Differences in Transaction Protocol
● Global Ordering Done in a Single Phase vs. Multi-Phase
● Pre or Post Commit Transaction Resolution
● Different levels of consistency
● Maximum scope of a transaction - Single Record vs. Multiple
Records
● Geographic limits of transactions - Single Region vs. Global?
● Storage Layer is an entirely other discussion beyond the
transaction protocol. Large impact on performance and stability!
36. Consistency and the ACID Spectrum
● Weakest end: weak isolation level, eventually consistent, scope of
transaction limited to a single row
● Strongest end: strongest isolation level, serializable consistency, scope of
transaction distributed across partitions
37. Consistency Levels in Next Gen Databases - 1/2
● Google Spanner - External strong consistency across rows, regions, and
continents.
● Yugabyte - snapshot isolation, not serializability yet, writes must go to
partition leaders. Reliance on hybrid clocks makes it difficult to run in
virtualized environments.
● Cockroach - serializability but not strict serializability, reads and writes must
go to partition leaders, no replica reads allowed
38. Consistency Levels in Next Gen Databases - 2/2
● TiDB - read-committed within a datacenter, no serializability, timestamp oracle
must issue leases for all write transactions, replica reads unclear
● FoundationDB: Serializable Snapshot Isolation and strictly serializable within a
datacenter, timestamp oracle must issue leases for all serializable reads and
all writes, snapshot reads possible
● FaunaDB - Global pre-ordering of transactions provides strict serializable consistency
● Azure Cosmos DB - Five consistency models allow the developer to choose between
latency and consistency. The highest level is strong consistency with
linearizability guarantees. Doesn’t seem to be strict serializable?
41. Advantages of Application Tier Consistency
● Low Read / Write Latency
● High-Throughput
● Read your Writes - Same session only
● Requires application to enforce session stickiness
42. Disadvantages of Application Tier Consistency
● Consistency problems are far harder to solve in the
application tier
● Increased Complexity
● No Isolation and limited atomicity
● Corner Case Bugs - Consistency is really hard to test and
verify
● No magic pattern or technology that you can sprinkle on
data to make it consistent.
43. Options for Application Tier Consistency
● Serialization Points - i.e. Kafka Consumers pinned to session id’s.
● Akka Clustering - Stateful Services pinned to a client id.
● CRDT - Conflict Free Replicated Data Types, i.e. Associative
Counters. Data must be of a certain shape to work.
● Event Sourcing / Append Only Logging with Aggregates for running
totals. Hard to provide consistency guarantees across aggregates.
● Saga Pattern - Builds on Event Sourcing and uses a Central
Coordinator to manages complex transaction logic. Relies heavily on
idempotent services that can roll back transactions in the face of
failures.
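To illustrate the event-sourcing option above, here is a minimal Python sketch (the event names and aggregate are hypothetical): state lives in an append-only log of events, and the aggregate - a running balance - is derived by folding over the log rather than stored as mutable state.

```python
# Minimal event-sourcing sketch: an append-only log of events is the
# source of truth; the aggregate (a running balance) is derived by
# replaying the log rather than kept as mutable state.

events = []  # append-only event log

def append(event):
    events.append(event)  # events are never updated or deleted

def balance():
    """Rebuild the aggregate by replaying every event in order."""
    total = 0
    for kind, amount in events:
        total += amount if kind == "deposit" else -amount
    return total

append(("deposit", 100))
append(("withdraw", 30))
assert balance() == 70  # aggregate derived purely from the log
```

Within one aggregate this gives a clear, replayable history; as the slide notes, the hard part is providing consistency guarantees that span multiple aggregates.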
45. CRDTs
● CRDT - Conflict Free Replicated Data Types
● Data types that guarantee convergence to the same value without any
synchronization mechanism
● Consistency without consensus
● Avoid distributed locks, two-phase commit, etc. - the data structure
itself determines how the final value is computed
● Sacrifice linearizability (guaranteed ordering) while remaining correct
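The classic introductory CRDT is the grow-only counter (G-Counter); a minimal sketch follows. Each replica increments only its own slot, and merge takes the element-wise maximum, so merges are commutative, associative, and idempotent - replicas converge without any coordination.

```python
# G-Counter CRDT sketch: per-replica counts, merged by element-wise max.
class GCounter:
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}                 # replica_id -> count

    def increment(self, n=1):
        # A replica only ever increments its own slot.
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def merge(self, other):
        # Element-wise max: commutative, associative, idempotent.
        for rid, c in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), c)

    def value(self):
        return sum(self.counts.values())

a, b = GCounter("a"), GCounter("b")
a.increment(3)        # replica a counts 3 increments
b.increment(2)        # replica b counts 2, concurrently
a.merge(b)            # exchange state in either order...
b.merge(a)
assert a.value() == b.value() == 5  # ...and both converge to 5
```

This is the sense in which the data must be "of a certain shape": the merge function has to be a join (max, union, etc.), which is why counters and sets fit the model while arbitrary read-modify-write logic does not.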
46. Overview of Saga Pattern
● Central Coordinator
● Manages Complex Transaction Logic
● State managed in a distributed log
● Split work into idempotent executors / requests
● Requires compensating transactions for dealing with failures /
aborting transaction
● Effectively Once instead of Exactly Once
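The coordinator loop above can be sketched in a few lines of Python (a toy, in-process version with hypothetical step names; a real saga would persist its progress to a distributed log and call remote, idempotent services):

```python
# Toy saga coordinator: each step pairs an action with a compensating
# action. On failure, compensations for completed steps run in reverse.
def run_saga(steps):
    completed = []
    for action, compensation in steps:
        try:
            action()
        except Exception:
            for comp in reversed(completed):
                comp()               # roll back earlier steps
            return False
        completed.append(compensation)
    return True

log = []

def reserve_flight():  log.append("reserve flight")
def cancel_flight():   log.append("cancel flight")
def reserve_hotel():   raise RuntimeError("hotel full")  # simulated failure
def cancel_hotel():    log.append("cancel hotel")

ok = run_saga([(reserve_flight, cancel_flight),
               (reserve_hotel, cancel_hotel)])
assert ok is False
assert log == ["reserve flight", "cancel flight"]
```

Note that between the flight reservation and its compensation, other transactions could observe the partially committed state - which is exactly the weak-isolation problem the next slide raises.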
47. The Challenges with the Saga Pattern
● Consistency is reliant on the consistency of the distributed log
● Limited Consistency
● Weak Isolation
● No Guaranteed Atomicity - Unsafe partially committed states
● Complexity with versioning of Saga Logic
● Increased application complexity
● Rollback and recovery logic required in application tier
● Idempotency impossible for some services
● Effectively Once instead of Exactly Once
51. Spanner
● External consistency, an isolation level even stricter than strict serializability
● Relational integrity constraints
● 99.999% availability SLA
● Uses global commit timestamps to guarantee ordering of transactions via the
TrueTime API.
● Multiple Shards with 2PC
● Single-shard writes avoid 2PC / read-only transactions also avoid 2PC
● No Downtime upgrades - Maintenance done by moving data between nodes
● Downside is cost and some limitations to the SQL model and schema design
52. CockroachDB
● Open source Database Inspired by Spanner
● Hybrid Logical Clock similar to a vector clock for ordering of transactions
● Challenges with clock skew - waits up to 250 ms on reads
● Provides linearizability on single key and overlapping keys
● For transactions that span disjoint sets of keys it provides only serializability,
not linearizability
● Some edge cases cause anomalies called “causal reverse” - Jepsen
● “Enterprise-only” features like row-level replication zones
● Eases migration by supporting PostgreSQL syntax and drivers; however, it does
not offer exact compatibility.
53. YugaByte
● Another Database Inspired by Spanner that relies on Hybrid Logical Clocks
● Currently only supports snapshot isolation
● Serializable isolation level work in progress
● Distributed Transactions to multiple partitions require a provisional record or
temporary table
54. FaunaDB - Consistency without Clocks
● Transaction resolution based on the Calvin protocol - pre-ordering of transactions
before commit
● Global transaction ordering provides serializable consistency
● Transactions can include multiple rows - not restricted to data in a single row or
shard
● Distributed log based algorithm scales throughput with cluster size by partitioning
the log
● Low Latency Snapshot Reads
● Proprietary Query Language with a high learning curve
● Optimistic concurrency model can cause a high number of failures under highly
contended workloads
55. References
● Bla-bla-microservices-bla-bla http://jonasboner.com/bla-bla-microservices-bla-bla/
● Aphyr Strong consistency models -
https://aphyr.com/posts/313-strong-consistency-models
● Achieving ACID Transactions in a Globally Distributed Database from FaunaDB
● Peter Bailis - Linearizability versus Serializability
● Calvin: fast distributed transactions for partitioned database systems