The document provides an overview of governance for OpenMAMA, an open source integration layer for capital markets. It notes existing pain points such as incumbent lock-in, increasing complexity, and limited resources. OpenMAMA aims to address these issues by providing a flexible, vendor-neutral platform that reduces costs and complexity while allowing collaboration. The governance structure includes a steering committee for leadership and a technical committee for implementation. Participation is encouraged to help guide the project.
PaNDA - a platform for Network Data Analytics: an overview (Cisco DevNet)
A session in the DevNet Zone at Cisco Live, Berlin. PaNDA is a platform for data aggregation and distribution which can be used for data analytics applications being developed at Cisco. PaNDA was incubated in Intercloud and is now being further developed for the Virtual Managed Services (VMS) solution and other Cisco solutions. The session details why we need a platform for OSS analytics and how we tackle that problem.
Stream processing consists of ingesting and processing continuously generated data, often from end users in web applications or from more challenging settings where devices such as servers and sensors generate events at a high rate. Such scenarios often demand the use of a software stack that is able to scale and accommodate changes to the characteristics of the application.
One of the major challenges with processing data streams is adapting to workload variations (e.g., due to daily cycles or the growth of the population of sources). Systems to ingest stream data typically parallelize it by sharding the incoming messages and events according to a routing key. Having the ability to parallelize ingestion is very effective, but future changes to the workload (which are very often unknown beforehand) might make the initial choice for the degree of parallelism inadequate for even short-term spikes. Consequently, the ability to scale by adapting parallelism according to workload while preserving important API properties, such as per-key order, is highly desirable to handle mission-critical workloads.
In this presentation, we explain how to accommodate changes to workloads in and with Pravega, an open source stream store built to ingest and serve stream data. Pravega primarily manipulates and stores segments (append-only byte sequences), forming streams by creating and composing segments, which it uses to enable the scaling of streams. Stream scaling in Pravega is automatic and transparent to the application, but such a change to the ingestion volume might also require the application to follow and scale its resources downstream (e.g., the operators of an Apache Flink job) to accommodate the new ingestion volume. Pravega signals such changes to the application so that it can react accordingly. The cooperation between Pravega and the downstream application is crucial for building an effective stream data pipeline.
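Since the abstract leans on routing-key sharding and per-key order, here is a minimal, hypothetical Java sketch of that idea. The names and the hash-mod scheme are invented for illustration and are not Pravega's API; they only show why a fixed key-to-shard mapping preserves per-key order, and why rescaling disturbs it.

```java
// Illustrative sketch of routing-key sharding (not the Pravega API).
// Events with the same key always hash to the same shard, so consuming
// each shard in arrival order preserves per-key order.
public class KeyedSharding {

    // Map a routing key onto one of `shards` parallel ingestion pipes.
    static int shardFor(String routingKey, int shards) {
        return Math.floorMod(routingKey.hashCode(), shards);
    }

    public static void main(String[] args) {
        String[] keys = {"sensor-1", "sensor-2", "sensor-1"};
        int before = 4;  // initial degree of parallelism
        int after = 8;   // after a hypothetical scale-up

        for (String key : keys) {
            System.out.printf("%-8s shard %d -> shard %d%n",
                    key, shardFor(key, before), shardFor(key, after));
        }
        // Rescaling can move a key to a different shard, which is why a
        // stream store that rescales must hand over key ranges carefully
        // and, as described above, signal downstream consumers, rather
        // than silently re-hashing.
    }
}
```

The point of the sketch is only that the key-to-shard mapping is the ordering contract; anything that changes the shard count has to renegotiate that mapping, which is exactly the coordination the talk describes between Pravega and its downstream consumers.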
If your business is heavily dependent on the Internet, you may be facing an unprecedented level of network traffic analytics data. How to make the most of that data is the challenge. This presentation from Kentik VP Product and former EMA analyst Jim Frey explores the evolving need, the architecture and key use cases for BGP and NetFlow analysis based on scale-out cloud computing and Big Data technologies.
How to scale your PaaS with OVH infrastructure? (OVHcloud)
ForePaaS has developed an “as-a-service” platform which lets you automate an infrastructure designed for analytical applications. The company has formed a cloud partnership with OVH in order to deliver flexible solutions for containerised and high-performance tools, such as Kubernetes and Docker.
Innovation in the Enterprise Rent-A-Car Data Warehouse (DataWorks Summit)
Big Data adoption is a journey. Depending on the business the process can take weeks, months, or even years. With any transformative technology the challenges have less to do with the technology and more to do with how a company adapts itself to a new way of thinking about data. Building a Center of Excellence is one way for IT to help drive success.
This talk will explore Enterprise Holdings Inc. (which operates the Enterprise Rent-A-Car, National Car Rental and Alamo Rent A Car brands) and its experience with Big Data. EHI’s journey started in 2013 with Hadoop as a POC, and today the company is working to create its next-generation data warehouse in Microsoft’s Azure cloud using a lambda architecture.
We’ll discuss the Center of Excellence, the roles in the new world, share the things which worked well, and rant about those which didn’t.
No deep Hadoop knowledge is necessary; the talk is pitched at the architect or executive level.
In this session we'll look at a number of different organisations on their big data cybersecurity journey with Apache Metron. We'll take a look at the different use cases they are investigating, the data sources they used, the analytics they performed and, in some cases, the results they were able to find.
We'll also spend some time talking about the common themes in these projects. There are some common approaches to adopting Apache Metron as a phased project, so we'll review some of the common pitfalls and give some concrete suggestions about the things you should (and shouldn't) do when you're getting started.
Finally, we'll tackle some of the key FAQs that come up when people first investigate using Apache Metron in the real world, based on over a year of interacting with customers and prospects as they look deeper into Apache Metron to see how it fits into their cybersecurity portfolio.
Speaker
Dave Russell, Principal Solutions Engineer, Hortonworks
Kentik Detect Engine - Network Field Day 2017 (gvillain)
Dan Ellis (CTO@Kentik) presents and discusses the technology and platform behind Kentik Detect Engine.
Links to the video of the presentation: https://kentik.com/nfd14
The process of streaming real-time data from a wide variety of machine data sources and entities can be very complex and unwieldy. Using an agent-based approach, Informatica has invented a new technique and open access product that makes this process much more user friendly and efficient, even when dealing with multiple environments such as Hadoop, Cassandra, Storm, Amazon Kinesis and Complex Event Processing.
Big Data Day LA 2016/ Hadoop/ Spark/ Kafka track - Why is my Hadoop cluster s... (Data Con LA)
This talk draws on our experience in debugging and analyzing Hadoop jobs to describe some methodical approaches to this and present current and new tracing and tooling ideas that can help semi-automate parts of this difficult problem.
Lightning Fast Analytics with Hive LLAP and Druid (DataWorks Summit)
Cox Communications, one of the largest network providers in the U.S., is primarily focused on ensuring network security and providing better service to customers, including:
• Real-time monitoring of IP security traffic to identify and alert on unusual network activity across interfaces within the organization
• Enriching the security team with capabilities to determine the source and destination of traffic, class of service, and the causes of congestion from NetFlow data
Challenges:
Data related to network security includes highly granular streaming data. The major challenge lies in having a unified platform to perform data cleansing, transformation, analytics and reporting on these huge streaming datasets. As network traffic grows, the associated data grows exponentially, so a scalable framework is needed to handle these datasets and derive useful information from them. Along with data processing, data retrieval also plays a major role in analysis. Previously, data processing was done in a daily batch using manual Python scripts and custom data structures specific to each use case. A more generic, unified framework was needed to provide an automated, real-time, end-to-end solution delivering high-performing, more granular business results.
Solution:
Automation of this process has opportunities on several fronts, notably providing consistency, repeatability, and modernization of OLAP analytics on an enterprise big data platform. Reports can be generated more easily and faster with the underlying OLAP engine.
• A modern big data platform provides the necessary tools and infrastructure to land, cleanse, process and enrich real-time stream data using ecosystem components such as Spark, Kafka and Hive
• Impressively faster OLAP analytics using the Hive LLAP and Druid integration
• Simple and faster reporting using Superset
All of the necessary components live under one roof: the Hortonworks Hadoop Platform.
An end-to-end solution on the big data platform produced faster, repeatable results with sub-second queries.
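To give a purely illustrative flavor of the "land, cleanse, enrich" step described above, here is a minimal Java sketch using the standard Kafka consumer API. The broker address, topic name, record format and enrichment rule are all invented for this example; they are not from the talk.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// Illustrative only: consume raw NetFlow-style CSV records, drop malformed
// lines (cleanse) and tag each flow with a coarse service class (enrich).
public class FlowEnricher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker
        props.put("group.id", "flow-enricher");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("netflow-raw")); // hypothetical topic
            while (true) { // run until the process is stopped
                ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> r : records) {
                    String[] f = r.value().split(",");   // src,dst,dstPort,...
                    if (f.length < 3) continue;          // cleanse: skip malformed
                    String svc = "443".equals(f[2]) ? "web" : "other"; // enrich
                    System.out.printf("%s -> %s [%s]%n", f[0], f[1], svc);
                }
            }
        }
    }
}
```

In a real pipeline of the kind the abstract describes, the enriched records would be written onward (e.g., into Hive or Druid) rather than printed, but the cleanse-then-enrich shape is the same.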
Value additions from the above solution:
• Deliver ultra-fast SQL analytics that the security engineering team can consume from their BI tool to get accelerated business results
• Opportunity for business users to explore and visualize real-time streaming datasets, with integration for various data sources, and to build dashboards for different slices
• Capability to run BI queries in just milliseconds over a 1 TB dataset
• Highly granular permission model on security datasets that allows intricate access rules for the datasets
We will show a case study of moving from a SQL Server DWH to Hadoop and Vertica. In this case study you will see an implementation of the Lambda Architecture with Hadoop, Vertica and MongoDB for real-time statistics. We will start by showing the legacy system and describing the problems we encountered. From there we will cover the decision making behind the technology choices for the current solution. We will finish by presenting the next steps of our Data Platform solution.
Enterprise IIoT Edge Processing with Apache NiFi (Timothy Spann)
April 5, 2018 IoT Fusion 2018 Conference in Philadelphia, PA hosted by Chariot Solutions. This talk is about Apache NiFi, MiniFi, Python, Deep Learning, NVidia Jetson TX1, Raspberry Pi, Apache MXNet, TensorFlow and how to run things at the edge and process in your big data center. http://iotfusion.net/session/ https://github.com/tspannhw/IoTFusion2018Talk
OSSF 2018 - Greg Olson of Open Source Sense - Building Mission- and Business-... (FINOS)
Today, open source dominates IT and communications infrastructure from the cloud to corporate data centers and the emerging edge. But open source with its rapid pace of development, frequent releases, and prolific patch set defies traditional practices and conditions for building mission- and business-critical software: stability, auditability and standards-compliance.
This talk will examine how companies address this "impedance mismatch" in consuming, integrating and deploying open source in applications that demand predictability and sustainability. In particular, the presentation will cover:
(re)defining mission- and business-critical in the context of open source
technology-centric and process-based approaches to OSS-derived product life-cycles
forking and minimizing technical debt
building community visibility to support derived product roadmaps
#OSSPARIS19 - Understanding Open Source Governance - Gilles Gravier, Wipro Li... (Paris Open Source Summit)
Strategy, risks tied to adopting open source... How a strong governance model can make your open source journey as effective as possible.
The key to a successful project is being able to quickly and effectively identify the quality of the application under test.
For a multi-shore project this can be achieved with automation and test frameworks, an agile integrated testing model, and visibility and communication across the process.
Implementing a distributed agile framework with Scrum, XP and effective tools usage in DevOps. C. Padma presented this during India Agile Week 2015, Bangalore.
Embarking on a software development journey as a startup can be a thrilling yet daunting experience. It's a path filled with twists, turns, and challenges that can make or break your success. But fear not, for there are solutions and proven strategies that can help you achieve your goal of successful product development. Join us on this exciting adventure as we explore the secrets to unlocking your startup's full potential.
Critical steps in Determining Your Value Stream Management Solution (DevOps.com)
In order to increase your delivery velocity, you must find, identify and solve the bottlenecks in delivery. Value stream management solutions capture metrics and processes, helping guide your digital transformation journey.
Join Marc Hornbeek, Principal Consultant, and Jeff Keyes from Plutora as they discuss a methodology for determining a value stream management solution for your organization. It consists of critical steps including a review of VSM assessments, future-state value stream mapping, road-mapping the VSM transformation, and more. Following these steps provides a logical and comprehensive approach to determining a value stream management solution that fits your organization’s requirements.
What will be learned:
WHY – is following these steps to determine a VSM solution important?
HOW – are VSM solutions determined?
WHAT – is the expected outcome of a Value Stream Management solution recommendation?
Maximize Your Enterprise DevOps Efforts and Outcomes with Value Streams (DevOps.com)
Enterprise software organizations need to modernize and transform their software development processes to gain and keep their competitive advantage by accelerating delivery of business value to customers to meet the market and customer demands. Cloud services, big data/analytics, mobile devices and apps, artificial intelligence, automation and other emerging innovations can help businesses achieve this success.
Join the editor-in-chief of DevOps.com, Alan Shimel, and Eric Robertson, Vice President of Product Management at CollabNet, in a live chat session that will provide the valuable insights enterprises need from Value Stream Management (VSM) to stay ahead in today’s market. You will learn more about:
How VSM Relates to DevOps
How VSM Benefits Business and Technology Stakeholders
The Specific Advantages of VSM
How VSM Applies to the Emerging Internet of Things
The high-level product journey in the mind of PMs.
* Understanding the scope of the area and strategy pillars
* Approach to stakeholder management and governance
* Building the digital product roadmap
* Launching MVP
* Approach to optimize the product
* Measuring ROI
* Problem solved?
* What to build next
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Solutions Apricot) (Jeffrey Haguewood)
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 (Albert Hoitingh)
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Accelerate your Kubernetes clusters with Varnish Caching (Thijs Feryn)
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Generating a custom Ruby SDK for your web service or Rails API using Smithy (g2nightmarescribd)
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... (BookNet Canada)
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, combined with traditionally slow and manual security checks, has created gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface of their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Key Trends Shaping the Future of Infrastructure (Cheryl Hung)
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... (UiPathCommunity)
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
DevOps and Testing slides at DASA Connect (Kari Kakkonen)
Slides by me and Rik Marselis from the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We finished with a lovely workshop in which participants explored different ways to think about quality and testing in different parts of the DevOps infinity loop.
2. The Problem
1. The primary incumbents have you locked into their technology.
2. Increasing data sources and disparate technologies are driving up complexity.
3. New and increasing regulation is forcing change.
4. Massive cost cutting is required, unlike ever before.
5. Resourcing constraints are limiting your ability to innovate and address technology challenges.
3. OpenMAMA as an Open Platform
An open platform enabling the capital markets: an open source integration layer.
Existing pain points:
§ Increasing Regulation
§ Increasing TCO
§ Limited resourcing
§ Incumbent Lock-in
§ Increasing Complexity
The Result:
§ A flexible platform
§ Reduced TCO and time to market
§ Pool resources across community
§ Remove Incumbent Lock-In
§ Reduced Complexity
Compete on innovation; collaborate on the non-differentiating technology.
[Diagram: a multi-vendor open platform, OpenMAMA, built on vendor neutrality, community collaboration and innovation.]
4. What is MAMA and OpenMAMA?
§ The Open Middleware Agnostic Messaging API
§ Fully open source and licensed under the LGPL 2.1
§ Supports a variety of MOM platforms
§ Provides a high-performance, consistent abstraction layer
§ Governed by a group of industry stakeholders
§ Over 10 years production usage inherited from MAMA
§ Hosted through the Linux Foundation at www.openmama.org
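To make the "abstraction layer" bullet concrete, here is a minimal Java sketch of the bridge pattern OpenMAMA is built around. The interface and class names are invented for illustration and are not the real OpenMAMA API (which is a C library with C++/Java/.NET wrappers): the point is only that applications code against one neutral interface while a named bridge adapts it to a concrete MOM.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

// Hypothetical middleware-agnostic interface (NOT the OpenMAMA API).
interface MiddlewareBridge {
    void publish(String topic, byte[] payload);
    void subscribe(String topic, Consumer<byte[]> onMessage);
}

// A trivial in-process bridge standing in for a real transport (e.g. AMQP).
// For brevity it supports a single subscriber per topic.
class InProcessBridge implements MiddlewareBridge {
    private final Map<String, Consumer<byte[]>> subs = new ConcurrentHashMap<>();
    public void publish(String topic, byte[] payload) {
        Consumer<byte[]> cb = subs.get(topic);
        if (cb != null) cb.accept(payload);
    }
    public void subscribe(String topic, Consumer<byte[]> onMessage) {
        subs.put(topic, onMessage);
    }
}

public class AbstractionLayerDemo {
    // The application never names a vendor: the bridge is chosen by
    // configuration, so swapping middleware does not change app code.
    static MiddlewareBridge loadBridge(String name) {
        if ("inprocess".equals(name)) return new InProcessBridge();
        throw new IllegalArgumentException("unknown bridge: " + name);
    }

    public static void main(String[] args) {
        MiddlewareBridge bridge = loadBridge("inprocess");
        bridge.subscribe("MARKET.DATA.IBM",
                bytes -> System.out.println(new String(bytes)));
        bridge.publish("MARKET.DATA.IBM", "bid=142.10 ask=142.12".getBytes());
    }
}
```

In the real project this role is played by loadable middleware bridges (for example the Qpid/AMQP bridge shown on the architecture slide), which is what lets the same application run over different vendors' transports.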
5. OpenMAMA Architecture
[Architecture diagram. Legend: open source (including OpenMAMA) vs. 3rd-party commercial products. Open source: OpenMAMA (C), OpenMAMA (C++/Java/.NET), OpenMAMDA (C++/Java/.NET) and a Qpid/AMQP bridge. 3rd-party commercial: middleware and message bridges (Wombatmsg message bridge, Informatica LBM, Solace Appliance, SR Labs Data Fabric, UPA bridge to TREP, bridge to Bloomberg API, Exegy API, Avis, Redline InRush, RAI Technologies, ActivAPI), entitlements and the V5 Wirecache. Data feeds flow through feed handlers into applications such as algos, caches and monitoring, all sitting on OpenMAMA.]
6. OpenMAMA Community Goals
Our mission statement is "OpenMAMA to become the industry-standard API for all middleware systems and market data distribution platforms".
Our target market is "Market Data Distribution".
Our core objectives are:
1. Break down monopolies and empower users to choose solutions without the risk of being locked into a specific vendor.
2. Take the friction and cost out of maintaining a market data distribution platform while improving time to market.
3. Create an open market based on quality, functionality and innovation.
4. Encourage innovation and widen the portfolio of middlewares and applications supported on OpenMAMA.
5. Ensure OpenMAMA reaches its full potential and remains a best-of-breed solution.
7. Community Structure / Governance
Steering Committee:
§ Strategy and leadership
§ Voting members
§ Start new working groups
§ Project level decisions
§ Advocacy / Funding
Technical Committee:
§ Technical direction
§ Design & implementation
§ QA & validation
§ Contributions
§ Community relationships
Advisory Group:
§ User representation
§ Non-voting members
§ Project advocacy
[Structure diagram: the Linux Foundation hosts the project; a Project Coordinator sits between the Steering Committee and the Technical Committee; under the Technical Committee, a Maintainer and Working Groups 1-3 engage with the open source community, alongside the Advisory Group.]
8. Why Participate
1. Reduce incumbent lock-in and ensure the integrity of the project.
2. Enable integration with a range of otherwise incompatible technologies.
3. Further reduce costs by pooling your resources on the non-differentiating technology.
4. Drive the collaborative platform strategy to the benefit of your business.
5. Guide how participating vendors leverage the platform for their commercial interest.
9. Expected Commitment
Steering Committee / Advisory Group expected commitments (SC = Steering Committee, AG = Advisory Group):
Linux Foundation Membership dues (dependent on company size) [SC]
Steering Committee dues (waived for financial institutions) [SC]
Sign Steering Committee Agreement [SC]
Commit resource to technical committee [SC]
Publicly advocate OpenMAMA [SC, AG]
Company logo on OpenMAMA website [SC, AG]
Commit resource to working groups (on items of interest) [SC, AG]
Attend steering committee and appropriate working group meetings [SC, AG]
Contribute to OpenMAMA [SC, AG]
Encourage other financial institutions and vendors to support OpenMAMA [SC, AG]
Support OpenMAMA in products and services (vendors only) [SC, AG]
Note:
• Any new Steering or Advisory Group members must be approved by the existing Steering Committee.
• All OpenMAMA Steering Committee dues will be waived until further notice.
10. Benefits of Membership
Steering Committee & Advisory Group benefits (SC = Steering Committee, AG = Advisory Group):
Access to Steering Committee meetings [SC]
Access to Advisory Group meetings and OpenMAMA events * [SC, AG]
Voting member (every member regarded as a peer, with equal control of the project) [SC]
Feed your business needs directly into the OpenMAMA roadmap [SC, AG]
Approve the project roadmap prepared by the Technical Committee [SC]
Control use of the OpenMAMA mark and logo (can also post on company collateral) [SC]
Set working groups and project initiatives [SC]
Approve and decide how project funds are used [SC]
Influence how other vendors are leveraging / supporting OpenMAMA [SC, AG]
Unique insight into customer requirements / emerging industry trends (vendors) [SC, AG]
Collaborate / pool resources on non-differentiating technology [SC, AG]
Open collaboration with your customers, industry peers and competitors [SC, AG]
* Advisory Group members will typically be invited to OpenMAMA-related events, but will not be invited to committee off-sites, or portions of off-sites, where the agenda includes project-level / funding decisions.
11. Role of Steering Committee
❑ Steering Committee:
§ Leadership + Strategy + Funding
§ Project level decisions
§ Sets the direction, tone, and vision of the project
§ Approves the project roadmap prepared by the Technical Committee
§ Invites other companies to the Steering Committee
§ Approves rights to use the OpenMAMA mark and logo
§ Recommends starting new interest groups and project initiatives
❑ Steering Committee Coordinator:
§ Elected by the Steering Committee for a period of 1 year
§ This is not a ceremonial role
§ The Coordinator is expected to devote adequate time and energy to make the project successful:
• Driving Steering Committee activities, agenda, calls, action items, planning, follow-ups, etc.
• Interfacing between the Steering Committee, the Technical Committee and interest groups
• Etc.
❑ Advisory Group: The purpose of the Advisory Group is to ensure that organizations who add value to the committee but cannot meet the criteria for full membership are not excluded.
12. Role of the Technical Committee
❑ Companies represented on the Steering Committee contribute resources to the Technical Committee.
❑ Participants of the Technical Committee are responsible for:
§ Software architecture and implementation activities
§ Software QA and validation
§ Developing compliance test suites
§ Removing any technical inhibitors facing adoption of OpenMAMA
§ Reviewing submitted requests for new features and capabilities, prioritizing them, and aligning them with the agreed implementation roadmap
§ Defining the compliance profile and implementation verification test suites for 3rd-party OpenMAMA-based stacks
§ Release planning and roadmap
§ Working with other open source projects on which OpenMAMA-supported applications and solutions depend
13. Role of the Maintainer
❑ What is a software maintainer?
§ A software maintainer is the software developer who acts as a gatekeeper to the OpenMAMA source code repository.
§ The software maintainer ensures that submitted source code meets set criteria (functional, quality, security, etc.) and decides, based on such factors, whether to accept the source code for inclusion into the project source tree, which will be built into binary packages for distribution.
❑ Responsibilities of the Maintainer:
§ Scheduling work efforts based on the prioritization of the Steering Committee and the needs of the project
§ Managing resources and their work items
§ Setting the criteria for accepted / rejected code
§ Reviewing submitted code, accepting or rejecting it based on pre-defined rules
§ Tracking dependency issues
§ Notifying developers of source code changes that may affect their packages
§ Managing source code security issues
§ Working closely with the technical team developing the source code
§ Working closely with the QA team testing the source code
§ Dealing with reported bugs in a timely manner
§ Preparing binary packages of the source code
14. Role of the Linux Foundation
❑ About the Linux Foundation:
§ A technology non-profit organization with hundreds of member companies
§ A vendor-neutral environment for companies to collaborate on upstream open source projects that act as enablers for differentiation and other technologies
§ Multiple experiences and success stories enabling industry leaders to shape markets with open source in the areas of Enterprise, HPC, Carrier Grade Telecommunications, Consumer Electronics and Mobile Computing
❑ What does the Linux Foundation provide?
§ Neutral home for collaboration
§ Credible hosting partner
§ Awareness and reach
§ Neutral reputation infrastructure
§ Technical infrastructure
§ Marketing platform
§ Legal infrastructure
§ Continuous project guidance
§ Guidance on governance
§ Guidance on FOSS licensing
15. OpenMAMA™ is a trademark of the Linux Foundation.
OpenMAMA™ may be used in accordance with the Linux Foundation Trademark Policy and with approval by the OpenMAMA Steering Committee.