Mainframe DevOps—the development challenge
Embracing change is easier said than done for mainframe organizations. On the mainframe, resource priority goes to production rather than to dev and test. Current tooling, processes and practices may be cumbersome, linear and slow, but they are also long-established.
New efficiencies from mainframe environments
By embracing modern development tooling and contemporary testing capability, organizations can achieve DevOps levels of efficiencies and see new returns on mainframe investments. Working collaboratively, teams can deliver more releases faster—and in parallel.
Efficiency, collaboration and flexibility—the pillars of mainframe DevOps
Adopting a DevOps culture and modern tooling can remove bottlenecks and enable parallel development at scale while preserving quality and process integrity and managing mainframe cost.
IBM Bluemix OpenWhisk: Interconnect 2016, Las Vegas: CCD-1088: The Future of ...OpenWhisk
Learn more about IBM Bluemix OpenWhisk, a serverless, event-driven compute platform that quickly executes application logic in response to events or to direct invocations from web/mobile apps or other endpoints.
This modern engineering approach has grown out of good old SOA (service-oriented architecture), with features such as REST support (in place of the older SOAP), NoSQL databases and an event-driven/reactive approach sprinkled in.
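To make the event-driven style concrete, here is a minimal, self-contained Python sketch. The `EventBus`, the service names and the payload fields are all hypothetical; a real microservices system would use a message broker such as Kafka or RabbitMQ rather than an in-process bus.

```python
from collections import defaultdict

class EventBus:
    """Minimal in-process stand-in for a message broker."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        # Deliver the event to every registered handler.
        for handler in self._subscribers[event_type]:
            handler(payload)

# Two "services" that share only an event contract, never a database.
inventory = {"widget": 10}

def reserve_stock(order):
    # The inventory service reacts to OrderPlaced events.
    inventory[order["sku"]] -= order["qty"]

bus = EventBus()
bus.subscribe("OrderPlaced", reserve_stock)

# The order service publishes an event instead of calling inventory directly.
bus.publish("OrderPlaced", {"sku": "widget", "qty": 3})
print(inventory["widget"])  # 7
```

The point of the pattern is that the order service neither knows nor cares which services consume its events, which is what makes services independently deployable.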
Microservices
The criticism
Evolutionary approach
Best practices
Create a Separate Database for Each Service
Rely on contracts between services
Deploy in Containers
Treat Servers as Volatile
Related techniques and patterns
Design patterns
Integration techniques
Deployment of microservices
Serverless - Function as a Service
Continuous Deployment
Related technologies
Microservices based e-commerce platforms
Technologies that empower microservices architecture
Distributed logging and monitoring
Case Studies: Re-architecting the monolith
Top Companies to Outsource Software Migration and Modernization Work (Mindfire LLC)
Application modernization services address the migration of legacy systems to new applications or platforms and integrate new functionality to offer the latest capabilities to the business. Modernization options include re-platforming, recoding, re-hosting, re-architecting, re-engineering, replacement, retirement, interoperability work and alterations to the application architecture.
CodeValue Architecture Next 2018 - Executive track dilemmas and solutions in... (Erez Pedro)
Modern software projects are challenging to develop. Eran Stiller, Ronen Rubinfeld, and Erez Pedro from CodeValue show a method for conducting multidisciplinary product discovery.
[2015/2016] Software systems engineering PRINCIPLES (Ivano Malavolta)
This presentation is about a lecture I gave within the "Software systems and services" immigration course at the Gran Sasso Science Institute, L'Aquila (Italy): http://cs.gssi.infn.it/.
http://www.ivanomalavolta.com
Bridging the Gap: from Data Science to Production (Florian Wilhelm)
A recent but quite common observation in industry is that although adoption of data science is high overall, many companies struggle to get it into production. Large teams of well-paid data scientists present one fancy model after another to their managers, but their proofs of concept never turn into something business-relevant. Frustration grows on both sides, among managers and data scientists alike.
In my talk I elaborate on the many reasons why taking data science to production is such a hard nut to crack. I start with a taxonomy of data use cases to make technical requirements easier to assess. Based on that, my focus lies on overcoming the two-language problem: the Python/R loved by data scientists versus the enterprise-established Java/Scala. From my project experience I present three different solutions, namely 1) migrating to a single language, 2) reimplementation and 3) use of a framework. The advantages and disadvantages of each approach are presented, and general advice based on the introduced taxonomy is given.
Additionally, my talk addresses organisational problems as well as problems in quality assurance and deployment. Best practices and further references are presented at a high level in order to cover all facets of data science to production.
With my talk I hope to convey the message that breakdowns on the road from data science to production are rather the rule than the exception, so you are not alone. At the end of my talk, you will have a better understanding of why your team and you are struggling and what to do about it.
DevOps Implementation for Applications Solution - Datasheet (Todd Erskine)
The Implementation for Applications Solution helps you apply modern DevOps processes and tooling to an existing application. Over the course of the six-week engagement, the Microsoft Consulting Services team will evaluate the current application, migrate it to a Development/Test lab in Microsoft Azure, and use Azure DevOps Projects with Azure DevOps Services to create an automated build and release pipeline, work with your operations teams to drive adoption of Modern DevOps, and deliver up to six workshops to demonstrate the DevOps capabilities implemented during the engagement.
Migrating to Cloud: Inhouse Hadoop to Databricks (3) (Knoldus Inc.)
Modernize your enterprise data lake into a serverless data lake, where data, workloads and orchestrations can be automatically migrated to cloud-native infrastructure.
A talk given at Equal Experts' internal conference (gEEk) about the patterns associated with DevEx and the need for a better platform engineering experience if we expect to build great application engineering experiences.
Critical Steps in Determining Your Value Stream Management Solution (DevOps.com)
In order to increase your delivery velocity, you must identify and resolve the bottlenecks in delivery. Value stream management (VSM) solutions capture the metrics and processes that help guide your digital transformation journey.
Join Marc Hornbeek, Principal Consultant, and Jeff Keyes from Plutora as they discuss a methodology for determining a value stream management solution for your organization. It consists of critical steps including a review of VSM assessments, future-state value stream mapping, road-mapping the VSM transformation, and more. Following these steps provides a logical and comprehensive approach to determining a value stream management solution that fits your organization's requirements.
What will be learned:
WHY – is following these steps for determining a VSM solution important?
HOW – are VSM solutions determined?
WHAT – is the expected outcome of a Value Stream Management solution recommendation?
Docker has taken the software world by storm, but what does it actually mean for enterprise IT teams? Containers, along with microservices, are definitely worth investigating for any modern software delivery pipeline where speed, portability and scalability matter.
Understanding whether they are right for you, and how you could introduce them into your enterprise tool chain and delivery pipeline can be challenging.
This is an educational webinar where you'll learn:
What Docker means for your existing software delivery processes
Practical considerations to successfully implement containers as part of your enterprise release pipeline
Common pitfalls when considering microservices technology for enterprise applications
2. Abstract and learning objectives
In this whiteboard design session, you will work in a group to design a process Wide World Importers (WWI) can follow for orchestrating and deploying updates to the application and the deep learning model in a unified way. You will learn how WWI can leverage deep learning technologies to scan through their vehicle specification documents to find compliance issues with new regulations. You will design a DevOps pipeline to coordinate retrieving the latest best model from the model registry, packaging the web application, and deploying the web application and inferencing web service. You will learn how to monitor the model's performance after it is deployed so WWI can be proactive about performance issues. You will investigate the potential of standardizing the model format on ONNX to simplify inference runtime code (by enabling pluggability of different models and targeting a broad range of runtime environments) and, most importantly, to improve inferencing speed over the native model.
At the end of this whiteboard design session, you will be better able to design end-to-end solutions that fully operationalize deep learning models, inclusive of all application components that depend on the model.
3. Step 1: Review the customer case study
Outcome
Analyze your customer needs.
Timeframe
15 minutes
7. Customer needs
• Want to understand the best-practice process they should follow for end-to-end deployment of deep learning models.
• Need a solution that addresses the management of the entire model lifecycle, inclusive of monitoring the model in production and being able to re-train and re-deploy when a model needs updating.
• A process that avoids checking credentials into source control.
8. Customer objections
• We are not clear about the benefits that using ONNX might bring to our current scenario and future scenario.
• It seems like data scientists deploy their models as web services from their own Python scripts, whereas our developers are accustomed to using Azure DevOps to deploy their web services. Can we really have one tool that provides us build and deployment pipelines irrespective of whether we are deploying a model or web application code?
• Obviously, we can't just have new models automatically deployed into production. What kind of safeguards can we put in place?
10. Step 2: Design the solution
Outcome
Design a solution and prepare to present the solution to the target customer audience in a 15-minute chalk-talk format.
Timeframe
60 minutes
Business needs (10 minutes)
• Respond to questions outlined in your guide and be prepared to present your solutions to others.
Design (35 minutes)
• Design a solution for as many of the stated requirements as time allows.
Prepare (15 minutes)
• Identify any customer needs that are not addressed with the proposed solution.
• Identify the benefits of your solution.
• Determine how you will respond to the customer’s objections.
• Prepare for a 15-minute presentation to the customer.
11. Step 3: Present the solution
Outcome
Present a solution to the target customer in a 15-minute chalk-talk format.
Timeframe
30 minutes (15 minutes for each team to present and receive feedback)
Directions
• Pair with another team.
• One group is the Microsoft team and the other is the customer.
• The Microsoft team presents their proposed solution to the customer.
• The customer asks one of the objections from the list of objections in the case study.
• The Microsoft team responds to the objection.
• The customer team gives feedback to the Microsoft team.
• Switch roles and repeat Steps 2-6.
13. Preferred target audience
• Francine Fischer, CIO of Wide World Importers
• The primary audience is the business decision makers and technology decision makers. From the case study scenario, this would include the Director of Analytics. Usually we talk to the infrastructure managers, who report to the chief information officers (CIOs), or to application sponsors (like a vice president [VP] of a line of business [LOB] or a chief marketing officer [CMO]), or to those that represent business unit IT or developers that report to application sponsors.
27. Preferred objections handling
1. We are not clear about the benefits that using ONNX might bring to our current scenario and future scenario.
ONNX provides two potential benefits to WWI's scenario.
• ONNX provides a common model format that can be run within a wide range of environments, without needing the libraries that were used to create the model. For example, if a model is created with Keras, they would need neither Keras nor TensorFlow to use the model for scoring; they would only need the ONNX Runtime. This enables the ONNX model to be used in web services, in .NET applications, on IoT devices and on mobile devices without additional effort.
• Because ONNX effectively re-compiles a model when converting it to the ONNX format, it may provide optimizations that improve scoring performance. In some tests, average improvements of 2x in inference time were observed.
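The first benefit can be illustrated with a toy, framework-neutral "model spec". This is an analogy only: a real ONNX file describes a full computation graph and is scored with the ONNX Runtime, not with hand-written code like this. The `score` helper and the spec layout are invented for illustration.

```python
import json

# Hypothetical "export": a linear model reduced to a framework-neutral
# description, analogous to how an ONNX file captures the computation
# rather than the training library that produced it.
model_spec = json.dumps({"op": "linear", "weights": [0.5, -0.25], "bias": 1.0})

def score(spec_json, features):
    """Tiny 'runtime' that only understands the spec; no training
    library (Keras, TensorFlow, ...) is needed at scoring time."""
    spec = json.loads(spec_json)
    assert spec["op"] == "linear"
    return sum(w * x for w, x in zip(spec["weights"], features)) + spec["bias"]

print(score(model_spec, [2.0, 4.0]))  # 0.5*2.0 - 0.25*4.0 + 1.0 = 1.0
```

The decoupling shown here, model description on one side and a small runtime on the other, is what lets the same exported model run in web services, .NET applications and edge devices.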
28. Preferred objections handling - continued
2. It seems like data scientists deploy their models as web services from their own Python scripts, whereas our developers are accustomed to using Azure DevOps to deploy their web services. Can we really have one tool that provides us build and deployment pipelines irrespective of whether we are deploying a model or web application code?
Yes. Both scenarios are supported by Azure DevOps and Azure Pipelines.
3. Obviously, we can't just have new models automatically deployed into production. What kind of safeguards can we put in place?
You can create release pipelines that include pre-approvals, requiring a person to approve a release before it is deployed into production.
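The pre-approval safeguard can be modeled in a few lines. This is a hypothetical sketch of the gating behavior, not the Azure DevOps API; in practice the approval is configured on the release pipeline itself and recorded by the service.

```python
def release(model_version, approved_by=None):
    """Sketch of a pre-approval gate: deployment refuses to proceed
    until a named approver has signed off (hypothetical helper)."""
    if approved_by is None:
        return f"{model_version}: held for approval"
    return f"{model_version}: deployed (approved by {approved_by})"

print(release("model-v3"))                      # model-v3: held for approval
print(release("model-v3", approved_by="fran"))  # model-v3: deployed (approved by fran)
```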
29. Customer quote
"Not only is Azure enabling faster machine learning and deep learning, but it is giving us powerful tools to manage the entire integration and deployment process that we can use across development and data science uniformly."
- Francine Fischer, CIO of Wide World Importers
Editor's Notes
The overall approach is to orchestrate continuous integration and continuous delivery with Azure Pipelines from Azure DevOps. These pipelines are triggered by changes to artifacts that describe a machine learning pipeline created with the Azure Machine Learning SDK. For example, checking in a change to the model training script executes the Azure Pipelines build pipeline, which trains the model and creates the container image. This in turn triggers an Azure Pipelines release pipeline that deploys the model as a web service, using the Docker image that was created in the build pipeline. Once in production, the scoring web service is monitored using a combination of Application Insights and Azure Storage.
When you first run a pipeline, Azure Machine Learning:
- Downloads the project snapshot to the compute target from the Blob storage associated with the workspace.
- Builds a Docker image corresponding to each step in the pipeline.
- Downloads the Docker image for each step to the compute target from the container registry.
- Mounts the datastore, if a DataReference object is specified in a step. If mount is not supported, the data is instead copied to the compute target.
- Runs the step in the compute target specified in the step definition.
- Creates artifacts, such as logs, stdout and stderr, metrics, and output specified by the step. These artifacts are then uploaded and kept in the user’s default datastore.
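The run sequence above can be sketched generically. This is a hypothetical illustration of the orchestration, not the Azure Machine Learning SDK; the step names and outputs are made up.

```python
# Hypothetical sketch of the run sequence described above.
def run_pipeline(steps):
    artifacts = {}
    for step in steps:
        # Per step: the snapshot and image would be prepared, the step
        # would run on its compute target, and its outputs/logs would be
        # kept as artifacts in the default datastore.
        artifacts[step["name"]] = step["run"]()
    return artifacts

steps = [
    {"name": "train", "run": lambda: "model.pkl"},
    {"name": "register", "run": lambda: "model:v1"},
]
print(run_pipeline(steps))  # {'train': 'model.pkl', 'register': 'model:v1'}
```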