This document provides an agenda and overview for a TOGAF workshop on building enterprise architectures with System Architect. The agenda covers the TOGAF Preliminary stage, business architecture, the business service layer, information systems architecture, application portfolio management, and analysis. It discusses modeling functions, processes, services, and applications; leveraging reference models; integrating with tools such as Visio and Blueworks Live; and using the FEA Service Reference Model and TM Forum models. The labs guide attendees through building out the different architecture components in System Architect.
2. Agenda
• Lab Intro
• TOGAF Preliminary Stage
• Business Architecture
• Business Service Layer
• Information Systems Architecture
• Application Portfolio Management
• Analysis
3. The NEED for Enterprise Architecture
• Customer quote (paraphrased):
• 'We get asked, on a regular basis -- usually at the last minute -- for artifacts that describe the business. The information is:
– Served up in PowerPoint decks and Excel spreadsheets
– Assembled in a scramble
– Not correlated between artifacts
– Of unknown provenance: we don't know where the documents came from, who owns them, how reliable the information is, or what it doesn't show'
• We need a delivery mechanism for this information so it is served up in a self-serve manner
4. The NEED for Enterprise Architecture
• Customer quote (paraphrased):
• 'With disruptive technologies such as cloud and mobile, the EA team -- even at a formerly staid health care company -- is being put in the spotlight (or the headlights?)
• The CEO wants to know:
– How the company can react to demand for mobile and cloud technologies, and use these technologies for advantage
– What the solution alternatives are, and their cost
– What the impact to the current business is
– What the risk to the current business is
• The CEO wants the information ASAP'
5. EA Operations Center
[Diagram: the EA Operations Center. Disparate spreadsheets and multiple data sources (core business processes, apps, data, security tools), CMDB tools sniffing the network, standards databases, and APM tools (business capabilities, infrastructure, apps, data) are harvested into the Enterprise Architecture, alongside reference models ("leverage what everyone else is doing"). EA Governance establishes sources of record ("clean up sources of record"). Outputs: cause-effect analysis, heatmaps, business analytics & dashboards ("these are the things we should do"), transition planning & roadmaps ("these are our roadmaps"), project prioritization & planning, and solution design, delivered via SA/Pub, SA/XT, and SA/DM.]
6. Agenda
• Lab Intro
• TOGAF Preliminary Stage
• Business Architecture
• Business Service Layer
• Information Systems Architecture
• Application Portfolio Management
• Analysis
7. Content Framework
• Core Concepts
• Extensions
[Diagram labels: Core; Services; Data Modeling; Motivation; Governance; Process Modeling; Infrastructure Consolidation.]
10. How EA
• Establish Vision of EA & Stages of Success
• Start Small – establish a project where you can establish a deadline and ROI
• Grow the EA
– Show value in analytics
– Show value in cleaning up sources of record
– Show value in visibility
– "I want some of that"
– EA becomes systematic
13. Perform Labs 1 & 2: TOGAF Preliminary Phase
• Lab 1
• Make sure SA & the Workshop Folder are Available
• Lab 2
• Create the EA Repository
• Establish the EA Metamodel
16. • Lab Intro
• TOGAF Preliminary Stage
• Business Architecture
• Business Service Layer
• Information Systems Architecture
• Application Portfolio Management
• Analysis
Agenda
17. Business Architecture within Context of EA – TOGAF
• TOGAF Metamodel
• We want to understand:
1. Capabilities
2. Functions
3. Processes (that orchestrate Functions)
4. Services (that encapsulate Functions)
5. People (that own Functions & Apps)
6. Applications (enabling Services or Functions)
7. Information (data)
8. Technologies (used by Applications & Services)
9. Locations (of Apps, Technologies & People)
18. Functions, Processes, Services, Applications
• Function = something that an organization does.
• According to TOGAF, a function "delivers business capabilities closely aligned to an organization, but not explicitly governed by the organization."
• Process = how the organization performs a function.
• There are many cross-function processes and cross-organizational processes.
• According to TOGAF, a process "is a flow of interactions between functions and services that cannot be physically deployed. All processes should describe the flow of execution for a function and therefore the deployment of a process is through the function it supports; i.e., an application implements a function that has a process, not an application implements a process."
19. Functions, Processes, Services, Applications
• Follow the purple crayon:
• Function is realized by Process.
• Function is bounded by a Business Service, which may be automated by an IS Service, which is further implemented by an Application.
• In this workshop we are not specifically modeling Business Services or Information System (IS) Services; we use the direct relation between Function and Application.
20. Functions, Processes, Services, Applications
• From the TOGAF spec:
• Function encapsulates Business Service
• Business Service encapsulates Functions
• Business Service can be performed by an Information System (IS) Service
• IS Service is part of an Application Component
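The relation chain above (Function → Business Service → IS Service → Application Component) can be sketched as plain data. A minimal illustration -- not part of the workshop labs, and all element names are hypothetical:

```python
# Hypothetical architecture elements, keyed by the TOGAF relations above.
model = {
    "functions": {"Loan Origination": {
        "realized_by_process": "Mobile Loans Approval",
        "bounded_by_service": "Loan Approval Service",
    }},
    "business_services": {"Loan Approval Service": {
        "automated_by_is_service": "Loan Approval API",
    }},
    "is_services": {"Loan Approval API": {
        "implemented_by_application": "Loan Processing System",
    }},
}

def application_for_function(m, function):
    """Walk Function -> Business Service -> IS Service -> Application."""
    svc = m["functions"][function]["bounded_by_service"]
    is_svc = m["business_services"][svc]["automated_by_is_service"]
    return m["is_services"][is_svc]["implemented_by_application"]

print(application_for_function(model, "Loan Origination"))
# Loan Processing System
```

The workshop's direct Function-to-Application relation is simply this chain collapsed into a single link.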
21. Business Process Modeling (BPMN) Input to EA
[Diagram: business process modeling, capturing & redesigning feeds the Enterprise Architecture -- e.g., a Mobile Loans Approval process. Sources: BPMN modeling in SA and in SA/XT (3.3.1), Visio models imported with the SA-Visio Mapper Utility, IBM Blueworks Live (3.3.2), RSA, and reference models ("leverage what everyone else is doing"). The EA then drives a decision-making platform (cause-effect analysis, heatmaps, business analytics & dashboards), EA transition planning, solution design, and delivery via the EA Cockpit, SA/Pub, SA/XT, and SA/DM.]
22. Perform Lab 3.1: Business Architecture
• Lab 3.1
1. Import business Functions
2. Auto-Build Functional Decomposition Diagram
3. Add New Functions
4. Understand Function Owners
24. SA/XT – Live BPMN Modeling in a Web Browser
• Model BPMN in a near-zero-footprint web interface
• Only JavaScript needs to be enabled
• Model and save directly in the SA repository
• Can use the SA rich client on the same repository at the same time
[Diagram: the System Architect rich client and System Architect XT share a repository on a SQL Server or Oracle database through the System Architect Server and a Microsoft IIS Server, with OSLC links to Jazz-based capabilities -- product & project management, collaborative lifecycle management, engineering & software tools, business planning & alignment, compliance & security, your existing capabilities, 3rd-party Jazz capabilities, and future IBM capabilities -- built on storage, collaboration, query, discovery, administration (users, projects, process), best-practice processes, and presentation (mashups). Screenshot: BPMN modeling in SA/XT.]
25. SA – Visio Integration through Mapper Utility
• The SA-Visio Mapper Utility is available for free on developerWorks
• Map any Visio diagram to System Architect
• The Mapper Utility reads the Visio diagram and provides a side-by-side mapping interface to the user
[Screenshots: a landscape diagram in Visio, and the same diagram imported into SA.]
26. IBM Blueworks Live
• Easy web interface
• Engages line-of-business users in process discovery, documentation, & simple process automation
• Import/Export:
• Imports Visio XML diagram format (.vdx)
• Bidirectional support for BPMN 2.0 interchange
• Bidirectional support for XPDL 2.1
• Generates IBM WebSphere Business Modeler XML (Version 7.0)
• Generates Microsoft Excel (.xls)
27. SA-Blueworks Live Integration via BPMN 2.0 Interchange
• IBM Blueworks Live to SA via BPMN 2.0 interchange; bi-directional
[Screenshots, in sequence: (1) BPMN in Blueworks Live; (2) export in Blueworks Live; (3) export choices in Blueworks Live; (4) BPMN import into SA; (5) BPMN model in SA after import.]
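The interchange format itself is plain XML. As a minimal sketch, task names can be read out of a BPMN 2.0 file with Python's standard library -- the process content below is invented, and real Blueworks Live exports carry far more detail:

```python
import xml.etree.ElementTree as ET

# Minimal BPMN 2.0 interchange fragment (hypothetical process; the
# namespace is the one defined by the OMG BPMN 2.0 specification).
BPMN_XML = """<?xml version="1.0" encoding="UTF-8"?>
<definitions xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL">
  <process id="loanApproval" name="Mobile Loans Approval">
    <startEvent id="start"/>
    <task id="t1" name="Receive Application"/>
    <task id="t2" name="Assess Risk"/>
    <endEvent id="end"/>
    <sequenceFlow id="f1" sourceRef="start" targetRef="t1"/>
    <sequenceFlow id="f2" sourceRef="t1" targetRef="t2"/>
    <sequenceFlow id="f3" sourceRef="t2" targetRef="end"/>
  </process>
</definitions>"""

NS = {"b": "http://www.omg.org/spec/BPMN/20100524/MODEL"}

def list_tasks(xml_text):
    """Return the task names of the first process, in document order."""
    root = ET.fromstring(xml_text)
    process = root.find("b:process", NS)
    return [t.get("name") for t in process.findall("b:task", NS)]

print(list_tasks(BPMN_XML))  # ['Receive Application', 'Assess Risk']
```

Because both Blueworks Live and SA speak this interchange format, the round trip shown in the screenshots needs no custom transformation.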
28. Use of Reference Models to Jump-Start the EA Effort
• IBM is a member of APQC.org, and has helped develop several industry process frameworks, including:
– Aerospace & Defense
– Automotive
– Banking
– Broadcasting
– Consumer Products
– Electric Utilities
– Petroleum Downstream
– Petroleum Upstream
– Pharmaceutical
– Telecommunications
• A pre-established 5-layer process framework can be imported into modeling tools
• Example: the APQC Process Framework for Banking, which IBM helped develop
29. Using APQC
• According to APQC's John Tessmer, "The PCF was originally envisioned and is still based on the premise that it is a classification system or taxonomy of business processes, similar to how a dictionary classifies words. The categorization does not imply that organizations structure their internal operations according to the taxonomy; it merely provides a facility to help define processes so that they can be understood and referenced in a consistent manner. Similarly, a dictionary won't instruct you in proper grammar or sentence construction — you would have to refer to a style guide for that."
30. Perform Labs 3.2 and 3.3: Business Architecture
• Lab 3.2
1. Examine APQC Process Framework for Banking
• Lab 3.3
1. Model a Process Flow with BPMN 2.0
2. Utilize BPMN 2.0 Interchange to Import BlueWorks Flow
3. Link Processes with Functions they Orchestrate
4. Create Function/Process Parent/Child Navigation Links
31. Agenda
• Lab Intro
• TOGAF Preliminary Stage
• Business Architecture
• Business Service Layer
• Information Systems Architecture
• Application Portfolio Management
• Analysis
32. Business Service
• A Business Service can be manual or automated
• It provides a governed interface to access Functions
• It supports business Processes
• It can be implemented by an Information System (IS) Service -- a fully automated service, similar to what the industry might call a SOA service
33. FEA Services Reference Model (SRM)
• US Federal Enterprise Architecture (FEA) Service Reference Model (SRM)
• Part of the Consolidated Reference Model
• Contains a taxonomy of all of the services performed by all agencies of the United States government, as specified by the US Office of Management and Budget (OMB)
• Agencies must show that any system they wish funded supports a service in the SRM
• The commercial industry has adopted the SRM as a guide to what their business is doing/should be doing
34. FEA Services Reference Model (SRM)
Best Practice:
• After importing the SRM, the Enterprise Architect can delete Business Services not used in the organization, and add Business Services that are used.
• The SRM is used to jump-start the EA effort
35. FEA Services Reference Model (SRM)
For this Workshop:
• The metamodel of the SRM we use in the workshop is modified from the SRM provided by the US government.
• In the workshop metamodel, the decomposition property of the Service definition has been utilized to provide a hierarchy of services.
• In the US government's SRM, the metamodel starts with the highest-level Service Domain, which breaks down into Service Type, and then Service Component (lowest level). The SA FEA Reference Model add-in allows you to import that SRM (provided by the US government via an XML file on whitehouse.gov), align your architecture with it, & produce reports mandated by OMB.
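The Domain → Type → Component taxonomy is easy to work with programmatically once imported. A sketch, assuming a deliberately simplified XML shape -- the real OMB file uses different element names and carries much more structure:

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified SRM fragment for illustration only.
SRM_XML = """<ServiceReferenceModel>
  <ServiceDomain name="Customer Services">
    <ServiceType name="Customer Relationship Management">
      <ServiceComponent name="Call Center Management"/>
      <ServiceComponent name="Customer Feedback"/>
    </ServiceType>
  </ServiceDomain>
</ServiceReferenceModel>"""

def flatten_srm(xml_text):
    """Return (domain, type, component) triples from the taxonomy."""
    root = ET.fromstring(xml_text)
    rows = []
    for dom in root.findall("ServiceDomain"):
        for typ in dom.findall("ServiceType"):
            for comp in typ.findall("ServiceComponent"):
                rows.append((dom.get("name"), typ.get("name"), comp.get("name")))
    return rows

for row in flatten_srm(SRM_XML):
    print(" / ".join(row))
```

Flattening like this is also the natural starting point for the best practice above: pruning unused Business Services and adding organization-specific ones.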
36. Perform Lab 4: Business Service Layer
1. Import Modified Version of FEA Services Reference Model (SRM)
2. Add Business Service to the Architecture
37. Agenda
• Lab Intro
• TOGAF Preliminary Stage
• Business Architecture
• Business Service Layer
• Information Systems Architecture
• Application Portfolio Management
• Analysis
38. Logical vs Physical App Components (and Tech Components)
• It is optional to model Logical Application Components
• Examples:
– Sales Licensing tool
– Web development tool
– Enterprise Architecture tool
• Enables better analysis:
• Understand how many Process Modeling tools you have
• Understand why a tool is being used (e.g., Photoshop for web development)
• In SA, the Application Component definition has a toggle for "Physical"; if not toggled, it is a logical app component
• Note that in the TOGAF metamodel, Logical App Component is not connected to Logical Tech Component
39. Examples of Logical & Physical App & Tech Components
Application Component (Logical):
– Enterprise Architecture tool
– Requirements Management tool
– Software Design tool
– Change Management
– Collaborative Development tool
Application Component (Physical):
– System Architect
– DOORS
– Rational Software Architect
– Rational Team Concert
Technology Component (Logical):
– Relational Database
– Operating System
– Mobile Operating System
– Web Browser Script Language
Technology Component (Physical):
– Microsoft SQL Server database
– Windows
– Android
– JavaScript
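The logical-to-physical mapping lends itself to the simple redundancy analysis mentioned earlier (e.g., "how many Process Modeling tools do you have?"). A sketch using the example components above -- the assignments are assumptions for illustration:

```python
from collections import defaultdict

# Hypothetical physical-to-logical assignments for illustration.
physical_to_logical = {
    "System Architect": "Enterprise Architecture tool",
    "DOORS": "Requirements Management tool",
    "Rational Software Architect": "Software Design tool",
    "Rational Team Concert": "Collaborative Development tool",
}

def apps_per_logical(mapping):
    """Group physical apps by logical component to spot redundancy."""
    groups = defaultdict(list)
    for phys, log in mapping.items():
        groups[log].append(phys)
    return dict(groups)

print(apps_per_logical(physical_to_logical)["Enterprise Architecture tool"])
# ['System Architect']
```

A logical component that collects several physical apps is an immediate rationalization candidate.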
40. Use of Application, Data, and Technical Reference Models
• Application, Data, & Process Reference Models:
• The TeleManagement Forum (TMForum) Telecom Applications Map (TAM)
– Used by telecom & other industries
– Also: SID – the Shared Information/Data model
– Also: eTOM – the Business Process Framework
• IBM & System Architect are TMForum certified
– Encyclopedia provided prepopulated with TAM, eTOM, & SID
– IBM provides a Telecom Catalog Order Management Solution that maps IBM solutions to SID, eTOM, and TAM
[Screenshots: IBM Catalog-Driven Order Management Solution mapped to SID and to eTOM.]
41. Perform Lab 5: Information Systems Architecture
• Lab 5
1. Import Spreadsheet of Physical Applications
2. Visualize Physical Application Interfaces
3. Import Pre-Built Explorer Reports & Analytic Collections
4. Visualize Application Interfaces
5. Add an Application
• Lab 6
1. Utilize TMForum TAM for Logical Application Reference Model
2. Map Logical to Physical Apps
3. Build Report for Functions, Logical Apps, Physical Apps
1. Generate Report to HTML
2. Generate Report to Grid
3. Generate ‘Partial’ Report
4. Model Data Flows between Logical Apps (Optional)
42. Report to HTML
• Functions, their Logical Applications, and their related Physical Applications
43. Report to Grid
• Functions, their Logical Applications, and their related Physical Applications
44. Agenda
• Lab Intro
• TOGAF Preliminary Stage
• Business Architecture
• Business Service Layer
• Information Systems Architecture
• Application Portfolio Management
• Analysis
45. Application Portfolio Analysis & Management
• Assess applications using APM tools
• Cost of application
• Invest, Divest, Maintain
• Dev bandwidth
• What are the business priorities?
• What is working well?
• What is unnecessary, redundant, or obsolete?
• Where can costs be cut?
• Query the workforce – example: vote on the usefulness and usability of applications used
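The invest/divest/maintain assessment can be sketched as a toy scoring rule. Everything here -- the rule, the thresholds, and the figures -- is invented for illustration; real APM scoring weighs many more criteria (dev bandwidth, risk, workforce votes, etc.):

```python
# Hypothetical portfolio: annual cost and a 1-10 business-value score.
portfolio = {
    "Order Entry": {"cost": 120_000, "value": 8},
    "Legacy Reports": {"cost": 90_000, "value": 2},
    "CRM": {"cost": 60_000, "value": 9},
}

def classify(app, cost_ceiling=100_000, value_floor=5):
    """Toy rule: low value -> divest; high value -> invest if cheap,
    otherwise maintain (and look for cost reductions)."""
    if app["value"] >= value_floor:
        return "invest" if app["cost"] <= cost_ceiling else "maintain"
    return "divest"

for name, app in portfolio.items():
    print(name, "->", classify(app))
```

Even a rule this crude makes the portfolio questions above ("where can costs be cut?", "what is unnecessary?") concrete and repeatable.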
48. Agenda
• Lab Intro
• TOGAF Preliminary Stage
• Business Architecture
• Business Service Layer
• Information Systems Architecture
• Application Portfolio Management
• Analysis
49. Gap Analysis and Cause-Effect Analysis
• Use the architecture to answer questions:
• Budget Constraints
– If a System is retired, what Capabilities are affected?
– How many projects are underway to supply similar capabilities?
– If I want to field a new system, what other systems do I currently have that are similar to it, based on the functions they perform?
• Disaster Recovery
– If a System is put out of service, what Capabilities are affected?
• Risk
– If an operating system is changed, what Capabilities could be affected?
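Questions like "if a System is retired, what Capabilities are affected?" reduce to traversing the architecture's dependency edges. A minimal sketch with hypothetical data:

```python
# Hypothetical dependency edges: system -> functions it enables,
# function -> capabilities it supports.
system_functions = {"Billing System": ["Invoice Customers", "Collect Payments"]}
function_capabilities = {
    "Invoice Customers": ["Revenue Management"],
    "Collect Payments": ["Revenue Management", "Cash Handling"],
}

def affected_capabilities(system):
    """Capabilities put at risk if `system` is retired or goes down."""
    caps = set()
    for fn in system_functions.get(system, []):
        caps.update(function_capabilities.get(fn, []))
    return sorted(caps)

print(affected_capabilities("Billing System"))
# ['Cash Handling', 'Revenue Management']
```

The same traversal, run in the other direction, answers the redundancy question: which existing systems perform the functions a proposed new system would perform.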
53. Analytics and Heatmaps
• Use the architecture to answer questions:
• Budget Constraints
– How can we reduce costs to meet budget constraints but still provide needed Capabilities?
– What are the costs associated with Activities and Systems that support a Capability?
– Unintended effects of cost reduction: if we virtualize servers, what Apps are affected? What Activities are affected? What Capabilities are put at risk?
– There are many ways to calculate costs: Activity-Based Costing, cost of purchased systems, maintenance, manpower, etc.
• Disaster Recovery
– What capabilities are at risk if different systems go down at certain locations?
– Is there a disaster recovery plan in place for important systems?
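A cost heatmap is essentially a roll-up of application costs to the capabilities they support, with the totals bucketed into colors. A sketch with invented names, costs, and thresholds:

```python
# Hypothetical inputs: per-app annual cost, and which apps support
# which capability.
app_costs = {"CRM": 60_000, "Billing": 120_000, "Portal": 30_000}
capability_apps = {
    "Customer Management": ["CRM", "Portal"],
    "Revenue Management": ["Billing"],
}

def heatmap(threshold_red=100_000, threshold_amber=50_000):
    """Return {capability: (total cost, color)} for a cost heatmap."""
    cells = {}
    for cap, apps in capability_apps.items():
        total = sum(app_costs[a] for a in apps)
        color = ("red" if total >= threshold_red
                 else "amber" if total >= threshold_amber
                 else "green")
        cells[cap] = (total, color)
    return cells

print(heatmap())
```

Swapping the cost figures for risk scores or DR-readiness ratings yields the disaster-recovery heatmaps with no change to the roll-up logic.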
56. End of Current Workshop Exercises
• The Next Sections Are for Theory Only
57. TOGAF Metamodel Extensions for Infrastructure
• Metamodel additions needed to model the application and IT portfolio down to version and instance level
[Metamodel diagram: an abstract Configuration Item, instantiated by Physical Application Component Instance, Server Instance, Database Instance, Device Instance, and Operating System Instance. Physical Application Component Version and Physical Technology Component Type extend the physical components. These attach to the base TOGAF elements -- Logical/Physical Application Component, Logical/Physical Technology Component, IS Service, Platform Service, Data Entity, Logical/Physical Data Component, Architecture Building Block, Solution Building Block, Location, Vendor -- via relationships such as "deployed instance of", "has deployable version", "deployable version of", "hosted at", "is extended by", "is realized by", "is implemented by", "decomposes", "implements", "supplies", "encapsulates", "operates on", "supports", and "provided by".]
58. TOGAF Metamodel Extensions for Infrastructure
• Configuration Item = a physical device or executable software that is part of an enterprise's current infrastructure.
• It is abstract
• It is instantiated by:
• Physical Application Instance
• Server Instance
• Database Instance
• Device Instance
[The metamodel diagram from the previous slide is repeated, highlighting the abstract Configuration Item and its instance subtypes.]
59. TOGAF Metamodel Extensions for Infrastructure
Application Component (Logical): Enterprise Architecture tool, Requirements Management tool, Software Design tool, Change Management, Collaborative Development tool
Application Component (Physical): System Architect, DOORS, Rational Software Architect, Rational Team Concert
Physical Application Component Version: System Architect 11.4.2.5, DOORS 10.1, RSA 8.0
Physical Application Component Instance: System Architect 11.4.2.5 License 1
Technology Component (Logical): Relational Database, Operating System, Mobile Operating System, Web Browser Script Language
Physical Technology Component Type: Microsoft SQL Server database, Windows, Android, JavaScript
Technology Component (Physical): Microsoft SQL Server 2008 R2, Microsoft Windows 7, JavaScript 4
Database Instance: SQL Server 2008 R2 running instance
Operating System Instance: Windows 7 running instance
Device Instance: Lenovo laptop S/N 1234
60. TOGAF 9.1 Extensions for Infrastructure by IBM
[Diagram: a worked example spanning base TOGAF 9.1 and the IBM extensions for infrastructure. Application Component (Logical) "EA Tool"; Application Component (Phys) "SA" / "SA 11.4.3.2"; Physical Application Version "SA 11.4.3.2"; Physical Application Instance "SA 11.4.3.2 -- L1234"; Technology Component (Logical) "Operating System" and "Laptop"; Physical Technology Component Type "Windows" and "Lenovo Laptop"; Technology Component (Physical) "Windows 7" and "Lenovo W510"; Operating System Instance "Windows 7 – L1234"; Device Instance "Lenovo W510 S#1234".]
61. Simplified TOGAF 9.1 Extensions – SA 11.4.3.2
[Diagram: the previous slide's example repeated in simplified form for SA 11.4.3.2, with <Operating System> and <Device> annotations on the instance elements.]
62. Tivoli Application Dependency Discovery Manager (TADDM)
• Application mapping with dependencies
– Agent-less and credential-free
– Discovers interdependencies between applications, middleware, servers, and network components
64. System Architect – TADDM Integration
• TADDM produces an XML output file
• The SA-TADDM Integration provides VBA integration that utilizes an XML mapping file to import TADDM info into the SA definition/property set
[Diagram: TADDM Export → SA-TADDM Mapping File → SA-TADDM Integration in SA.]
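The shape of such an XML-mapping import can be approximated in a few lines. A sketch assuming a deliberately simplified export format -- the real TADDM XML schema is much richer and uses different element names:

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified discovery export for illustration only.
TADDM_XML = """<discovery>
  <app name="Order Entry">
    <dependsOn type="database" name="ORDERS_DB"/>
    <dependsOn type="server" name="app-srv-01"/>
  </app>
</discovery>"""

def dependencies(xml_text):
    """Return {app name: [(dependency type, dependency name), ...]}."""
    root = ET.fromstring(xml_text)
    return {app.get("name"): [(d.get("type"), d.get("name"))
                              for d in app.findall("dependsOn")]
            for app in root.findall("app")}

print(dependencies(TADDM_XML)["Order Entry"])
# [('database', 'ORDERS_DB'), ('server', 'app-srv-01')]
```

In the actual integration, the SA-TADDM mapping file plays the role of this parsing step, deciding which discovered attributes land in which SA definition properties.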
65. Perform Lab 8: Infrastructure Analysis
1. Import Infrastructure Info from CMDB tool
2. Create Heatmap
67. Notices and Disclaimers (con’t)
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products in connection with this
publication and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM
products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.
IBM does not warrant the quality of any third-party products, or the ability of any such third-party products to
interoperate with IBM’s products. IBM EXPRESSLY DISCLAIMS ALL WARRANTIES, EXPRESSED OR IMPLIED,
INCLUDING BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE.
The provision of the information contained herein is not intended to, and does not, grant any right or license under any
IBM patents, copyrights, trademarks or other intellectual property right.
• IBM, the IBM logo, ibm.com, Bluemix, Blueworks Live, CICS, Clearcase, DOORS®, Enterprise Document
Management System™, Global Business Services ®, Global Technology Services ®, Information on Demand,
ILOG, Maximo®, MQIntegrator®, MQSeries®, Netcool®, OMEGAMON, OpenPower, PureAnalytics™,
PureApplication®, pureCluster™, PureCoverage®, PureData®, PureExperience®, PureFlex®, pureQuery®,
pureScale®, PureSystems®, QRadar®, Rational®, Rhapsody®, SoDA, SPSS, StoredIQ, Tivoli®, Trusteer®,
urban{code}®, Watson, WebSphere®, Worklight®, X-Force® and System z® Z/OS, are trademarks of
International Business Machines Corporation, registered in many jurisdictions worldwide. Other product and
service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on
the Web at "Copyright and trademark information" at: www.ibm.com/legal/copytrade.shtml.
68. Thank You
• Your feedback is important!
• Access the InterConnect 2015 Conference CONNECT Attendee Portal to complete your session surveys from your smartphone, laptop, or conference kiosk.