Using CA PPM (CA Clarity PPM) to support resource management and capacity planning continues to sit at the top of most organizations' wish lists. Resource management is not technically challenging to implement, but it is difficult to deploy well with any PPM tool, because resource management is more about process than tooling. In this session, we will review resource management best practices and talk about how CA PPM (CA Clarity PPM) can support your process. Come find out how Rego's team has successfully implemented resource management at more than 75 organizations.
You can find the presentation file here:
http://regouniversity.com/presentations-14/
Functional Track Training. For more CA PPM training, visit http://regouniversity.com or http://regoconsulting.com and find free Clarity educational community solutions at http://www.regoxchange.com/
2. Topics:
● General Overview of Resource Management
● Organizational Challenges
● CA PPM Challenges
● Rego Configurations
● Experiences and Roundtable
○ Danette Purkis from Chubb Group Insurance
○ Johnathan Lang from Oshkosh Corporation
○ Erika Fermaints from HSBC Bank
5. Definitions

Allocation: The hours, or % of time, a resource is designated to perform work on a specific project.

Availability: The number of hours a resource is available to work on any given day. By default, resources in CA PPM are available 8 hours per day.

Assignments: The amount of work designated for a resource on a specific task.

ETC: Estimate to Complete. The number of hours it will take the resource to complete their work on the task. As actual time is tracked against the assignment, the ETC will decrement. (Future)

Actuals: Actual work (in hours) the resource has booked on a specific task via timesheets.
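To make the arithmetic behind these definitions concrete, here is a minimal Python sketch (illustrative only; the names and structures are hypothetical, not CA PPM APIs) showing how a % allocation converts to hours against the default 8-hour day, and how an assignment's ETC decrements as actuals are booked:

```python
# Minimal sketch of the allocation / availability / ETC arithmetic described
# above. Illustrative only -- names and structures are hypothetical, not
# CA PPM APIs.

DEFAULT_HOURS_PER_DAY = 8  # CA PPM's default availability per day

def allocation_hours(allocation_pct: float, working_days: int,
                     hours_per_day: float = DEFAULT_HOURS_PER_DAY) -> float:
    """Convert a % allocation over a period into hours."""
    return allocation_pct * working_days * hours_per_day

class Assignment:
    """Work designated for a resource on a specific task."""
    def __init__(self, estimate_hours: float):
        self.etc = estimate_hours   # Estimate to Complete (future work)
        self.actuals = 0.0          # hours booked via timesheets

    def book_timesheet(self, hours: float) -> None:
        # As actual time is tracked, the ETC decrements (never below zero).
        self.actuals += hours
        self.etc = max(0.0, self.etc - hours)

# A resource allocated 50% for a 10-day period: 40 hours of allocation.
print(allocation_hours(0.50, 10))   # 40.0

task = Assignment(estimate_hours=24)
task.book_timesheet(8)              # one day of actuals posts
print(task.actuals, task.etc)       # 8.0 16.0
```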
6. Roles & Responsibilities

Resource Manager/Booking Manager (RM/BM)
• Complete Timesheet Approvals
• Maintain Resource Roles
• Maintain Resource Hard Allocations
• Monitor Resource Availability (Vacation Calendar)
• Monitor Resource Utilization
• Monitor Project Performance data
• Work with Project Manager to review and update/resolve resource allocations as needed
• Communicate resource risks and issues to PMs & IT Leadership

Project Manager (PM)
• Ensure Resource Allocations are sufficient to meet Project Demand
• Communicate resource issues to the Resource Manager & IT Leadership

Team Member (TM)
• Enter timesheets in a timely manner

Note that these Roles & Responsibilities are specific to Resource Management. Individuals may have other responsibilities related to Project Management, or management of other CA PPM Investments.
7. Flow of Demand

1. Project Manager requests role in CA PPM on Project Team
2. Resource Manager or Booking Manager receives role in Unfilled Requirements
3. RM/BM uses Resource Finder to determine resource based on availability and skill match
4. RM/BM Hard Allocates (Commits) resource in CA PPM using the Booking Status Portlet
5. RM/BM monitors overall utilization of resources, looking for constraints in capacity
10. Process Alignment

Review how Resource Management will impact primary functions within the organization. Ensure that cross-process dependencies are accounted for and that end users understand the larger integration of Resource Management.

● Annual Planning
○ Forecasted utilization at role level
○ Internal / external resource forecast
● Project Lifecycle
○ Points at which resource commitments are made
○ Scheduling approach (e.g. Waterfall, Agile)
● Prioritization of Work
○ Establish base work that must occur (e.g. KTLO, Compliance)
○ Investment level prioritization that is standardized
11. Role Consolidation

In many organizations, Roles have evolved into descriptive categorizations for resources. This becomes problematic when measuring capacity within CA PPM. In addition, it can be confusing to Project Managers requesting a named resource be assigned to their project.

● Can be a time-consuming process to attain a role list that has organizational approval
● Utilization of OBS and/or custom attributes may be necessary to further clarify demand

Example of role proliferation (variants of a single role; see the sketch below):
Business Analyst - India
Business Analyst
Sr. Business Analyst
Business Analyst – Billing Sys.
Business Analyst – IT Information Systems
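To illustrate the consolidation idea, here is a minimal Python sketch, assuming a hypothetical mapping table: each descriptive variant rolls up to one planning role, with the lost detail preserved as a qualifier that could live in an OBS or custom attribute.

```python
# Illustrative sketch of consolidating descriptive role variants into a
# single planning role, preserving the detail as a qualifier (e.g. an OBS
# or custom attribute value). Hypothetical data, not a CA PPM API.

ROLE_MAP = {
    "Business Analyst - India": ("Business Analyst", "India"),
    "Sr. Business Analyst": ("Business Analyst", "Senior"),
    "Business Analyst - Billing Sys.": ("Business Analyst", "Billing Systems"),
    "Business Analyst - IT Information Systems": ("Business Analyst", "IT Information Systems"),
}

def consolidate(role_name: str):
    """Return (planning_role, qualifier) for a legacy role name."""
    return ROLE_MAP.get(role_name, (role_name, None))

print(consolidate("Sr. Business Analyst"))
# ('Business Analyst', 'Senior') -- capacity rolls up to one role, while the
# qualifier can still drive filtering via OBS or custom attributes.
```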
12. Resource Management Impacts

Similar to process, it is important to account for the type of work that is performed by areas of the organization. There will be differences in the number of allocations that need to be managed.

• When reviewing the approach for the teams, also consider the number of projects that will display on timesheets

Impact by team:
• PMO: Each resource typically engaged on 1-4 projects at a time
• Application Support: Resources handle multiple applications and the varying projects to support the application (KTLO vs. enhancement requests)
• DBA: Usually spread across multiple projects with smaller commitments. DBAs are also typically on support projects.
• Developer: May have pooled resources and a development lead engaged on each project
13. User Adoption

Always consider the impact to the end users and how they will respond to the decisions that are made. Most resources are already stretched thin; Resource Management should be viewed as a solution for them, not a burden.

• Establish expectations
– Weekly / Bi-Weekly meeting to review organizational capacity
– Set targets for demand fulfillment and allocations
• Use metrics within CA PPM to measure adoption (see the sketch after this list)
– Forecasted and Actual Utilization
– Population of base data
• Provide on-going training and mentoring
– Ask-the-expert sessions
– Team-based SME
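As a concrete illustration of the utilization metrics named in the list above, here is a small Python sketch with hypothetical data (a real report would query CA PPM): forecasted utilization compares allocations to availability, and actual utilization compares timesheet actuals to availability.

```python
# Hypothetical sketch of two adoption metrics:
#   forecasted utilization = allocated hours / available hours
#   actual utilization     = timesheet actual hours / available hours
# Data is illustrative; a real report would query CA PPM.

def utilization(hours: float, available: float) -> float:
    return hours / available if available else 0.0

team = [
    # (resource, available hrs, allocated hrs, actual hrs) for one month
    ("jsmith", 160.0, 152.0, 148.0),
    ("akhan",  160.0,  80.0,  25.0),  # low actuals may signal poor adoption
]

for name, avail, alloc, actual in team:
    print(f"{name}: forecast {utilization(alloc, avail):.0%}, "
          f"actual {utilization(actual, avail):.0%}")
# jsmith: forecast 95%, actual 92%
# akhan: forecast 50%, actual 16%
```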
18. Demand Classification

By default, CA PPM provides Role as the main defining characteristic for named resources and unfilled demand. The role is then used in conjunction with Skills and OBS to provide the next level of detail.

Issue: CA PPM Skills are difficult to use:
• No functional updates to skills in multiple releases
• Does not allow for role based filtering
• Capacity not aligned to skills
Approach: We encourage the use of skills, despite the shortcomings. Rego also introduces the use of a Primary Skill approach. This is discussed in a moment.

Issue: Utilizing the OBS to route demand:
• Resource OBS typically represents organization, not functional teams
• Frequently an additional OBS is needed to support the Staff OBS function
Approach: Utilization of multiple OBS is not necessarily a bad approach. However, we need to be aware of ongoing maintenance.
19. Resource and Team Data

Attributes entered on the resource record are not available in views that display allocations. This presents a challenge, as capacity and resource utilization do not match data that resides at the resource level.

Issue:
• Primary Role is the only attribute carried forward from the resource record to the Team
• Changes to the Primary Role need to be manually updated at the project level
• Custom attributes added to better define resources must also be added to the team in order for allocation to be associated
Approach: Create and execute “balancing jobs” that tie the Resource Record information to the Project Team. Ensure that the balancing job accounts for the Assignment Roles also; financial processing (timesheets) uses the Assignment Role to associate the hours to a rate. The “balancing jobs” also allow for the custom values to be brought into the RM processes and measurements.
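The balancing logic itself is simple to express. Below is a minimal Python sketch of the synchronization described above; in practice this would typically be a GEL script or SQL job against the CA PPM schema, and the record structures here are hypothetical, not CA PPM APIs.

```python
# Hypothetical sketch of a "balancing job": copy resource-record attributes
# (primary role, custom fields such as primary skill) down to the project
# team records so allocation views can report on them. Not CA PPM code.

resources = {
    "jsmith": {"primary_role": "Business Analyst",
               "primary_skill": "Billing Systems"},
}

team_records = [
    {"resource_id": "jsmith", "project": "PRJ-001",
     "team_role": "Sr. Business Analyst", "primary_skill": None},
]

def balance(resources, team_records):
    """Push resource-record values onto each team (allocation) record."""
    for rec in team_records:
        src = resources.get(rec["resource_id"])
        if src is None:
            continue  # unfilled role rows have no named resource to sync from
        rec["team_role"] = src["primary_role"]       # keep roles aligned
        rec["primary_skill"] = src["primary_skill"]  # carry custom attribute

balance(resources, team_records)
print(team_records[0])
# {'resource_id': 'jsmith', 'project': 'PRJ-001',
#  'team_role': 'Business Analyst', 'primary_skill': 'Billing Systems'}
```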
20. Primary Skill Overview

● Primary Skill is used to designate the area of specialization for the resource:
○ “What is the resource known for within the organization?”
● When the Primary Skill is populated on the Resource Record:
○ Used for Capacity Planning in conjunction with Role
○ Added to the resource skill page
○ Used for matching against unfilled demand
○ Transferred to all of the resource’s active Project Allocations

The use of Primary Skill allows for a decrease in the number of Roles in the organization, while allowing for greater detail regarding the available knowledge base.
21. Resource Record Balancing Job

● Resource Record Update
● Team Update
● Balancing Processes
○ Resource Record Skill update
○ Resource Record to Team and Assignment
○ Team Record update for Finder Search
● Ensure a common setup of resource flow between Resource Planning and Team
22. Resource Finder Search

When executing a resource finder search, the default population of the search is preset and cannot be altered. This presents an issue for organizations that want to customize their search parameters.

Issue (Finder functions):
• Project Allocation, Project Role, Skills, Staff OBS are defaulted
• Fields can be added manually from the team object attributes, but they must also reside on the resource record
• Users do not like that their custom values need to be populated manually
Approach: In order for additional values to be added to the finder search, Rego typically will add the designated Primary Skill on the unfilled role to the resource's skills listing. When the resource finder is launched, the skill is pulled into the search; Resource Finder then searches for the overall skills on the resource record.
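The matching idea can be sketched in a few lines of Python. This is illustrative logic under stated assumptions, not CA PPM's actual Resource Finder algorithm: the unfilled role's designated Primary Skill is injected into the search and matched against each resource's overall skill listing.

```python
# Hypothetical sketch of skill-based matching in the spirit of the approach
# above. Illustrative logic only, not CA PPM's Resource Finder.

unfilled_role = {"role": "Business Analyst", "primary_skill": "Billing Systems"}

candidates = [
    {"name": "jsmith", "role": "Business Analyst",
     "skills": {"Billing Systems", "SQL"}},
    {"name": "akhan", "role": "Business Analyst",
     "skills": {"IT Information Systems"}},
]

def matches(role_req, resource):
    """Role must match; the injected Primary Skill is matched against the
    resource's overall skill listing."""
    return (resource["role"] == role_req["role"]
            and role_req["primary_skill"] in resource["skills"])

print([c["name"] for c in candidates if matches(unfilled_role, c)])
# ['jsmith']
```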
24. Multiple Calendars

Many organizations, especially global ones, use multiple calendars. The association of the various calendars to roles and resources creates issues with data consistency. These issues are more persistent in international deployments of CA PPM.

Issue: Roles are associated to a single calendar:
• Best practice is to use a single set of roles
• Issues occur when a role is planned against one calendar and a named resource is staffed to the project from another calendar, often leaving demand on the role
Approach: Rather than increase the number of Roles to match the hours, triggers (where applicable) have been used to reference Calendar roles and replace the project role. In summary, it is a simple swap that aligns to the planning function for the project.

Issue: Standard calendar used for all measurement unit conversions:
• Impacts FTE / Day conversion when there are multiple calendars determining availability
Approach: Requires customized reporting. This is a core feature that can only be addressed by CA.
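To see why a single standard calendar distorts FTE figures, consider this small Python example with hypothetical numbers: a fully allocated resource on a 7.5-hour/day regional calendar appears as less than 1.0 FTE when converted against an 8-hour standard day.

```python
# Illustration of the FTE conversion problem with multiple calendars.
# Hypothetical figures; CA PPM's actual conversion uses its standard calendar.

STANDARD_HOURS_PER_DAY = 8.0   # the single calendar used for conversions

def fte(allocated_hours_per_day: float,
        conversion_hours_per_day: float = STANDARD_HOURS_PER_DAY) -> float:
    return allocated_hours_per_day / conversion_hours_per_day

# A fully allocated resource on a 7.5-hour/day regional calendar:
print(fte(7.5))        # 0.9375 -- looks under-allocated against the 8h standard
# The same resource converted against their own calendar:
print(fte(7.5, 7.5))   # 1.0    -- the intended "fully allocated" reading
```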
26. General Approach

1. Availability: How many resources? What do they know? When can they work?
Key outputs:
• Establish Projects
• Complete Resource records
○ Primary Role
○ Calendar
○ Primary Focus (Skill) as Required
○ Primary Specialty as Required

2. Allocations: What projects are resources allocated to? What % of time are they supporting the projects?
Key outputs:
• Allocate resources to projects
• Confirm allocations
• Approve allocations
• Allocate roles to projects to highlight unfilled demand
• Replace roles with specific resources

3. Utilization: What is the actual amount of time spent? What is the forecasted amount of work effort remaining at a project level?
Key outputs:
• Track actual hours
• Develop detailed task-level WBS
• Create task dependencies
• Create resource task assignments
• Define estimates to complete (ETCs) for each task assignment
• Schedule WBS to evaluate and optimize dependencies/assignments
• Re-schedule to ensure accurate forecasting of uncompleted work
27. Recommendations

● Start with Process Alignment
○ Demo the standard RM functionality to provide context
○ Understand how CA PPM RM will support existing processes and what adjustments will need to be made
○ If a project prioritization process is not in place, try to achieve some level of formality so that resource requests can be triaged
● Review current role usage
○ Ensure roles are aligned to project work, not titles
○ Eliminate confusing naming conventions on roles
○ Work with the organization to establish a concise list of roles
● Implement a Resource Record to Team Record update
○ Primary Role to Project Role
○ Custom attribute updates
● Leverage CA PPM Skill functions through custom attributes
○ Create Primary Skill and/or Primary Application fields
○ Utilize population of the resource record to drive initial skill
○ Use Primary Skill and Application for demand routing and capacity management
34. HSBC background

● CA PPM/Niku customer since 2005
● Rego Consultant services provided since 2010
● One partition; 3 Main Department Functions
○ Dept 1 – 3,000+ billable resources; fully deployed
○ Dept 2 – 20,000+ billable resources; Phase I and II only
○ Dept 3 – 300 resources (note: Department will onboard in 2014-2015)
● 37,200 active resources
○ 31,000+ OTE
○ 27,500+ Billable
● 39,500+ active projects
○ 27,000+ Billable
35. Overview Resource/Project Details

● Gaining value from CA PPM depends on effective use of the tool and increasing end-user adoption. Project and Resource Management metrics are required to baseline and track adoption as part of monitoring and improving the value of the data in CA PPM
● The CA PPM Maturity Model (CMM) was implemented in 2008 in response to an audit requirement to track CA PPM usage and data
● The CA PPM SWAT team is looking to materially improve the existing CMM model to eliminate “box ticking” and implement metrics that will drive sound resource and project management behavior