Project Controls Expo, 13th Nov 2013 - "Loading Cost and Activity data into Primavera - a low cost, flexible tool for planners and engineers" By David Kelly
Two Acronyms - CPI:
The Planner's CPI, that is, the Crap/Planning Index: how long do you spend typing, begging for information, reformatting the information prior to input, and checking that what you typed was what you thought you typed, as opposed to actually planning?
The Wright Way into the Cloud: The Argument for ARCS (Alithya)
This presentation highlights a formalized, centralized, and comprehensive solution that reduces manual effort on low-value reconciliation tasks; provides transparency, traceability, and auditability across the reconciliation cycle workflow; encourages an evaluation of which accounts and reconciliations are actually needed for the reconciliation cycle; and eases communication among Preparers, Reviewers, and Administrators.
Using InfluxDB for Full Observability of a SaaS Platform by Aleksandr Tavgen... (InfluxData)
Aleksandr Tavgen from Playtech, the world's largest online gambling software supplier, will share how they are using InfluxDB 2.0, Flux, and the OpenTracing API to gain full observability of their platform. In addition, he will share how InfluxDB has served as the glue for coping with multiple sets of time series data, especially in the case of understanding online user activity, a use case that is normally difficult without the math functions now available in Flux.
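As a rough illustration of the kind of server-side, per-series math the abstract attributes to Flux, a trailing-window z-score can flag unusual activity counts. This is a plain-Python sketch, not Flux syntax; the function name and data are illustrative only.

```python
import statistics
from collections import deque

def rolling_zscores(values, window=20):
    """Score each point against a trailing window of recent values:
    the kind of per-series math a time series query engine applies
    server-side (plain Python here, not Flux)."""
    buf = deque(maxlen=window)
    scores = []
    for v in values:
        if len(buf) >= 2:
            stdev = statistics.pstdev(buf) or 1.0   # guard against zero spread
            scores.append((v - statistics.fmean(buf)) / stdev)
        else:
            scores.append(0.0)   # not enough history yet
        buf.append(v)
    return scores

activity = [10, 11, 9, 10, 12, 11, 10, 95, 10, 11]  # one obvious spike
scores = rolling_zscores(activity, window=5)         # the spike scores far above 3
```

A fixed threshold on the score (say, 3) then separates ordinary fluctuation from anomalies without storing the full history per series.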
Ted Dunning - Keynote: How Can We Take Flink Forward? (Flink Forward)
http://flink-forward.org/kb_sessions/keynote-tba/
Apache Flink has come a long way from its academic beginnings. It is now one of the most technically advanced solutions for streaming computation. And batch computation, too. Flink has serious technical advantages when compared with nearly every alternative system.
This success ironically means that Apache Flink is right on the cusp of a critical moment. Over the next few months it will be decided whether Flink is the Next Big Thing or if it is a fine technology with limited impact.
Right now, what you and I do can make a huge difference. But as business people like to say, what got Flink here isn’t what’s going to get it there. The challenges the Flink community faces now are different from the technical challenges it has met so far.
I will talk about what I think will help and how we can all pitch in to take Flink forward.
Speaker: Darlene Nerden, IBM
Overview: In this session we will review the Maximo architecture and the factors that influence performance. We will discuss tuning those factors for performance impact, and we will look at troubleshooting tools and Maximo settings that help identify and resolve Maximo performance issues.
Planes, Trains and Automobiles – Handling Infrastructure Assets with FME (Safe Software)
From airports to railways to roads, infrastructure asset handling is a complex task that inevitably involves a wide range of incompatible formats given the different applications and standards involved in design, management and data exchange. Using real cases, this webinar will demonstrate how FME can overcome data interoperability challenges commonly encountered in infrastructure asset handling and streamline processes.
Maintaining the full Primavera suite at a large construction company (p6academy)
Referenced: www.p6academy.com
Source: http://coll15.mapyourshow.com
Standardization is key to maintaining an environment in today's ERP. Primavera has some unique challenges related to global data, pollution of data through XER importing, and general change management of differences across older Primavera versions. Cleanup and maintenance processes and procedures need to be developed, kept consistent, and followed to truly maintain the environment.
Stream processing consists of ingesting and processing continuously generated data, often from end users in web applications or from more challenging settings where devices such as servers and sensors generate events at a high rate. Such scenarios often demand the use of a software stack that is able to scale and accommodate changes to the characteristics of the application.
One of the major challenges with processing data streams is adapting to workload variations (e.g., due to daily cycles or the growth of the population of sources). Systems to ingest stream data typically parallelize it by sharding the incoming messages and events according to a routing key. Having the ability to parallelize ingestion is very effective, but future changes to the workload (which are very often unknown beforehand) might make the initial choice for the degree of parallelism inadequate for even short-term spikes. Consequently, the ability to scale by adapting parallelism according to workload while preserving important API properties, such as per-key order, is highly desirable to handle mission-critical workloads.
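The routing-key sharding described above can be sketched in a few lines. The `shard_for` function and the event tuples below are illustrative, not any particular system's API; note that changing the shard count re-maps keys, which is exactly why re-scaling while preserving per-key order is hard.

```python
import hashlib

def shard_for(key: str, num_shards: int) -> int:
    """Deterministically map a routing key to a shard."""
    digest = hashlib.sha256(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % num_shards

events = [("user-1", "login"), ("user-2", "click"), ("user-1", "logout")]

# All events for one key land on one shard, so per-key order is
# preserved as long as each shard is consumed sequentially.
shards: dict[int, list[tuple[str, str]]] = {}
for key, payload in events:
    shards.setdefault(shard_for(key, 4), []).append((key, payload))
```

If `num_shards` later changes, most keys hash to different shards, so a naive re-hash breaks ordering guarantees mid-stream; systems like Pravega instead split and merge key ranges into new segments to scale without violating per-key order.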
In this presentation, we explain how to accommodate changes to workloads in and with Pravega, an open source stream store built to ingest and serve stream data. Pravega primarily manipulates and stores segments (append-only byte sequences), forming streams by creating and composing segments, which it uses to enable the scaling of streams. Stream scaling in Pravega is automatic and transparent to the application, but such a change to the ingestion volume might also require the application to follow and scale its resources downstream (e.g., the operators of an Apache Flink job) to accommodate the new ingestion volume. Pravega signals such changes to the application so that it can react accordingly. The cooperation between Pravega and the downstream application is crucial for building an effective stream data pipeline.
Event Streaming Architecture for Industry 4.0 - Abdelkrim Hadjidj & Jan Kuni... (Flink Forward)
New use cases under the Industry 4.0 umbrella are playing a key role in improving factory operations, process optimization, cost reduction and quality improvement. We propose an event streaming architecture to streamline the information flow all the way from the factory to the main data center. Building such a streaming architecture enables a manufacturer to react faster to critical operational events. However, it presents two main challenges:
Data acquisition in real time: data should be collected regardless of its location or access challenges. It is commonplace to ingest data from hundreds of heterogeneous data sources (ERP, MES, sensors, maintenance systems, etc.).
Event processing in real time: events collected from different parts of the organization should be combined into actionable insights in real time. This is extremely challenging in a context where events can be lost or delayed.
In this talk, we show how Apache NiFi and MiNiFi can be used to collect data from a wide range of sources in real time, connecting the industrial and information worlds. Then, we show how Apache Flink's unique features enable us to make sense of this data. For instance, we will explain how Flink's time management features, such as event-time mode, late-arrival handling, and the watermark mechanism, can be used to address the challenge of processing IoT data originating from geographically distributed plants. Finally, we demonstrate an end-to-end streaming architecture for Industry 4.0 based on the Cloudera DataFlow platform.
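The watermark mechanism mentioned above can be illustrated framework-free: the watermark trails the largest event timestamp seen so far by a bounded out-of-orderness, and anything that arrives behind it counts as late. This is a toy sketch, not Flink's API; `WatermarkTracker` is a made-up name.

```python
from dataclasses import dataclass

@dataclass
class WatermarkTracker:
    """Toy event-time watermark: trails the largest timestamp seen
    so far by a fixed out-of-orderness bound."""
    max_out_of_orderness: int   # same unit as the event timestamps
    max_ts: int = 0

    def watermark(self) -> int:
        return self.max_ts - self.max_out_of_orderness

    def observe(self, event_ts: int) -> bool:
        """Return True if the event is on time, False if it is late."""
        late = event_ts < self.watermark()
        self.max_ts = max(self.max_ts, event_ts)
        return not late

tracker = WatermarkTracker(max_out_of_orderness=5)
on_time = [tracker.observe(ts) for ts in (100, 97, 90)]
# 100 and 97 are on time; 90 arrives behind the watermark (95) and is late
```

In a real pipeline the watermark also drives window firing: a window closes once the watermark passes its end, and late events are routed to side handling instead of silently reordering results.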
Time Series Analysis Using an Event Streaming Platform (Dr. Mirko Kämpf)
Advanced time series analysis (TSA) requires specialized data preparation procedures to convert raw data into useful, compatible formats.
In this presentation you will see some typical processing patterns for time series based research, from simple statistics to reconstruction of correlation networks.
The first case is relevant for anomaly detection and for protecting safety.
Reconstruction of graphs from time series data is a very useful technique to better understand complex systems like supply chains, material flows in factories, information flows within organizations, and especially in medical research.
With this motivation we will look at typical data aggregation patterns, investigate how to apply analysis algorithms in the cloud, and finally discuss a simple reference architecture for TSA on top of the Confluent Platform or Confluent Cloud.
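One of the patterns mentioned above, reconstructing a correlation network from time series, can be sketched as follows. This is a simplified illustration using Pearson correlation and a fixed threshold; real analyses typically use more robust estimators and significance testing.

```python
import numpy as np

def correlation_network(series: np.ndarray, threshold: float = 0.8):
    """Connect series i and j when |Pearson correlation| >= threshold."""
    corr = np.corrcoef(series)          # pairwise correlations, shape (n, n)
    n = corr.shape[0]
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if abs(corr[i, j]) >= threshold]

rng = np.random.default_rng(0)
base = rng.normal(size=200)
series = np.vstack([
    base,                                # series 0
    base + 0.05 * rng.normal(size=200),  # series 1: noisy copy of series 0
    rng.normal(size=200),                # series 2: independent
])
edges = correlation_network(series)      # only series 0 and 1 get connected
```

The resulting edge list is a graph over the series, which can then be analyzed with standard network tools (communities, centrality) to reveal structure in supply chains, material flows, or organizational information flows.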
Flink Forward Berlin 2018: Aljoscha Krettek & Till Rohrmann - Keynote: "A Yea..." (Flink Forward)
Stream processing still evolves and changes at a speed that can make it hard to keep up with the developments. Being at the forefront of stream processing technology, the evolution of Apache Flink has mirrored many of these developments and continues to do so.
We will take you on a journey through the major milestones of stream processing technology in past years, diving into the latest additions that Apache Flink and other communities have introduced to the stream processing landscape, such as Streaming SQL, time-versioned tables, cluster-library duality, language portability, etc.
We will then take a sneak peek into our crystal ball and present what the Flink community is working on next.
Continuous Intelligence - Intersecting Event-Based Business Logic and ML (Paris Carbone)
Modern data-driven business infrastructure is not as effective as it should be when it comes to critical decision making. End-to-end data pipelines are composed of fundamentally diverse pieces of technology, each focusing on a specific frontend (e.g., DataFrames, tensors, streams) and running in total isolation, and are therefore highly unoptimised and complex to integrate with event-based business logic. Our research group has been looking into ways to use advanced systems theory to compile, optimise, and execute distributed functions in unison across the whole spectrum of data-driven programming, leading to a unified way to combine analytics and services all the way down to hardware execution and make continuous intelligence a reality.
Key Takeaways
Introducing the concept of Continuous Intelligence and why we are not there yet.
Pinpointing weaknesses in the way we structure data-driven pipelines today.
Explaining the potential of an Intermediate Representation (IR) and Shared Hardware Execution support to solve the problem.
Presenting our vision on how this new tech can be used to radically change the way we declare and distill knowledge from data in a fast-changing world.
Building a real time Tweet map with Flink in six weeks (Matthias Kricke)
In this talk we present OSTMap, a tool built by six students over the course of six weeks. Each student contributed as little as 5-10 hours per week and had no prior experience with Big Data or the frameworks used. We also present the concept of geotemporal indices for our use case.
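A geotemporal index of the kind mentioned above can be sketched as a composite row key: a geohash prefix for space plus a time bucket, so records close in space and time sort next to each other and can be fetched with cheap range scans. This is an illustrative sketch under that assumption, not OSTMap's actual implementation; `geotemporal_key` is a made-up helper.

```python
def geohash(lat: float, lon: float, precision: int = 6) -> str:
    """Minimal standard geohash encoder (interleaved lon/lat bits, base32)."""
    base32 = "0123456789bcdefghjkmnpqrstuvwxyz"
    lat_range, lon_range = [-90.0, 90.0], [-180.0, 180.0]
    result, even, bit_count, ch = [], True, 0, 0
    while len(result) < precision:
        rng, value = (lon_range, lon) if even else (lat_range, lat)
        mid = (rng[0] + rng[1]) / 2
        if value >= mid:
            ch = (ch << 1) | 1
            rng[0] = mid
        else:
            ch <<= 1
            rng[1] = mid
        even = not even
        bit_count += 1
        if bit_count == 5:          # 5 bits per base32 character
            result.append(base32[ch])
            bit_count, ch = 0, 0
    return "".join(result)

def geotemporal_key(lat: float, lon: float, epoch_seconds: int,
                    bucket: int = 3600) -> str:
    """Row key = geohash prefix + hour bucket: nearby tweets in space
    and time sort together in an ordered key-value store."""
    return f"{geohash(lat, lon)}_{epoch_seconds // bucket:010d}"
```

A bounding-box query over a time window then becomes a handful of key-range scans, one per geohash cell that intersects the box, instead of a full table scan.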
Using the R Language in BI and Real Time Applications (useR 2015), by Lou Bajuk
R provides tremendous value to statisticians and data scientists; however, they are often challenged to integrate their work and extend that value to the rest of their organization. This presentation will demonstrate how the R language can be used in Business Intelligence applications (such as financial planning and budgeting, marketing analysis, and sales forecasting) to put advanced analytics into the hands of a wider pool of decision makers. We will also show how R can be used in streaming applications (such as TIBCO StreamBase) to rapidly build, deploy, and iterate predictive models for real-time decisions. TIBCO's enterprise platform for the R language, TIBCO Enterprise Runtime for R (TERR), will be discussed, and examples will include fraud detection, marketing upsell, and predictive maintenance.
McLachlan Lister provides a range of management consulting and project management services. These are offered either discretely or as an integrated service - you control the depth of our relationship:
As presented at the AACE 2016 General Meeting, this presentation provides unique solutions to Primavera P6 problems, such as cleaning large XER files, finding relationship lag in P6, automating XER backups, easy DCMA 14-point schedule analysis, importing XER files, and more. This presentation also describes a better way to train on Primavera P6.
How to prepare recovery or revised schedule rev.2 (Abdelhay Ghanem)
See my solution to one of the hardest issues facing most planners: preparing a recovery schedule.
I prepared and uploaded a file (in English) explaining how to prepare a recovery baseline schedule. With the newly derived equation (the Ghanem equation), it is now easy to apply in Primavera.
Construction Delay Analysis, Simplified (Michael Pink)
Learn how to perform a delay analysis in the construction industry. Capture and study your impacts to determine why a project was late. Use this proven method to ensure that you get paid for delays caused by others.
Similar to Project Controls Expo, 13th Nov 2013 - "Loading Cost and Activity data into Primavera - a low cost, flexible tool for planners and engineers" By David Kelly
Initiative Based Technology Consulting Case Studies (chanderdw)
Our initiative-based "pay-as-you-go" model empowers you to buy only the services you need, without long-term contract obligations, and optimizes your resources with greater accuracy and efficiency.
An agile, flexible technology partner using this model helps clients secure resources in advance, map them to their initiatives, and enjoy on-demand service availability, which means real-time project control.
You gain improved transparency into your technology spend, with predictable, consumption-based cash flow. The client benefits from utilizing resources only as and when required during the lifecycle of the technology initiative.
This is a RECAP project overview slide deck prepared by Thang Le Duc (UMU), P-O Östberg (UMU), and Tomas Brännström (Tieto). It starts with an introduction, continues with a section on the challenges of a self-orchestrated, self-remediated cloud system, then presents the RECAP vision and use cases, and finishes with a conclusion.
Are you planning to move existing applications to the cloud and want to avoid setbacks? These slides are from a webinar jointly presented by Atmosera and iTrellis, LLC on how to assess your needs, plan a migration, and successfully operate your applications in a modern cloud environment. The webinar covers:
* What re-platforming means and why you need to think about it
* How to take full advantage of a cloud such as Azure: agility, flexibility, and cost savings
* Lessons learned and best practices for planning a successful move to a modern cloud.
The full webinar playback URL is at https://www.atmosera.com/webinar-replatforming-application-cloud/
Cloud-oriented development is a reality. Many companies have replaced their tools and modified their operations to obtain the benefits offered by this new paradigm. This session addresses topics related to the emergence of these technologies, most notably the different service and deployment models, adoption strategies, and the use of existing tools such as Kubernetes.
The RECAP Project: Large Scale Simulation Framework (RECAP Project)
In this presentation, Sergej Svorobej (DCU) gave a brief overview of RECAP and introduced the large scale simulation framework used in the project. The event was held in conjunction with the National Conference on Cloud Computing and Commerce (http://2018.nc4.ie/) and took place April 10, 2018 in Dublin, Ireland.
Learn more about RECAP: https://recap-project.eu/
Using the NERD stack to move on from XPages - a new beginning
This customer-solution session is about our first production project using domino-db, the Domino AppDevPack, and the IAM service to extend a large XPages-based web portal. While this project might still be ongoing when it is time to give this presentation, we will present our way forward: declarative front ends based on web components, plus domino-db-based REST APIs, building a solution that moves the customer's external web portal above and beyond what XPages were able to do, while still keeping the benefits of quick turnaround times and low maintenance costs. See how we build Domino views, forms, and search capabilities like never before!
Reimagining Devon Energy's Data Estate with a Unified Approach to Integration... (Databricks)
Devon Energy is a Fortune 500 company focused on unconventional upstream oil and gas production. With a companywide focus on innovation and data-driven decision making, IT has been challenged to make more data available to more people more quickly. To this end, we have leveraged the scale of Microsoft Azure and Databricks’ Unified Analytics Platform to help reimagine our integration, data warehousing and analytics landscape to improve agility while moving our workloads to the cloud. We are in the third year of this transformation and have lessons learned around improving the testability of data pipelines, code management, model training and deployment, promotion, and user empowerment. In this talk, we will share our experience managing the lifecycle of data engineering and machine learning solutions and striking the balance between agility and reliability in a single platform, while democratizing data access to users from all disciplines across the company.
Author: Paul Bruffett
ARES PRISM - Integrating Project & Portfolio Management by Lateef Daly at Pro... (Project Controls Expo)
ARES PRISM - Integrating Project & Portfolio Management by "Lateef Daly"- Director of Operations, Europe & Middle East for ARES PRISM, UK at Project Controls Expo 2017, Arsenal Stadium, London
Faster, more Secure Application Modernization and Replatforming with PKS - Ku... (VMware Tanzu)
Faster, more Secure Application Modernization and Replatforming with PKS - Kubernetes for the Enterprise - London
Alex Ley
Associate Director, App Transformation, Pivotal EMEA
28th March 2018
MapR on Azure: Getting Value from Big Data in the Cloud (MapR Technologies)
Public cloud adoption is exploding and big data technologies are rapidly becoming an important driver of this growth. According to Wikibon, big data public cloud revenue will grow from 4.4% in 2016 to 24% of all big data spend by 2026. Digital transformation initiatives are now a priority for most organizations, with data and advanced analytics at the heart of enabling this change. This is key to driving competitive advantage in every industry.
There is nothing better than a real-world customer use case to help you understand how to get value from big data in the cloud and apply the learnings to your business. Join Microsoft, MapR, and Sullexis on November 10th to:
Hear from Sullexis on the business use case and technical implementation details of one of their oil & gas customers
Understand the integration points of the MapR Platform with other Azure services and why they matter
Know how to deploy the MapR Platform on the Azure cloud and get started easily
You will also get to hear about customer use cases of the MapR Converged Data Platform on Azure in other verticals such as real estate and retail.
Speakers
Rafael Godinho
Technical Evangelist
Microsoft Azure
Tim Morgan
Managing Director
Sullexis
National Grid's Project Controls Journey – past, present and future (Project Controls Expo)
National Grid's Project Controls Journey – past, present and future by Tim Fenemore - Head of Project Controls for National Grid, UK
Sanjay Lodhia - Head of Governance & Assurance for National Grid, UK
Tony Abiad - Head of Estimating & Cost for National Grid, UK
Balancing burdens of proof in dispute avoidance and resolution by "Ari Isaacs...Project Controls Expo
Balancing burdens of proof in dispute avoidance and resolution by "Ari Isaacs - Founder and CEO for ShapeDo, Israel" at Project Controls Expo 2017, Arsenal Stadium, London
Managing Changes on a Capital Programme by "Iain Cameron - Technical Director...Project Controls Expo
Managing Changes on a Capital Programme by "Iain Cameron - Technical Director for LogiKal Projects, UK" at Project Controls Expo 2017, Arsenal Stadium, London
Managing the risk of change by "Simon White - Risk Management Consultant for ...Project Controls Expo
Managing the risk of change by "Simon White - Risk Management Consultant for Trigo White Ltd, UK" at Project Controls Expo 2017, Arsenal Stadium, London
Economic Value Chains - costing the impact of risk by "Colin Sandall - Senior...Project Controls Expo
Economic Value Chains - costing the impact of risk by "Colin Sandall - Senior Consultant for QinetiQ, UK" & "Peter Tart - Senior Consultant for QinetiQ, UK at Project Controls Expo 2017, Arsenal Stadium, London
What makes a good plan, and how do you know you’ve got one? by "Paul Kidston ...Project Controls Expo
What makes a good plan, and how do you know you’ve got one? by "Paul Kidston - Director of Project Controls for Costain Group PLC, UK" at Project Controls Expo 2017, Arsenal Stadium, London
Next Generation Planning for Mega Projects by "Dr. Dan Patterson - CEO and Fo...Project Controls Expo
Next Generation Planning for Mega Projects by "Dr. Dan Patterson - CEO and Founder for Basis PM, Texas, USA" at Project Controls Expo 2017, Arsenal Stadium, London
Governance: An Enabler? by " Ian Beaumont - Delivery Partner Programme Direc...Project Controls Expo
Governance: An Enabler? by " Ian Beaumont - Delivery Partner Programme Director for WSP, UK" & "Danny Vaughan - Head of Metrolink for TfGM, UK" at Project Controls Expo 2017, Arsenal Stadium, London
Governance and the art of decision making on Crossrail by "Walter Macharg - H...Project Controls Expo
Governance and the art of decision making on Crossrail by "Walter Macharg - Head of Change Control and Cost Assurance for Crossrail, UK" at Project Controls Expo 2017, Arsenal Stadium, London
Adding Value through the Lower Thames Crossing project by "Iain Minns - Partn...Project Controls Expo
Adding Value through the Lower Thames Crossing project by "Iain Minns - Partner, UK Head of Programme & Project Controls for Arcadis, UK" at Project Controls Expo 2017, Arsenal Stadium, London
Governance and Assurance for Nationally Significant Infrastructure Projects b...Project Controls Expo
Governance and Assurance for Nationally Significant Infrastructure Projects by "Terri Harrington"- Sponsorship Director - Complex Infrastructure Programme (CIP) for Highways England, UK at Project Controls Expo 2017, Arsenal Stadium, London
A review of whether interdependency exists in effective Project Control Metho...Project Controls Expo
A review of whether interdependency exists in effective Project Control Methods and project performance by "Michael Halliday" Director for HKA, UK at Project Controls Expo 2017, Arsenal Stadium, London
A Source of Project Cost Integration by "David Hurren - Technical Director fo...Project Controls Expo
A Source of Project Cost Integration by "David Hurren - Technical Director for RPCuk, UK Marie Trembacki - Associate Director for Faithful+Gould, UK" at Project Controls Expo 2017, Arsenal Stadium, London
Programmatic Controls Approach to; Fast track Disaster Recovery by "Saurabh B...Project Controls Expo
Programmatic Controls Approach to; Fast track Disaster Recovery by "Saurabh Bhandari - Technical Principal - Project & Programme Controls for Mott MacDonald, UK" at Project Controls Expo 2017, Arsenal Stadium, London
Enterprise Scheduling and Risk Management at Los Angeles Metro by "Julie Owen...Project Controls Expo
Enterprise Scheduling and Risk Management at Los Angeles Metro by "Julie Owen - DEO Program Management for LA Metro, Los Angeles" at Project Controls Expo 2017, Arsenal Stadium, London
This presentation by Morris Kleiner (University of Minnesota), was made during the discussion “Competition and Regulation in Professions and Occupations” held at the Working Party No. 2 on Competition and Regulation on 10 June 2024. More papers and presentations on the topic can be found out at oe.cd/crps.
This presentation was uploaded with the author’s consent.
Sharpen existing tools or get a new toolbox? Contemporary cluster initiatives...Orkestra
UIIN Conference, Madrid, 27-29 May 2024
James Wilson, Orkestra and Deusto Business School
Emily Wise, Lund University
Madeline Smith, The Glasgow School of Art
Acorn Recovery: Restore IT infra within minutesIP ServerOne
Introducing Acorn Recovery as a Service, a simple, fast, and secure managed disaster recovery (DRaaS) by IP ServerOne. A DR solution that helps restore your IT infra within minutes.
Have you ever wondered how search works while visiting an e-commerce site, internal website, or searching through other types of online resources? Look no further than this informative session on the ways that taxonomies help end-users navigate the internet! Hear from taxonomists and other information professionals who have first-hand experience creating and working with taxonomies that aid in navigation, search, and discovery across a range of disciplines.
0x01 - Newton's Third Law: Static vs. Dynamic AbusersOWASP Beja
f you offer a service on the web, odds are that someone will abuse it. Be it an API, a SaaS, a PaaS, or even a static website, someone somewhere will try to figure out a way to use it to their own needs. In this talk we'll compare measures that are effective against static attackers and how to battle a dynamic attacker who adapts to your counter-measures.
About the Speaker
===============
Diogo Sousa, Engineering Manager @ Canonical
An opinionated individual with an interest in cryptography and its intersection with secure software development.
This presentation, created by Syed Faiz ul Hassan, explores the profound influence of media on public perception and behavior. It delves into the evolution of media from oral traditions to modern digital and social media platforms. Key topics include the role of media in information propagation, socialization, crisis awareness, globalization, and education. The presentation also examines media influence through agenda setting, propaganda, and manipulative techniques used by advertisers and marketers. Furthermore, it highlights the impact of surveillance enabled by media technologies on personal behavior and preferences. Through this comprehensive overview, the presentation aims to shed light on how media shapes collective consciousness and public opinion.
Media as a Mind Controlling Strategy In Old and Modern Era
Project Controls Expo, 13th Nov 2013 - "Loading Cost and Activity data into Primavera - a low cost, flexible tool for planners and engineers" By David Kelly
1. Copyright @ 2011. All rights reserved. Loading Cost and Activity data into Primavera - a low cost, flexible tool for planners and engineers. Project Controls Expo – 13th Nov 2013, Twickenham Stadium, London
2. Copyright @ 2011. All rights reserved. About David Kelly
• IT consultant in project management systems
• 37 years' experience, more than 30 of them in project planning in offshore oil, construction, and engineering
• Vast practical experience of Primavera and a deep knowledge of systems
• Influential in the design of Legare
3. Two Acronyms - CPI: The Planner's CPI – that is Crap/Planning Index – how long do you spend on typing, begging for information, reformatting the information prior to input, checking that what you typed was what you thought you typed, as opposed to actually planning?
4. Two Acronyms - KPI: Only one KPI for project controls: how long is it from the data date cut-off to the new bar-chart?
5. Integration:
• Why – The failure of object oriented databases in the late 1990's
• What – Almost all P6 data starts somewhere else
• When – Real time updating? What about the Data Date?
• Who – Collabro's "Legare" is a universal bi-directional interface
6. Two examples of Integration with Legare:
• Petrofac – Offshore Construction and Maintenance
• Heathrow – an airport in south Englandshire
7. Creating a new Project Controls Environment:
• Petrofac – Oracle's EBS and P6 being installed at the same time
• Heathrow – Huge, multi-database P6 environment
8. Petrofac – The Legacy data load:
• 40,000 activities in 40 projects
• 500 activity code dictionaries
• Recalculate relationship lag
• Renumbering activities and WBS
• From data dump to nightly job
• Now what?
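Two of the transformations listed above, renumbering activities and recalculating relationship lag, can be sketched in a few lines. This is a minimal illustration, not Legare's actual data model: the record layouts, the `PF-` numbering scheme, and the 8-hour working day are all assumptions.

```python
# Sketch of two legacy-load transformations: renumbering activity IDs onto a
# new project-wide scheme, and converting relationship lag from legacy hours
# into working days. Record layouts here are hypothetical.

def renumber_activities(activities, prefix="PF"):
    """Assign new sequential IDs; return the old-to-new mapping."""
    id_map = {}
    for n, act in enumerate(sorted(activities, key=lambda a: a["legacy_id"]), start=1):
        new_id = f"{prefix}-{n:05d}"
        id_map[act["legacy_id"]] = new_id
        act["activity_id"] = new_id
    return id_map

def remap_relationships(relationships, id_map, hours_per_day=8):
    """Point predecessor/successor links at the new IDs and recalculate
    lag from legacy hours to working days on an assumed calendar."""
    for rel in relationships:
        rel["pred"] = id_map[rel["pred"]]
        rel["succ"] = id_map[rel["succ"]]
        rel["lag_days"] = rel["lag_hours"] / hours_per_day
    return relationships

acts = [{"legacy_id": "A200"}, {"legacy_id": "A100"}]
rels = [{"pred": "A100", "succ": "A200", "lag_hours": 16}]
mapping = renumber_activities(acts)
remap_relationships(rels, mapping)
# A100 -> PF-00001, A200 -> PF-00002; 16 h of lag -> 2.0 working days
```

In a real migration the mapping table would itself be persisted, since every later load (relationships, resource assignments, code assignments) has to be remapped against it.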
10. Heathrow:
• Huge implementation of P6 to replace legacy system
• Two P6 databases: one for MSP (contractor) projects, one for Heathrow Project Controls
• This was a big job. I was involved in delivering a share of the 1500 student days of training prior to go-live.
• As with Petrofac, there was a huge legacy data issue. HAL chose a bespoke solution.
11. Tranche 1 (Live 17th June 2013):
§ PPM Standalone
§ P6 EPPM
§ Legare
§ PRA
§ BI Publisher
§ P6 Analytics
§ GlobalView Data Archive (detailed historical data)
12. Supplier & Client Databases:
• Client Database – controlled to deliver consistent reporting; structuring of programmes and projects
• Supplier Database – flexible to support detail schedules; apply codes to provide reliable upward reporting
• Client and Supplier databases are physically separated
• Reporting references are provided to the supplier database to enable summarisation
[Diagram: detail data in the Supplier Database is summarised via reporting references into the Client Database for reporting]
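The summarisation step above rolls supplier detail up to one summary row per client reporting reference. A minimal sketch of that roll-up, assuming hypothetical field names (`report_ref`, `remaining_hours`) and ISO date strings, which is not the actual Heathrow schema:

```python
# Roll supplier-database detail activities up to one summary row per client
# reporting reference: earliest start, latest finish, total remaining hours.
# ISO date strings compare correctly as text, so min/max work directly.

def summarise(detail_rows):
    summary = {}
    for row in detail_rows:
        ref = row["report_ref"]
        if ref not in summary:
            summary[ref] = {"start": row["start"], "finish": row["finish"],
                            "remaining_hours": 0.0}
        s = summary[ref]
        s["start"] = min(s["start"], row["start"])
        s["finish"] = max(s["finish"], row["finish"])
        s["remaining_hours"] += row["remaining_hours"]
    return summary

detail = [
    {"report_ref": "T5-001", "start": "2013-01-07", "finish": "2013-02-01",
     "remaining_hours": 120.0},
    {"report_ref": "T5-001", "start": "2013-01-14", "finish": "2013-03-15",
     "remaining_hours": 80.0},
]
rollup = summarise(detail)
# rollup["T5-001"] spans 2013-01-07 to 2013-03-15 with 200.0 remaining hours
```

Keeping the two databases physically separate, with only these summary rows crossing the boundary, is what lets the client database stay tightly controlled while supplier schedules remain flexible.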
14. Business Requirement
§ Load resource profile updates
§ Data sources: P6 EPPM Client database; P6 EPPM Supplier database; other external data sources (Excel spreadsheets)
§ Target databases: P6 EPPM Client database; P6 EPPM Supplier database
§ Volumes: circa 140¹ live projects in P6 EPPM Client, sourced from multiple supplier schedules/spreadsheets; circa 3,150¹ work packages across the live projects; typically 1-2 resource assignments per work package
§ Frequency: primarily at month end, with ad-hoc use through the month
¹ Based on Artemis Aug12 month end data
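Loads at this volume only stay reliable if bad spreadsheet rows are caught before they reach P6. A minimal pre-load validation sketch; the column names and rules below are illustrative assumptions, not the actual Heathrow specification:

```python
# Split incoming resource-profile rows into loadable rows and rejects,
# reporting spreadsheet row numbers so the preparer can fix the source file.

def validate_rows(rows, known_work_packages):
    clean, errors = [], []
    for i, row in enumerate(rows, start=2):   # row 1 is the header row
        problems = []
        if row.get("work_package") not in known_work_packages:
            problems.append("unknown work package")
        try:
            if float(row.get("hours", "")) < 0:
                problems.append("negative hours")
        except ValueError:
            problems.append("hours not numeric")
        if problems:
            errors.append((i, "; ".join(problems)))
        else:
            clean.append(row)
    return clean, errors

rows = [{"work_package": "WP-100", "hours": "37.5"},
        {"work_package": "WP-999", "hours": "abc"}]
clean, errors = validate_rows(rows, {"WP-100"})
# clean -> 1 row; errors -> [(3, "unknown work package; hours not numeric")]
```

Rejecting rows with a reason and a source row number, rather than failing the whole load, matters most at month end when dozens of supplier spreadsheets arrive at once.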
17. Heathrow – Cost Loading: Examined 3 alternatives:
• Bespoke solution (leveraging investment in a non-BAU migration cost loader)
• Legare
• Excel based solution developed by an MSP
18. Heathrow – Cost Loading: Evaluation Criteria:
• Functionality
• Flexibility
• Support Capability
• Affordability
• Timeliness of Deployment
20. Heathrow – What next: Legare was so successful that:
• Bespoke development was halted
• Legacy data moved by Legare
21. What Everybody Asks: What does Legare do? Who is using it? How much does it cost? How much will it save me?
22. What Legare Does:
• Loads most types of data from any source into P6
• Quicker & more accurately than people or other systems
• Automates all Extract, Transform, Load and error checking
• Exports accurate, "cleansed" data for integrated planning, risks, cost reporting and dashboarding, e.g. Acumen, Ecosys, Hard Dollar, P6
• Write back to ERP, EAM and operational systems
• Cheaper and faster than people or other tools
23. What's inside Legare:
• Connectors: XER, MSP, Oracle, Excel, Access, bespoke
• Easy Mapping Tools: to map, extract, transform and load data
• Accuracy: security, data cleansing, error correction
• Additional Modules: Forecaster, Dictionary, Scheduler, Configuration database, Relationships
• Additional Applications: Maximo, Cost Loader, fiXER
• Standards and Compliance: uses Primavera API, compatible with all Primavera versions
• Non-intrusive to IT – data "firewall" – staging table approach to ERP integration
[Diagram: where the data comes from – ERP/EAM systems (Oracle, SAP, Maximo, Excel) and contractor systems – feeding Legare; what it's used for – integrated planning in P6 Scheduling, dashboards (KPIs, BI) via Acumen & Ecosys and Hard Dollar; writeback to ERP, EAM]
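The first connector listed above, Primavera's XER export, is a plain tab-delimited text format: `%T` lines name a table, `%F` lines give its column names, and `%R` lines carry one record each. A minimal reader sketch, which is an illustration of the file format rather than Legare's actual connector:

```python
# Read an XER-style export into {table_name: [row_dict, ...]}.
# %T starts a table, %F lists its columns, %R holds one tab-separated record;
# header (ERMHDR) and end-of-file (%E) lines are simply skipped here.

def read_xer(lines):
    tables, current, fields = {}, None, []
    for line in lines:
        parts = line.rstrip("\n").split("\t")
        tag = parts[0]
        if tag == "%T":                 # start of a new table
            current = parts[1]
            tables[current] = []
        elif tag == "%F":               # column names for the current table
            fields = parts[1:]
        elif tag == "%R":               # one data row
            tables[current].append(dict(zip(fields, parts[1:])))
    return tables

sample = [
    "ERMHDR\t8.2\t2013-11-13",
    "%T\tTASK",
    "%F\ttask_id\ttask_name",
    "%R\t1000\tMobilise crane",
    "%E",
]
tables = read_xer(sample)
# tables["TASK"] == [{"task_id": "1000", "task_name": "Mobilise crane"}]
```

A production connector would additionally validate cross-table references (e.g. every TASK's project ID existing in the PROJECT table) before anything is staged for load, which is where the "error correction" bullet above comes in.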
24. Typical uses of Legare:
• Chevron: importing XER and Excel files from contractors to build an Integrated Activity Plan. Saved one day of planner time per week, i.e. 20%.
• Maximo user: integrated planning from Maximo to P6. Saved one planner day/week.
• EPC Contractor: migrated data from a bespoke contractor WMS to Primavera – 50 projects, 50,000 activities, 500 activity code dictionaries, 12,000 resource IDs to re-map. Legare interface to Oracle Projects: WBS build reduced from 4 hours to 33 minutes.
• Large airport construction: hundreds of users bucket-loading cost profiles using financial periods in Primavera. Now looking at XER, risk loading for P6.8, Oracle EBS links.
• Downstream oilco: importing XER and Excel data from estimators, contractors and corporate SAP (10,000 actuals per week). Also removed unwanted data from XER, mapping codes to P6.
• Water utility: SAP and Business Objects user; reduced by 20% the time spent by 4 people manually cutting and pasting into P6. Legare (3 users) cost £20k including implementation. ROI in 2 months.
• Energy utility: uses Legare to extract and load calculated P6 data to Cobra, Cognos, etc.
25. Legare Cost and ROI:
• Typical time saving is one day per planner per week = 20%
• One Legare user licence costs £5,000; 5 cost £22,000
• One Cost Loader licence costs £1,500; 5 cost £6,000
• Annual support = 20% of list price
• Typical customer internal business cases: payback in less than 3 months
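The payback claim above can be checked with simple arithmetic. The licence and support prices come from the slide; the fully-loaded planner day rate is an assumed figure for illustration only:

```python
# Payback calculation for a 5-planner deployment using the slide's prices.
licences = 22_000.0            # 5 Legare user licences (slide price)
cost_loader = 6_000.0          # 5 Cost Loader licences (slide price)
annual_support = 0.20 * (licences + cost_loader)          # 20% of list
first_year_cost = licences + cost_loader + annual_support  # 33,600 GBP

planners = 5
day_rate = 600.0               # ASSUMED fully-loaded planner day rate (GBP)
saving_per_week = planners * 1 * day_rate   # one day saved per planner per week

payback_weeks = first_year_cost / saving_per_week
# payback_weeks == 11.2, i.e. roughly 2.6 months, consistent with the
# "less than 3 months" figure under this assumed day rate
```

A lower day rate stretches the payback period proportionally, so the sub-3-month claim depends on planner cost as much as on the tool price.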
26. Why Legare?
• Low Cost
• Low Risk
• Ease of use
• Supported
• Future proof
• Uses the Primavera APIs
27. Introduction to Collabro:
• We provide collaborative services for 70+ Oil and Gas companies, UK and worldwide
• Offshore personnel logistics (Vantage POB), environmental reporting & project management
• Legare® developed to meet a need for simple integration to Primavera. Vantage POB/Legare link planned.
• Worldwide partner network