Gürcan ORHAN presented steps for migrating from Oracle Warehouse Builder (OWB) to Oracle Data Integrator (ODI). He discussed preparing the environment, running the migration utility, and reviewing reports. Special cases during migration include mappings with multiple connections to the same operator or tables with multiple primary keys. If full migration is not possible, remaining OWB mappings can be called from ODI packages using the OdiStartOwbJob tool. The migration should be planned and executed incrementally in work packages.
Oracle Real Application Clusters 19c - Best Practices and Internals - EMEA Tour... (Sandesh Rao)
In this session, I will cover under-the-hood features that power Oracle Real Application Clusters (Oracle RAC) 19c, specifically around Cache Fusion and service management. Improvements in Oracle RAC help it integrate with features such as Multitenant and Data Guard; in fact, these features benefit immensely when used with Oracle RAC. Finally, we will talk about changes to the broader Oracle RAC family of products and the algorithmic changes that help quickly detect sick or dead nodes and instances, as well as the reconfiguration improvements that ensure Oracle RAC databases continue to function without any disruption.
In this tutorial, we cover the different deployment possibilities of the MySQL architecture depending on the business requirements for the data. We also deploy some of these architectures and see how to evolve from one to the next.
The tutorial covers the new MySQL Solutions like InnoDB ReplicaSet, InnoDB Cluster, and InnoDB ClusterSet.
Building Streaming Data Pipelines with Google Cloud Dataflow and Confluent Cl... (HostedbyConfluent)
We will demonstrate how easy it is to use Confluent Cloud as the data source of your Beam pipelines. You will learn how to process the information that comes from Confluent Cloud in real time, make transformations on such information and feed it back to your Kafka topics and other parts of your architecture.
Grafana Loki: like Prometheus, but for Logs (Marco Pracucci)
Loki is a horizontally-scalable, highly-available log aggregation system inspired by Prometheus. It is designed to be very cost-effective and easy to operate, as it does not index the contents of the logs, but rather labels for each log stream.
In this talk, we will introduce Loki, its architecture, and its design trade-offs in an approachable way. We'll cover both Loki and Promtail, the agent used to scrape local logs and push them to Loki, including the Prometheus-style service discovery used to dynamically discover logs and attach metadata from applications running in a Kubernetes cluster.
Finally, we’ll show how to query logs with Grafana using LogQL - the Loki query language - and the latest Grafana features to easily build dashboards mixing metrics and logs.
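The design trade-off described above, indexing only labels while leaving log contents unindexed, can be sketched with a toy Python model. This is an illustration of the idea, not Loki's actual implementation; the label and line values are made up:

```python
# Toy model of Loki's core idea: index only the label set of each stream,
# never the log contents. Queries first select streams by label, then
# scan the (unindexed) lines, like LogQL's `|=` line filter.
from collections import defaultdict

class ToyLoki:
    def __init__(self):
        # the "index": frozenset of labels -> list of raw log lines
        self.streams = defaultdict(list)

    def push(self, labels: dict, line: str):
        self.streams[frozenset(labels.items())].append(line)

    def query(self, selector: dict, contains: str = ""):
        # the label selector narrows streams cheaply via the index;
        # content matching is a sequential scan over the selected streams
        want = set(selector.items())
        for labels, lines in self.streams.items():
            if want <= labels:
                for line in lines:
                    if contains in line:
                        yield line

loki = ToyLoki()
loki.push({"app": "api", "env": "prod"}, "GET /users 200")
loki.push({"app": "api", "env": "prod"}, "GET /users 500")
loki.push({"app": "web", "env": "prod"}, "render ok")
print(list(loki.query({"app": "api"}, contains="500")))  # ['GET /users 500']
```

Because only the small label index is maintained, ingestion stays cheap no matter how verbose the log lines are, which is the cost argument the talk makes.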
MySQL 8.0 is the latest generally available version of MySQL. This session will help you upgrade from older versions, understand which utilities are available to make the process smoother, and learn what to bear in mind with the new version, including possible behavior changes and their solutions.
Automating Your Clone in E-Business Suite R12.2 (Michael Brown)
It is possible to automate the cloning process in Oracle E-Business Suite 12.2. This presentation discusses how to accomplish that and gives some warnings about when it is not possible to run a clone.
For OAUG members, the slides and a recording of the presentation are available on www.oaug.org.
This presentation is based on Lawrence To's Maximum Availability Architecture (MAA) Oracle Open World presentation covering the latest updates on high availability (HA) best practices across multiple architectures, features and products in Oracle Database 19c. It considers all workloads (OLTP, DWH and analytics, and mixed workloads) as well as on-premises and cloud-based deployments.
Developer’s guide to contributing code to Kafka with Mickael Maison and Tom B... (HostedbyConfluent)
Contributing code to an open source project can sometimes feel really difficult. The process is often different for each project and requires you to develop, build and test your change and then it needs to be accepted by the project. For Kafka, certain types of changes also require you to go through the Kafka Improvement Proposal (KIP) process.
In this talk, we will cover in detail the process of contributing code to Apache Kafka, from setting up a development environment to building the code, running tests and opening a PR. We will also look at the KIP process, describe what each section of the document is for, the importance of finding consensus, and what happens when a KIP gets voted on. We will share, from a committer's point of view, what we look for when reviewing a KIP and give some tips to help you get through the process successfully.
At the end of this talk, you will be able to get started contributing code to Kafka and understand how to get from idea, to KIP, to released feature.
Automate Oracle database patches and upgrades using Fleet Provisioning and Pa... (Nelson Calero)
Each new version of the Oracle database includes improvements in the upgrade and patching utilities, forcing us to update our procedures to incorporate these changes.
The Fleet Provisioning & Patching (FPP, formerly RHP) utility, together with the change in its licensing announced at OOW 2019 that makes it free in RAC, now makes it possible to centrally manage the software life cycle.
This presentation shows examples of how to use FPP and different configuration options.
This is a recording of my Advanced Oracle Troubleshooting seminar preparation session - where I showed how I set up my command line environment and some of the main performance scripts I use!
Building a Data Pipeline using Apache Airflow (on AWS / GCP) (Yohei Onishi)
This is the slide deck I presented at PyCon SG 2019. I gave an overview of Airflow and showed how we can use Airflow and other data engineering services on AWS and GCP to build data pipelines.
Slides for the Data Syndrome one-hour course on PySpark. It introduces basic operations, Spark SQL, Spark MLlib and exploratory data analysis with PySpark, and shows how to use pylab with Spark to create histograms.
"Maximum Availability Architecture (MAA) for Oracle Database, Exadata and the Cloud" was first presented during Oracle Open World (OOW) 2019. This version of the deck has been updated for OOW London 2020 including the latest information regarding patching and upgrading the Oracle Database with Zero Downtime.
High Availability & Disaster Recovery on Oracle Cloud Infrastructure (SinanPetrusToma)
Critical applications are required to run 24/7 and to tolerate hardware and software failures, and even complete data center outages. Oracle Cloud Infrastructure, with its regions, Availability Domains (AD), and Fault Domains (FD), provides the building blocks needed to design and run high availability and disaster recovery architectures for your applications and databases.
DB12c: All You Need to Know About the Resource Manager (Andrejs Vorobjovs)
Resource Manager has changed a lot in Oracle Database 12c, especially if Oracle Multitenant is used. It can manage the available resources between the consumer groups in a single PDB as well as among all the PDBs. DBAs who are planning the upgrades or consolidations to Oracle Database 12c need to understand how the new resource manager works and how the existing resource management plans need to be changed to make them work in the new Oracle Multitenant configuration.
This paper will explain the differences between 11g and 12c resource manager, will dig into resource management features and limitations in 12c Oracle Multitenant, will provide guidelines for migrating your current resource management plan to 12c at the time of upgrade or consolidation, and will also reveal how much overhead the resource manager introduces.
In the session, we discussed the end-to-end working of Apache Airflow, focusing on the "why, what, and how". The session covers DAG creation and implementation, the architecture, and the pros and cons. It also shows how a DAG is created to schedule a job and the steps required to build one with a Python script, and finishes with a working demo.
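The dependency ordering that such a DAG expresses can be sketched with Python's standard library. This uses `graphlib`, not Airflow itself, and the task names are hypothetical:

```python
# Minimal sketch of DAG-based scheduling using only the standard library.
# Airflow's real scheduler is far richer; this only shows the dependency
# ordering a DAG encodes: each task runs after all its upstream tasks.
from graphlib import TopologicalSorter

# task -> set of upstream tasks it depends on (hypothetical ETL chain)
dag = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "report": {"load"},
}

order = list(TopologicalSorter(dag).static_order())
print(order)  # ['extract', 'transform', 'load', 'report']
```

`TopologicalSorter` also raises `CycleError` on circular dependencies, the same validation an orchestrator performs before scheduling a run.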
Oracle Warehouse Builder to Oracle Data Integrator 12c Migration Utility (Noel Sidebotham)
As Oracle Warehouse Builder nears the end of extended support, customers need to consider their migration options.
In this WebEx we'll discuss this topic and aim to answer questions such as: Which tool should I use for new projects? What should be done with existing implementations? And why should I migrate to ODI?
In this session you will learn about:
• Oracle Data Integrator 12c concepts and features
• The OWB2ODI migration utility
• How to successfully migrate OWB projects to ODI
• Customer success stories
• New features of ODI 12c that are getting ETL developers excited, including Big Data and Hybrid Cloud support
The Time is Now! Migrating from OWB to ODI 12c (Stewart Bryson)
Prior to the introduction of Data Integrator (ODI), Oracle had another data integration tool: Warehouse Builder (OWB). Usually positioned as an ETL tool, OWB excelled in environments with a strong footprint in the Oracle Database. Oracle's statement of direction has been clear: to deliver a unified data integration platform, combining the best from both tools into a true world class product. With ODI 12c, that day has arrived.
In this presentation, I’ll demonstrate the features available for migrating from OWB to ODI 12c. I’ll also describe a phased approach for doing a “right-time” conversion to ODI 12c, which involves migrating bite-sized chunks of OWB processes over to ODI when that migration adds legitimate value for the customer.
Oracle Data Integrator (ODI) can be slow out of the box, since the default installation has to accommodate many different database and operating-system versions, and it is generally not the optimal choice. ODI is a flexible product that can be customized for specific requirements and to exploit new features of the database or operating system. Attendees will learn how to easily create a customized ODI environment.
This presentation will demonstrate the flexibility of the Knowledge Module, configuration best practices, and tips and techniques for the best query response times under complex business requirements. It will include information on how to load a large number of files quickly with a special algorithm, how to define new customized data types and analytical and database functions, how to archive ODI logs in a timely fashion, and how to use Oracle hints in both variable and static ways according to business and IT needs.
Services are one of the most underutilized features of the Oracle Database. This presentation shows some use cases that may make you change your mind and motivate you to implement services in one way or another.
What an Oracle Warehouse Builder to Oracle Data Integrator migration service consists of and how it is carried out. Hear about all the benefits it generates for companies.
Database & Technology has developed a solution known as the OWB2ODI Converter to reduce the times and costs for migrating from Oracle Warehouse Builder to Oracle Data Integrator to a minimum.
The OWB2ODI Converter is a semi-automatic tool for converting OWB projects into ODI projects. The tool was specifically designed to recreate the logic implemented in the OWB project in the new logic of the ODI project, while keeping the semantics and functions implemented in the initial project unchanged.
Tests carried out on company projects showed that the time required for migration was drastically reduced by using the OWB2ODI Converter.
After a careful analysis of the results, it became clear that the benefits gained from using the OWB2ODI Converter tool, rather than manual conversion, increase in proportion to the size and complexity of the project.
Hitchhiker's Guide to free Oracle tuning tools (Bjoern Rost)
Instance and SQL tuning with EM12c Cloud Control is so easy that it is not even much fun anymore. But not every customer has the appropriate license or database edition, and sometimes all you have available remotely is a command-line login to a database. This presentation showcases a few open-source database tuning tools, such as Snapper and ASH replacements, that DBAs can use to gather and review metrics and wait events from the command line, even on Standard Edition.
How to Handle DEV&TEST&PROD for Oracle Data Integrator (Gurcan Orhan)
Most of us have separate development, test, and operations teams, each using a different repository environment, typically three separate ODI installations and repositories. Chaos usually ensues over who will test which development and what to deploy into production.
In this session, hear how ODI can handle your development hierarchy with ease of use, in a simplified and synchronized way, for successful deployments.
A simple project will be built up and then enlarged to enterprise level step by step.
Oracle SQL tuning with SQL Plan Management (Bjoern Rost)
Regressions in SQL plans are a frequent cause of performance-related incidents: the cost-based optimizer comes up with a new plan due to changes in data distribution, statistics, or binds. While most organizations have very strict processes for changes to applications or infrastructure, the CBO is most often left alone, accepting that SQL execution performance could change at any time. But with SQL Plan Management it does not take much effort to implement a process that makes changes to SQL plans manageable. It starts with monitoring regressions in execution times, capturing baselines, automatically pre-evaluating potentially better plans, and documenting the information needed to accept a change. We will cover not only how SPM works but also how you can start using it in your organization today.
UKOUG Tech 15 - Migration from Oracle Warehouse Builder to Oracle Data Integr... (Jérôme Françoisse)
Oracle Data Integrator is the strategic data integration tool replacing Oracle Warehouse Builder, offering more flexibility and supporting more technologies. Based on the story of Eurocontrol, the European Organisation for the Safety of Air Navigation, we will review a migration from Oracle Warehouse Builder to Oracle Data Integrator using the migration utility and custom scripting. After looking at the roadmap and the architecture used, we will see how to automatically migrate the supported components, how to handle the remaining ones, and what needs to be fine-tuned. Finally, we will talk about the testing, the challenges, the risks and the lessons learnt, so you will be ready to successfully achieve such a migration for your company.
How to solve complex business requirements with Oracle Data Integrator? (Gurcan Orhan)
Business requirements are hard to implement, develop, and operate, and they change constantly. In this session attendees will see real examples of turning unstructured data into structured meaning, writing complex queries without typing anything, adding function-based joins, implementing CTAS (Create Table As Select) and IAS (Insert As Select) methods, simplifying business rules, and writing optimized queries to decrease operational and development costs and speed up loads.
In this presentation, see how you can solve some of these complex business requirements with Oracle Data Integrator's flexibility and ease of use.
MV2ADB - Move to Oracle Autonomous Database in One-click (Ruggero Citton)
Move to Autonomous Database (MV2ADB) is a new tool that loads and migrates data from on-premises databases to Autonomous Database Cloud, leveraging Oracle Data Pump, within one command. With "mv2adb" you can save your data to your Cloud Object Store and load it into Autonomous Database Cloud.
DMU is the new tool introduced by Oracle for converting a database to the Unicode character set. Besides briefly introducing the tool, this session will focus on a real database conversion scenario faced by a customer, the problems encountered, and the solutions.
Get the most out of Oracle Data Guard - POUG version (Ludovico Caldara)
If you use the Oracle Data Guard feature just for data protection, you are using less than half of its potential. You already pay for it, so why not get the most out of it? In this session I will show how you can use Oracle Data Guard capabilities for common tasks such as database cloning, database migration and reporting, with the help of other features included in Oracle Database Enterprise Edition.
The AMIS Report from Oracle Open World and JavaOne 2011 - Part One (Lucas Jellema)
The first part of the report from the AMIS team on their findings of Oracle Open World 2011 and JavaOne 2011. With the major announcements, the roadmaps, highlights and disappointments, some gold nuggets and personal bests and a general impression of where Oracle, the industry trends and the technology are going.
Technical presentation
- The best way to collect, document and audit, viewing graphically or with multiple output formats, the run books of your existing z/OS IT enterprise from a user-oriented request system.
- Automate the documentation of your z/OS or multi-platform IT system (UNIX & Windows).
- Our client/server solutions are portable and run under z/OS, Windows and UNIX.
- Partnerships with BMC, CA Technologies, HP, and IBM.
- Build your production repository using a powerful reverse-documentation process.
- A copy utility allows easy z/OS PDS transfers before collection and audit under Windows.
- Automated consolidation keeps the repository current.
Specialties
Reverse documentation (z/OS), JCL, script & materials generation, production repository, iCAN product line
Empowering the Data Analytics Ecosystem: A Laser Focus on Value
The data analytics ecosystem thrives when every component functions at its peak, unlocking the true potential of data. Here's a laser focus on key areas for an empowered ecosystem:
1. Democratize Access, Not Data:
Granular Access Controls: Provide users with self-service tools tailored to their specific needs, preventing data overload and misuse.
Data Catalogs: Implement robust data catalogs for easy discovery and understanding of available data sources.
2. Foster Collaboration with Clear Roles:
Data Mesh Architecture: Break down data silos by creating a distributed data ownership model with clear ownership and responsibilities.
Collaborative Workspaces: Utilize interactive platforms where data scientists, analysts, and domain experts can work seamlessly together.
3. Leverage Advanced Analytics Strategically:
AI-powered Automation: Automate repetitive tasks like data cleaning and feature engineering, freeing up data talent for higher-level analysis.
Right-Tool Selection: Strategically choose the most effective advanced analytics techniques (e.g., AI, ML) based on specific business problems.
4. Prioritize Data Quality with Automation:
Automated Data Validation: Implement automated data quality checks to identify and rectify errors at the source, minimizing downstream issues.
Data Lineage Tracking: Track the flow of data throughout the ecosystem, ensuring transparency and facilitating root cause analysis for errors.
5. Cultivate a Data-Driven Mindset:
Metrics-Driven Performance Management: Align KPIs and performance metrics with data-driven insights to ensure actionable decision making.
Data Storytelling Workshops: Equip stakeholders with the skills to translate complex data findings into compelling narratives that drive action.
Benefits of a Precise Ecosystem:
Sharpened Focus: Precise access and clear roles ensure everyone works with the most relevant data, maximizing efficiency.
Actionable Insights: Strategic analytics and automated quality checks lead to more reliable and actionable data insights.
Continuous Improvement: Data-driven performance management fosters a culture of learning and continuous improvement.
Sustainable Growth: Empowered by data, organizations can make informed decisions to drive sustainable growth and innovation.
By focusing on these precise actions, organizations can create an empowered data analytics ecosystem that delivers real value by driving data-driven decisions and maximizing the return on their data investment.
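The automated data validation point above can be made concrete with a minimal rule-based check. This is a sketch only; the field names and rules are hypothetical examples, not a specific product's API:

```python
# Minimal sketch of automated data validation: run declarative rules over
# records and collect violations at the source, before data flows downstream.
# Field names and rules are hypothetical.

RULES = {
    "user_id": lambda v: isinstance(v, int) and v > 0,
    "email": lambda v: isinstance(v, str) and "@" in v,
    "age": lambda v: isinstance(v, int) and 0 <= v <= 130,
}

def validate(record: dict) -> list:
    """Return (field, value) pairs that violate a rule."""
    return [(field, record.get(field))
            for field, rule in RULES.items()
            if not rule(record.get(field))]

records = [
    {"user_id": 1, "email": "a@example.com", "age": 42},
    {"user_id": -5, "email": "not-an-email", "age": 42},
]
for r in records:
    print(validate(r))
# first record passes (empty list); second fails user_id and email
```

Running such checks at ingestion time is what lets errors be rectified at the source rather than discovered downstream.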
Data Centers - Striving Within A Narrow Range - Research Report - MCG - May 2... (pchutichetpong)
M Capital Group (“MCG”) expects demand to keep growing and the supply landscape to evolve, aided by institutional investment rotating out of offices and into work-from-home (“WFH”) plays, while the need for data storage keeps expanding as global internet usage grows, with experts predicting 5.3 billion users by 2023. These market factors will be underpinned by technological changes, such as progressing cloud services and edge sites, allowing the industry to see strong expected annual growth of 13% over the next 4 years.
Whilst competitive headwinds remain, represented through the recent second bankruptcy filing of Sungard, which blames “COVID-19 and other macroeconomic trends including delayed customer spending decisions, insourcing and reductions in IT spending, energy inflation and reduction in demand for certain services”, the industry has seen key adjustments, where MCG believes that engineering cost management and technological innovation will be paramount to success.
MCG reports that the more favorable market conditions expected over the next few years, helped by the winding down of pandemic restrictions and a hybrid working environment will be driving market momentum forward. The continuous injection of capital by alternative investment firms, as well as the growing infrastructural investment from cloud service providers and social media companies, whose revenues are expected to grow over 3.6x larger by value in 2026, will likely help propel center provision and innovation. These factors paint a promising picture for the industry players that offset rising input costs and adapt to new technologies.
According to M Capital Group: “Specifically, the long-term cost-saving opportunities available from the rise of remote managing will likely aid value growth for the industry. Through margin optimization and further availability of capital for reinvestment, strong players will maintain their competitive foothold, while weaker players exit the market to balance supply and demand.”
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Data and AI
Round table discussion of vector databases, unstructured data, ai, big data, real-time, robots and Milvus.
A lively discussion with NJ Gen AI Meetup Lead, Prasad and Procure.FYI's Co-Found
Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23...John Andrews
SlideShare Description for "Chatty Kathy - UNC Bootcamp Final Project Presentation"
Title: Chatty Kathy: Enhancing Physical Activity Among Older Adults
Description:
Discover how Chatty Kathy, an innovative project developed at the UNC Bootcamp, aims to tackle the challenge of low physical activity among older adults. Our AI-driven solution uses peer interaction to boost and sustain exercise levels, significantly improving health outcomes. This presentation covers our problem statement, the rationale behind Chatty Kathy, synthetic data and persona creation, model performance metrics, a visual demonstration of the project, and potential future developments. Join us for an insightful Q&A session to explore the potential of this groundbreaking project.
Project Team: Jay Requarth, Jana Avery, John Andrews, Dr. Dick Davis II, Nee Buntoum, Nam Yeongjin & Mat Nicholas
2. Who Am I?
20+ years of IT experience.
14+ years of DWH experience.
10+ years of Oracle Data Integrator experience.
8+ years of Oracle Warehouse Builder experience.
Sybase PowerDesigner, ERwin Data Modeler, SDDM
OBIEE, Cognos, MicroStrategy, Business Objects, QlikView, Tableau
IBM DataStage, SAP Data Services, Informatica, etc.
Oracle Excellence Awards - Technologist of the Year 2011: Enterprise Architect
DWH & BI Chair: TROUG (Turkish Oracle User Group)
Published Customer Snapshot for NODI @Oracle.com
Published videos about ODI @Oracle.com (Oracle Media Network)
Published OTN podcasts about "Data Warehousing and ODI" and "ODI and the Evolution of Data Integration"
3 different "2MTT"s
Articles in OTech Magazine and SearchSoftwareQuality.com
Annual panelist for the ODTUG "Ask the Experts Panel: ODI"
Presenter at OOW since 2010 (7 times in a row ⭐)
Presenter at many OUG conferences around the globe
Presenter at various universities in Turkey
16 JUNE 2017 / #OGHTECH17
3. Ekol Germany
[Timeline figure: "Ekol Milestones", 1990-2016 - established in 1990; warehousing solutions begin with the Kardelen Facility; acquisitions of STS Int. Transport and Unok/Unatsan; Rainbow software replaced by Quadro; start of intermodal and Ro-Ro operations; country operations opened in Bosnia, France, Greece, Ukraine, Spain, Bulgaria, Czech Rep., Iran, Poland, Italy, Romania, and Hungary]
9. Some Facts
ODI is the strategic product for heterogeneous data integration, as declared in the "Statement of Direction".
No major releases of Oracle Warehouse Builder (latest release is 11gR2 - 11.2.0.4).
OWB will not be shipped with Database 12c.
No OWB documentation is included with Database 12c.
OWB support continues 😊
OWB is still supported with Database 12c 😊
OWB 11.2.0.3 + CP2 is certified with Database 12c 😊
OWB is not supported in Cloud environments.
11. REQUIREMENTS (safe harbour)
Oracle Warehouse Builder
- version 11.2.0.4 (plus patches #18537208 and #21687102), or
- version 11.2.0.3 (plus CP3 #16568042)
Oracle Data Integrator
- version 12.1.3.x.x or 12.2.1.x.x
2 Oracle Database instances (recommendation: 11gR2 - 11.2.0.4)
A Linux-based server
12. Build Up Laboratory Environment (safe harbour)
[Diagram: SOURCE (OWB) → TARGET (ODI)]
Run your OWB jobs initially, before starting migration.
The Migration Utility is a command-line tool that runs in the OWB installation directory and migrates design-time metadata.
Use a Linux 64-bit or Windows 64-bit standalone ODI agent (with patch 17224695 for the Migration Utility).
Upgrade your OWB and ODI installations to the required versions.
13. When your manager asks you…
19. NOT Supported OWB Objects (Limitations)
- tables (partitions, attribute sets, data rules)
- dimensional modeling metadata
- custom PL/SQL (procedures, packages, and so on)
- user-defined types
- streams
- CDC configurations
- process flows
- mappings using dimensions and cubes
- Name and Address
- Match-Merge
- data rules
- data auditors
- Expand
- configuration details (security, user extensions, transportable modules, schedules/collections, user folders)
- OMB*Plus scripts
- data profiles
- materialized views (partitions, attribute sets, data rules)
27. Special Cases During Migration
Two operators connected to the same operator
28. Special Cases During Migration
Tables with multiple primary keys
If a target table has multiple primary keys, then, since only one primary key is allowed in ODI, the redundant primary keys will be migrated as alternate keys.
29. Special Cases During Migration
Multiple operators connected from and to the same operator
30. Special Cases During Migration
Lookup operator has a constant as input
31. Special Cases During Migration
Lookup operators have no driver table (mapping is invalid)
32. Special Cases During Migration
Multiple operators connected to the same operator, some with no upstream source
33. Special Cases During Migration
Multiple operators connected to the same operator, all with different upstream operators
35. Planning to Decide HOW
Manual jobs after a "successful" migration:
- User folders
- Packages
- Scheduling
- User-defined data types
- Model folders
36. Recommendations
- Migrate everything and see what migrates and what does not.
- Run everything without source data.
- Roll back everything (a fresh new ODI 12c).
- Create folders and model folders in ODI 12c.
- Start migration per folder.
- Check migration logs for errors.
- Correct issues for mappings with errors.
- Re-run ODI 12c with source data.
42. 1. Call OWB Mappings from ODI Packages
Create an ODI package with many 'OdiStartOwbJob' tool steps.
Run everything in ODI, but call the OWB mappings in the right order and sequence (synchronous / asynchronous).
43. Divide into Work Packages (Agile Methodology)
- Divide OWB mappings into work packages.
- Perform migration for WP #n.
- Correct errors; rewrite mappings that were not migrated.
- Change OdiStartOwbJob steps to ODI objects.
44. Step by Step Migration Facts
PASSED : Include only objects that succeeded.
FAILED : Include only objects that failed.
ALL : Include all objects.
Mode :
FAST_CHECK : The migration utility performs a quick check of the selected objects and provides a report listing the objects that can and cannot be migrated to the target ODI repository. Use this mode to quickly determine which objects can and cannot be migrated.
DRY_RUN : The migration utility checks whether the specified objects can be created in the target ODI repository, and executes the migration without committing the objects to the repository. This mode provides more information than FAST_CHECK. Use it to determine more completely which objects can and cannot be migrated.
RUN (default) : The migration utility executes the migration and commits the migrated objects to the target ODI repository. Use this mode to perform the actual migration from OWB to ODI.
SPLIT JOIN : Indicates whether to split the join operator into binary joins when the Use ANSI Syntax property of the OWB mapping is set to TRUE.
UNBOUND OPERATOR : When set to TRUE, mappings that contain unbound operators are migrated. For unbound entity operators (external table, table, view, materialized view, and lookup), an ODI datastore corresponding to the unbound operator is created in the ODI model. For an unbound pluggable mapping operator, an ODI reusable mapping is created in an ODI folder named STAND_ALONE.
MIGRATION OBJECTS : Project, folder, or a single object name. Use a semicolon to concatenate multiple names. Use an asterisk (*) to match all objects starting/ending with a name.
LOG : The migration utility log file contains details about the objects that were migrated, plus error messages if any errors occurred.
REPORT : The migration utility exclusion report contains a summary of the objects migrated, and lists whether migration succeeded or failed for each object.
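These modes and options are set in the migration utility's configuration file. The sketch below is illustrative only: the property keys are abbreviated to the option names discussed on this slide, and all paths and object names are invented; consult the migration utility guide for the exact keys and the repository connection settings.

```
# Illustrative migration utility configuration sketch (not a verbatim file)
MIGRATION_MODE=DRY_RUN          # FAST_CHECK | DRY_RUN | RUN (default)

# Project, folder, or object names; semicolon-separated, * as wildcard
MIGRATION_OBJECTS=MY_PROJECT.DWH_FOLDER.*;MY_PROJECT.STG_FOLDER.MAP_LOAD_EMP

SPLIT_JOIN=TRUE                 # split joins into binary joins for ANSI-syntax mappings
UNBOUND_OPERATOR=TRUE           # also migrate mappings containing unbound operators

MIGRATION_LOG_FILE=/tmp/owb2odi_migration.log
MIGRATION_REPORT_FILE=/tmp/owb2odi_exclusion_report.log
```

Running a FAST_CHECK or DRY_RUN pass first, then reviewing the log and exclusion report before switching to RUN, matches the incremental approach recommended later in the deck.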
The following figure shows an OWB mapping for which operators EMP and EXPRESSION are both connected to operator TGT_EMP through the same map attribute group INOUTGRP1. This is not allowed in ODI, because each input connector point in ODI can only be connected once.
OWB tables are migrated to ODI data stores. In OWB, tables can have multiple primary keys. In ODI, data stores can have only one primary key. In the case of multiple primary keys, the first primary key is migrated as the primary key in ODI, and the others are migrated as alternate keys.
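The first-key-wins rule above can be sketched in a few lines of illustrative Python (this merely models the documented behavior; it is not part of any Oracle tooling, and the key names are made up):

```python
def migrate_keys(primary_keys):
    """Model the OWB-to-ODI rule: the first primary key stays the primary key;
    any additional primary keys become alternate keys on the ODI datastore."""
    migrated = []
    for i, key in enumerate(primary_keys):
        kind = "PRIMARY_KEY" if i == 0 else "ALTERNATE_KEY"
        migrated.append((key, kind))
    return migrated

# A table defined with three primary keys in OWB:
print(migrate_keys(["PK_EMP", "PK_EMP_2", "PK_EMP_3"]))
# -> [('PK_EMP', 'PRIMARY_KEY'), ('PK_EMP_2', 'ALTERNATE_KEY'), ('PK_EMP_3', 'ALTERNATE_KEY')]
```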
When this situation occurs, the following warning message is written to the migration utility log file.
The following figure shows an OWB mapping for which operators FILTER and EXPRESSION are both connected to operator TGT_EMP through the same map attribute group INOUTGRP1. This is not allowed in ODI.
During migration, FILTER and EXPRESSION operators are chained together to ensure that only one is connected to TGT_EMP. As a result, the ODI mapping may be EMP > FILTER > EXPRESSION > TGT_EMP or EMP > EXPRESSION > FILTER > TGT_EMP.
The following figure shows an OWB mapping for which the Lookup operator has no upstream source operator, and is only connected from a constant.
The OWB mapping in the preceding figure is migrated to the ODI mapping in the following figure (DEP is the lookup table of the Lookup operator).
The constant operator CONSTANT in the OWB mapping is not migrated to any map component in ODI. Instead, the expression of the constant attribute is migrated, and that expression is set on the Lookup component.
For example, in OWB, if the expression of the attribute CONSTANT.OUTGRP1.NO is set to 5, and the lookup condition of LOOKUP_DEPT is OUTGRP1.DEPTNO = INGRP1.NO, then after migration the lookup condition of LOOKUP_DEPT in ODI is DEP.DEPTNO = 5.
The following figure shows an OWB mapping for which several Lookup operators are connected to operator TGT_EMP, but some of the Lookup operators have no upstream operators as driver tables. This mapping is invalid, but will also be migrated. Only one map component can be connected to TGT_EMP in ODI. As a result, Lookup operators without driver tables will lose the connection to operator TGT_EMP.
Note that expressions for the target attributes are migrated, even though these two lookup components are not connected.
The following figure shows an OWB mapping for which two operators are connected to the same operator TGT_EMP. The EXPRESSION operator has an upstream source operator, while the JOINER operator does not. Only one map component can be connected to TGT_EMP in ODI. As a result, the operator with no upstream source operator will lose the connection to TGT_EMP.
The OWB mapping in the preceding figure is migrated to the ODI mapping in the following figure.
The following figure shows an OWB mapping for which two operators are connected to the same operator TGT_EMP. Both operators have an upstream operator. Only one map component can be connected to TGT_EMP in ODI. As a result, one operator will lose the connection to TGT_EMP.
The OWB mapping in the preceding figure is migrated to one of the ODI mappings in the following figures.
WORKSPACE : Logical schema of the OWB Runtime Repository technology. This resolves to a physical schema that represents the OWB workspace that contains the OWB object to be executed. OWB workspace was chosen when you added a Physical Schema under the OWB Runtime Repository DataServer in Topology Navigator.
LOCATION : Name of the OWB location that contains the OWB object to be executed. This location must exist in the physical workspace that resolves from -WORKSPACE.
OBJECT_NAME : Name of the OWB object. This object must exist in -LOCATION.
OBJECT_TYPE : Type of OWB object : PLSQLMAP, PROCESSFLOW, SQLLOADERCONTROLFILE, MAPPING, DATAAUDITOR, ABAPFILE.
EXEC_PARAMS : Custom and/or system parameters for the OWB execution.
CONTEXT : Execution context of the OWB object. This is the context in which the logical workspace will be resolved. Studio editors use this value or the Default Context. Execution uses this value or the Parent Session context.
LOG_LEVEL : Log level (0-5). Default is 5, which means that maximum details are captured in the log.
SYNC_MODE : Synchronization mode of the OWB job. 1: Synchronous (default), 2: Asynchronous.
POLLINT : The period of time in milliseconds to wait between each transfer of OWB audit data to ODI log tables. The default value is 0, which means that audit data is transferred at the end of the execution.
SESSION_NAME : Name of the OWB session as it appears in the log.
KEYWORDS : Comma-separated list of keywords attached to the session.
OWB PARAMS : List of values for the OWB parameters relevant to the object. This list is of the form -PARAM_NAME=value. OWB system parameters should be prefixed by OWB_SYSTEM, for example, OWB_SYSTEM.AUDIT_LEVEL.
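Combining the parameters above, a single OdiStartOwbJob step in an ODI package might look like the following sketch (the workspace, location, mapping, and session names are invented for illustration; only the parameter names come from this list):

```
OdiStartOwbJob -WORKSPACE=OWB_WS -LOCATION=DWH_LOC
    -OBJECT_NAME=MAP_LOAD_EMP -OBJECT_TYPE=PLSQLMAP
    -SYNC_MODE=1 -LOG_LEVEL=5
    -SESSION_NAME=LOAD_EMP_FROM_OWB
    -OWB_SYSTEM.AUDIT_LEVEL=ERROR_DETAILS
```

With -SYNC_MODE=1 the package waits for the OWB job to finish before moving to the next step, which is what allows the remaining OWB mappings to be sequenced correctly from ODI during an incremental migration.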