Manage Your Data Using Oracle Big Data Appliance - Tips & Tricks
Ingest, process, and manage your data using Oracle Big Data Appliance, Oracle's end-to-end Big Data solution:
- Oracle BDA architecture and components overview - Oracle platform, Cloudera CDH, Cloudera Manager and specific Oracle components;
- Advantages and additional value of an Oracle BDA;
- Challenges faced across the whole stack (BDA, Cloudera);
- Challenges inherited from the original Hadoop ecosystem;
- Customer case (anonymized): how to utilize the power of an Oracle BDA, including the external Informatica Big Data Management tool.
Aleksejs Nemirovskis - Manage Your Data Using Oracle BDA
1. Copyright 2018 Accenture. All rights reserved.
Manage Your Data using
Oracle Big Data Appliance
Tips & Tricks
2. Copyright 2018 Accenture. All rights reserved.
About me
Aleksejs Nemirovskis
Technology Architect
Over 23 years of experience in the IT industry with a main focus on:
- DB architecture, design, analysis, development and performance tuning
- Data Integration / Data Warehousing / ETL / EL-T / BI / Data Analytics
- Oracle Technologies stack: DB, Data Integrations, Infrastructure, Cloud
- Big Data (Apache Hadoop, CDH, Cloudera Manager)
Experienced in all stages of the software development life cycle (SDLC), from business requirements and technical definitions to development, testing, and production support, as well as in Agile software development techniques (Scrum, XP)
3. Copyright 2018 Accenture. All rights reserved.
Agenda
➢ Oracle Big Data Appliance architecture and components overview
➢ Oracle BDA System Overview
➢ Oracle BDA Hardware Overview
➢ Oracle BDA Software Overview
➢ Cloudera Distributed Hadoop (CDH)
➢ Advantages and additional value of an Oracle BDA
➢ End-to-end system
➢ Oracle XQuery for Hadoop
➢ Oracle DataSource for Apache Hadoop (OD4H)
➢ Challenges
➢ Oracle Big Data Appliance
➢ Cloudera
➢ Hadoop Eco-System
➢ Architecture & Implementation
➢ Customer case
➢ Infrastructure Setup
➢ Infrastructure Setup - PROD
➢ Data Processing
➢ Q&A
4. Copyright 2018 Accenture. All rights reserved.
Manage Your Data using Oracle BDA
➢Oracle BDA architecture and components overview
➢Advantages and additional value of an Oracle BDA
➢Challenges
➢Customer case
➢Q&A
5. Copyright 2018 Accenture. All rights reserved.
Oracle Big Data Appliance
➢ Cloudera Manager
▪ Cloudera Navigator
➢ Cloudera Distributed Hadoop
➢ Oracle Big Data Connectors
➢ Oracle Linux
➢ Oracle Server (hardware)
System Overview
6. Copyright 2018 Accenture. All rights reserved.
Oracle Big Data Appliance
Hardware Overview
Starter Rack - 6 x Compute / Storage Nodes
X7-2 Per-Node Configuration
➢ 2 x 24-Core (2.1GHz) Intel® Xeon® 8160
➢ 8 x 32GB DDR4-2666 MHz Memory, expandable to 1.5TB
➢ 12 x 10 TB 7,200 RPM High Capacity SAS Drives
➢ 2 x 150GB M.2 SATA SSD Drives
➢ 2 x QDR 40Gb/sec InfiniBand Ports
➢ 1 x Dual-port InfiniBand QDR CX3 (40 Gb/sec) PCIe HCA
➢ 1 x Built-in RJ45 1 Gigabit Ethernet port
Full Rack - 18 x Compute / Storage Nodes
7. Copyright 2018 Accenture. All rights reserved.
Oracle Big Data Appliance
Software Overview
Integrated Software
➢ Cloudera Enterprise 5 – Data Hub Edition with support for:
▪ Cloudera’s Distribution including Apache Hadoop (CDH)
▪ Cloudera Impala
▪ Cloudera Search
▪ Apache HBase
▪ Apache Spark
▪ Apache Kafka
▪ Cloudera Manager with support for:
o Cloudera Navigator
o Cloudera Back-up and Disaster Recovery (BDR)
➢ Oracle Perfect Balance
➢ Oracle Table Access for Hadoop
Optional Software
➢ Oracle Big Data SQL
➢ Oracle Big Data Connectors:
▪ Oracle SQL Connector for Hadoop
▪ Oracle Loader for Hadoop
▪ Oracle XQuery for Hadoop
▪ Oracle R Advanced Analytics for Hadoop
▪ Oracle Data Integrator
➢ Oracle Audit Vault and Database Firewall for Hadoop Auditing
➢ Oracle Data Integrator
➢ Oracle GoldenGate
➢ Oracle NoSQL Database Enterprise Edition
➢ Oracle Big Data Spatial and Graph
➢ Oracle Big Data Discovery
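Several items on this list, together with Oracle DataSource for Apache Hadoop (OD4H) from the agenda, are about exposing Oracle Database tables to Hadoop-side engines. As a hedged illustration of that access pattern, the sketch below uses Spark's generic JDBC source rather than OD4H itself; the connect string, credentials, schema and table name are hypothetical, and the Oracle JDBC driver (ojdbc) is assumed to be on the Spark classpath.

    # Minimal sketch: read an Oracle table into Spark running on the BDA.
    # Generic Spark JDBC is used here as a stand-in for OD4H; all names
    # below (host, service, schema, table, user) are hypothetical.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("oracle-jdbc-sketch").getOrCreate()

    orders = (spark.read.format("jdbc")
        .option("url", "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1")
        .option("dbtable", "SALES.ORDERS")
        .option("user", "etl_user")
        .option("password", "***")
        .option("driver", "oracle.jdbc.OracleDriver")
        .load())

    # Once registered as a view, the Oracle data can be joined against
    # HDFS-resident data in the same Spark job.
    orders.createOrReplaceTempView("orders")
    spark.sql("SELECT COUNT(*) FROM orders").show()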
9. Copyright 2018 Accenture. All rights reserved.
Manage Your Data using Oracle BDA
➢Oracle BDA architecture and components overview
➢Advantages and additional value of an Oracle BDA
➢Challenges
➢Customer case
➢Q&A
10. Copyright 2018 Accenture. All rights reserved.
Oracle Big Data Appliance
End-to-End System
Maintained & Supported by Oracle
13. Copyright 2018 Accenture. All rights reserved.
Manage Your Data using Oracle BDA
➢Oracle BDA architecture and components overview
➢Advantages and additional value of an Oracle BDA
➢Challenges
➢Customer case
➢Q&A
14. Copyright 2018 Accenture. All rights reserved.
Challenges
Oracle Big Data Appliance
Cloudera release: gap up to 2 years
Hadoop HDFS: dedicated Name Node vs. Name/Data Node
Support: Oracle vs. Cloudera vs. Hadoop
15. Copyright 2018 Accenture. All rights reserved.
Challenges
Cloudera
Hive vs. Impala
Limitations
(e.g. DBs/Objects to replicate)
HDFS <> Sentry synchronization
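A concrete example of the Hive vs. Impala gap: data written through Hive or Spark is not visible to Impala until its metadata cache is refreshed. A minimal sketch using the impyla client; the host, port and table names are assumptions, not from the deck.

    # After a Hive or Spark job adds files or partitions, Impala must be
    # told to refresh its cached metadata before queries see the new data.
    from impala.dbapi import connect  # impyla package

    conn = connect(host="impala-daemon.example.com", port=21050)  # hypothetical host
    cur = conn.cursor()
    cur.execute("REFRESH analytics.web_events")             # existing table, new files
    cur.execute("INVALIDATE METADATA analytics.new_table")  # table created outside Impala
    cur.execute("SELECT COUNT(*) FROM analytics.web_events")
    print(cur.fetchall())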
16. Copyright 2018 Accenture. All rights reserved.
Challenges
Hadoop Eco-System
Parallel writing into Parquet files
Spark and Sentry – how to deal with security when using Spark
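On the parallel-Parquet point: a Parquet output directory is not safe for multiple concurrent writers, so a common workaround is to give each job or batch its own partition subdirectory. A hedged PySpark sketch, with the paths and partition layout as assumptions:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("parquet-write-sketch").getOrCreate()

    # Hypothetical input: one day's batch of JSON events.
    df = spark.read.json("hdfs:///data/incoming/2018-05-01")

    # Instead of many jobs writing into one shared directory, each batch
    # owns exactly one load_date=... subdirectory, so concurrent writers
    # never touch each other's files; Hive/Impala can later register the
    # subdirectory as a partition of the events table.
    (df.write
       .mode("overwrite")
       .parquet("hdfs:///data/curated/events/load_date=2018-05-01"))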
17. Copyright 2018 Accenture. All rights reserved.
Challenges
Architecture & Implementation
Partitioning – partitioning key challenge
Compression – always needed?
Fast processing > small files on HDFS (e.g. Resilient Distributed Datasets – Spark RDD)
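The three bullets above in code form: pick a partition key with bounded cardinality, set the compression codec explicitly instead of assuming compression is always the right trade-off, and coalesce before writing so fast, highly parallel jobs (such as Spark RDD/DataFrame writes that emit one file per task) do not flood HDFS with small files. A hedged sketch; the dataset, paths and column names are assumptions.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("layout-sketch").getOrCreate()

    events = spark.read.parquet("hdfs:///data/curated/events")  # hypothetical dataset

    # Partitioning: a date column keeps the number of directories bounded;
    # a key like customer_id would explode into millions of tiny partitions.
    # Compression: snappy decompresses cheaply; gzip is smaller but slower,
    # so the strongest codec is not automatically the best choice.
    # Small files: coalesce caps the number of output files per write.
    (events
       .coalesce(16)
       .write
       .mode("overwrite")
       .option("compression", "snappy")
       .partitionBy("event_date")
       .parquet("hdfs:///data/marts/events_by_date"))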
18. Copyright 2018 Accenture. All rights reserved.
Manage Your Data using Oracle BDA
➢Oracle BDA architecture and components overview
➢Advantages and additional value of an Oracle BDA
➢Challenges
➢Customer case
➢Q&A
19. Copyright 2018 Accenture. All rights reserved.
Infrastructure Setup
(Diagram: Oracle BDA nodes Node01–Node03 alongside Informatica BDM)
Customer Case
7 Clusters: BDA X6-2 Starter Rack (6 nodes)
➢ 264 CPU cores
➢ 1.5 TB RAM (+ 1.5 TB RAM per cluster is upcoming)
➢ 576 TB Storage
4 Domains: Informatica Big Data Management
➢ 10.2.1 in a full push-down mode
➢ 2 Domains: 3 nodes per domain
➢ 2 Domains: 3 + 3 (DR) nodes per domain
20. Copyright 2018 Accenture. All rights reserved.
Infrastructure Setup - PROD
(Diagram: main cluster and main nodes, Node01–Node03)
Customer Case
BDA X6-2 Starter Rack cluster (6 nodes + 10 nodes are upcoming)
➢ 264 CPU cores (+ 440 CPU cores are upcoming)
➢ 1.5 TB RAM (+ 6.3 TB RAM is upcoming)
➢ 576 TB Storage (+ 960 TB is upcoming)
Informatica Big Data Management
➢ 10.2.1 in a full push-down mode
21. Copyright 2018 Accenture. All rights reserved.
Customer Case
Data Processing
Sources → Ingestion/Processing → Stores → Usage
Cloudera Data Science Workbench
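The flow on this slide, sources into ingestion/processing into stores into usage, can be sketched with components named earlier in the deck (Kafka, Spark, Parquet on HDFS). This is a hedged sketch assuming Spark 2.x Structured Streaming; the broker, topic and paths are hypothetical, and the customer's actual pipelines run through Informatica BDM in push-down mode rather than hand-written Spark.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("ingest-sketch").getOrCreate()

    # Source: a Kafka topic feeding the cluster (broker and topic hypothetical).
    raw = (spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker1:9092")
        .option("subscribe", "sensor-events")
        .load())

    # Processing: Kafka delivers bytes; cast the payload for downstream parsing.
    events = raw.selectExpr("CAST(value AS STRING) AS json", "timestamp")

    # Store: land micro-batches as Parquet on HDFS, where Hive, Impala and
    # Cloudera Data Science Workbench can pick them up (the "Usage" stage).
    query = (events.writeStream
        .format("parquet")
        .option("path", "hdfs:///data/landing/sensor_events")
        .option("checkpointLocation", "hdfs:///checkpoints/sensor_events")
        .start())

    query.awaitTermination()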
22. Copyright 2018 Accenture. All rights reserved.
Manage Your Data using Oracle BDA
Q & A