This document discusses enterprise artificial intelligence (AI) and Oracle's cloud AI platform. It begins by providing background on the AI revolution and increasing data generation. It then discusses Oracle's cloud AI platform and services for enterprise AI, including a data lake, data integration, analysis, and machine learning/deep learning tools. As an example, it outlines using the platform for product association analysis based on transaction log data from retail stores. The document emphasizes that Oracle's cloud AI platform provides tools and services suited for different types of data and analysis.
General Capabilities of GraalVM by Oleg Selajev @shelajev - Oracle Developers
Abstract: "General Capabilities of GraalVM"
The GraalVM project enhances the Java ecosystem with an integrated, polyglot, high-performance execution environment for dynamic, static, and native languages. GraalVM supports Java, Scala, Kotlin, Groovy, and other JVM-based languages. At the same time, it can run dynamic scripting languages such as JavaScript (including Node.js), Ruby, R, and Python.
In this session you'll see demos and learn what you can do with GraalVM: using it as the JVM JIT compiler, enhancing the JIT, running native and polyglot programs, compiling them ahead of time for faster startup and lower runtime overhead, debugging polyglot code with the exact same tools for every language, profiling the performance and memory of your application, and embedding GraalVM in a native application for portability.
GraalVM offers you the opportunity to write code in the language that best suits the problem, and to run the resulting program really fast wherever you like: on the JVM, as native code, even inside a database.
Oracle Maximum Availability Architecture (MAA) Best Practices for the Cloud discusses MAA best practices mainly for the Oracle Cloud, explaining how MAA helps to improve availability in cloud environments as well as in the Autonomous Database, and how to ensure application continuity using the Oracle Database in the cloud.
Running Kubernetes Workloads on Oracle Cloud Infrastructure - Oracle Developers
Kubernetes is an open-source container orchestration platform that allows you to deploy and manage containerized applications at scale. This workshop will provide an introduction to how Oracle Cloud Infrastructure (OCI) provides a developer-friendly, container-native, and enterprise-ready managed Kubernetes. OCI enables rapid cluster creation and management while providing highly predictable performance, supporting pure bare metal, VM, or hybrid Kubernetes clusters.
Oracle Kubernetes Engine (OKE) is a managed Kubernetes service on OCI that makes it easy to run and manage complex workloads (such as AI/ML) with minimal administration. The Kubeflow project is designed to simplify the deployment of machine learning frameworks like TensorFlow on Kubernetes. This workshop will demonstrate how simple it is to deploy Kubeflow on OKE.
Managing Oracle Solaris Systems with Puppet - glynnfoster
This presentation covers how to manage Oracle Solaris systems using Puppet. We will cover the challenges facing the data center today, explain what Puppet is, and detail some of the work that was done to integrate Puppet with the core technology foundations included in the Oracle Solaris platform.
Learn what Docker is, how MySQL fits in, and why it makes sense to use them together. You’ll then learn how to leverage Oracle’s official MySQL Docker containers to improve your own development operations.
You can find the demo walkthrough here:
https://gist.github.com/mattlord/3afe25b23175df7791c4723be4f19ad4
(From Oracle Open World 2017)
Oracle OpenWorld 2017 presentation on Oracle RAC 12c Rel. 2 & Cluster Architecture Internals. Presented by Anil Nair together with Dave Hickson, Database Architect, British Telecom (BT).
This presentation focuses on new Cluster Architectures introduced with Oracle RAC 12c Rel. 2 and how internal enhancements in Oracle RAC can help to facilitate them.
Building Cloud Native Applications with Oracle Autonomous Database - Oracle Developers
In this session, Manish Kapur from the Oracle Application Development Cloud Platform team will provide an overview of Oracle's cloud-native application development platform. He will talk about developing and deploying cloud-native applications such as microservices and serverless functions using continuous integration and delivery (CI/CD) pipelines. This will include a demonstration of how to use the CI/CD approach to build and deploy a simple Node.js-based microservices application that uses the Oracle Autonomous Transaction Processing (ATP) database for persistence.
MySQL Shell - The DevOps Tool for MySQL - Miguel Araújo
Automation wasn’t always easy within MySQL-related operations, but now MySQL Shell can be used to make this integration better. For developers and DBAs, this session's purpose is to show the power of the new MySQL Shell for development, operations, automation, orchestration, setup, maintenance, and management of InnoDB clusters.
Live demos are not available in the slide deck. If you're interested in knowing more about those, feel free to contact me.
(From Oracle Open World 2017)
Disaster Recovery with MySQL InnoDB ClusterSet - What is it and how do I use it? - Miguel Araújo
MySQL InnoDB ClusterSet brings multi-datacenter capabilities to our solutions and makes it very easy to set up a disaster recovery architecture. Think of multiple MySQL InnoDB Clusters combined into one single database architecture, fully managed from MySQL Shell and with full MySQL Router integration to make it easy to access the entire architecture.
This presentation covers:
- The various features of InnoDB ClusterSet
- How to set up MySQL InnoDB ClusterSet
- Ways to migrate from an existing MySQL InnoDB Cluster to MySQL InnoDB ClusterSet
- How to deal with various failures
- The various features of MySQL Router integration that make connecting to the database architecture easy
Open source Apache Hadoop is a great framework for distributed processing of large data sets. But there’s a difference between “playing” with big data and solving real problems. The reality is that Hadoop alone is not enough. In fact, almost every organization that plans to use Hadoop in production quickly discovers that it lacks the features required for enterprise use. Fewer still have the Hadoop specialists on hand to navigate the complexity of building reliable, robust applications. As a result, many Hadoop projects never make it to production, as executives say, “we just don’t have the skills.” In this session, we will discuss these enterprise capabilities and why they’re important: analytics, visualization, security, enterprise integration, developer/admin tools, and more. Additionally, we will share several real-world examples of clients who have found it necessary to use an enterprise-grade Hadoop platform to tackle some of the most interesting and challenging business problems.
Artificial Intelligence in today's reality: how to make the best use of it - Amazon Web Services
Artificial Intelligence is here this time, to stay. For businesses, artificial intelligence materializes into solutions that improve the customer experience by optimizing, automating, and personalizing high-volume tasks while reducing cost and time, considerably accelerating the pace of innovation. In this session, we take a deeper look at the AWS AI services that drive innovation in the enterprise while maintaining compliance with regimes such as HIPAA, PCI, and more. Finally, we present the AWS architectures needed to support machine learning and deep learning workloads.
AI/ML is a Means to Digital Transformation, Not an End Itself - BESPIN GLOBAL
Why is it difficult for enterprises to adopt artificial intelligence (AI) and machine learning (ML)?
Learn how to achieve a successful digital transformation through Bespin Global's webinar materials.
[Agenda]
1. Adoption Challenges & Digital Transformation
2. Use Case 1: Product Quality Control
3. Use Case 2: Demand Forecasting
4. Summary & Next Step
Get to Know Your Customers - Build and Innovate with a Modern Data Architecture - Amazon Web Services
Your customers probably want a better experience with your brand. Your different business teams want and need better insights for their decision making. Almost certainly, your finance and operations teams require this to happen at a fraction of the cost of traditional on-premises options. Modern data architectures on AWS help many of our best customers realise all of those goals. Your business data contains critical information about customer behaviours, operational decisions, and many factors that have financial impact on your organisation. Increasingly, this data sits beyond your transactional systems, and is too big, too fast, and too complex for existing systems to handle. AWS data and analytics services are designed around our customers' requirements to ingest, store, analyse, and consume information at record-breaking scale. In this session you will learn how these services work together to deliver business automation and enhance customer engagement and intelligence.
Industrial IoT Applications: Making the Connection and Extracting Value (IOT3... - Amazon Web Services
Industrial IoT applications are rapidly emerging across industries such as oil and gas, manufacturing, and agriculture. In this interactive chalk talk, we help you architect end-to-end solutions that deliver value like predictive maintenance, manufacturing quality, and process monitoring, and we help you understand how to connect greenfield and brownfield infrastructure with AWS, leveraging both AWS Greengrass (on premises) and AWS Cloud services. Along the way, we show how the AWS Industrial IoT Reference Architecture is incorporated to build your industrial application.
Accelerate AI/ML Adoption with Intel Processors and C3IoT on AWS (AIM386-S) -... - Amazon Web Services
Today, organizations deploy more AI/ML workloads on AWS than on any other cloud platform. The cloud has removed many of the challenges associated with scalability, and it’s never been easier or more cost effective to build custom and intelligent data models. In this session, learn how the C3 Platform leverages the full power of Intel Xeon Scalable processors on AWS to rapidly train, deploy, and operationalize AI/ML and big data applications like C3 Inventory Optimization and C3 Predictive Maintenance. In addition, a customer shares how these solutions helped achieve demonstrable value. This session is brought to you by AWS partner, Intel.
Real-World AI and Deep Learning for Enterprise with Case Studies - Amazon Web Services
Artificial Intelligence is here this time, to stay. For the Enterprise, AI materializes into solutions that improve customers' experiences by optimizing, automating, and personalizing high-volume tasks while lowering cost and time to market, therefore accelerating innovation. In this session, we cover AWS' AI products and services that enable innovation in the enterprise while maintaining compliance with different regimes such as HIPAA, PCI, and more. Finally, we discuss enterprise architectures on AWS for machine learning and deep learning workloads.
Transforming Enterprise IT - Transformation Day Montreal 2018 - Amazon Web Services
AWS Transformation Day is designed for enterprise organizations looking to make the move to the cloud in order to become more responsive, agile and innovative, while still staying secure and compliant.
R, Spark, TensorFlow, H2O.ai Applied to Streaming Analytics - Kai Wähner
Slides from my talk at Codemotion Rome in March 2017. Development of analytic machine learning / deep learning models with R, Apache Spark ML, TensorFlow, H2O.ai, RapidMiner, KNIME, and TIBCO Spotfire. Deployment to real-time event processing / stream processing / streaming analytics engines like Apache Spark Streaming, Apache Flink, Kafka Streams, and TIBCO StreamBase.
How to Leverage Machine Learning (R, Hadoop, Spark, H2O) for Real Time Proces... - Codemotion
Big Data is key for innovation in many industries today. Large amounts of historical data are stored and analyzed in Hadoop, Spark, or other clusters to find patterns, e.g. for predictive maintenance or cross-selling. However, how do you proactively increase revenue or reduce risks in new transactions? Stream processing is the solution for embedding patterns into future actions in real time. This session discusses and demos how machine learning and analytic models built with R, Spark MLlib, H2O, etc. can be integrated into real-time event processing frameworks. The session focuses on live demos.
Adjusting OpenMP PageRank : SHORT REPORT / NOTES - Subhajit Sahu
For massive graphs that fit in RAM, but not in GPU memory, it is possible to take advantage of a shared-memory system with multiple CPUs, each with multiple cores, to accelerate PageRank computation. If the NUMA architecture of the system is properly taken into account with good vertex partitioning, the speedup can be significant. To take steps in this direction, experiments are conducted to implement PageRank in OpenMP using two different approaches, uniform and hybrid. The uniform approach runs all primitives required for PageRank in OpenMP mode (with multiple threads). On the other hand, the hybrid approach runs certain primitives (i.e., sumAt, multiply) in sequential mode.
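The uniform/hybrid split above can be made concrete with a sketch of the primitives involved. This is a minimal illustration in plain Python (not OpenMP), and all function and variable names here are my own, not from the report:

```python
# Minimal sketch of one PageRank iteration built from a "multiply" (map) step
# and a "sumAt" (gather) step. The graph is given in CSR form:
# csr_offsets[u]..csr_offsets[u+1] index the out-edges of vertex u in csr_targets.

def pagerank_iteration(ranks, csr_offsets, csr_targets, out_degree, damping=0.85):
    n = len(ranks)
    # "multiply" (map): scale each rank by the inverse out-degree
    contrib = [r / d if d > 0 else 0.0 for r, d in zip(ranks, out_degree)]
    # "sumAt" (gather): accumulate contributions along out-edges
    new_ranks = [0.0] * n
    for u in range(n):
        for i in range(csr_offsets[u], csr_offsets[u + 1]):
            new_ranks[csr_targets[i]] += contrib[u]
    # teleport term
    return [(1 - damping) / n + damping * x for x in new_ranks]
```

In the uniform approach both steps would run under OpenMP threads; in the hybrid approach, multiply and sumAt would run sequentially while the remaining work stays parallel.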
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Data and AI
Round table discussion of vector databases, unstructured data, AI, big data, real-time systems, robots, and Milvus.
A lively discussion with NJ Gen AI Meetup Lead, Prasad and Procure.FYI's Co-Found
Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ... - Subhajit Sahu
Abstract — Levelwise PageRank is an alternative method of PageRank computation which decomposes the input graph into a directed acyclic block-graph of strongly connected components and processes them in topological order, one level at a time. This enables ranks to be calculated in a distributed fashion without per-iteration communication, unlike the standard method where all vertices are processed in each iteration. It comes, however, with the precondition that the input graph contains no dead ends. Here, the native non-distributed performance of Levelwise PageRank was compared against Monolithic PageRank on a CPU as well as a GPU. To ensure a fair comparison, Monolithic PageRank was also performed on a graph where vertices were split by components. Results indicate that Levelwise PageRank is about as fast as Monolithic PageRank on the CPU, but quite a bit slower on the GPU. The slowdown on the GPU is likely caused by a large submission of small workloads, and is expected to be a non-issue when the computation is performed on massive graphs.
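The levelwise scheme can be reduced to a simple driver loop. The sketch below is my own simplification in Python; `update_level` stands in for a hypothetical helper, not part of the report's code:

```python
# `levels` lists the strongly connected components grouped into topological
# levels; each level is iterated to convergence before the next, since its
# ranks depend only on earlier levels. `update_level(vertices)` is assumed to
# perform one PageRank sweep over `vertices` and return the largest rank change.

def levelwise_pagerank(levels, update_level, tol=1e-10, max_iters=100):
    for vertices in levels:
        for _ in range(max_iters):
            if update_level(vertices) < tol:
                break
```

Because each level is finished before the next begins, no per-iteration communication across levels is needed, which is what makes the distributed variant attractive.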
The Building Blocks of QuestDB, a Time Series Database - javier ramirez
Talk delivered at the Valencia Codes Meetup, 2024-06.
Traditionally, databases have treated timestamps as just another data type. However, when performing real-time analytics, timestamps should be first-class citizens, and we need rich time semantics to get the most out of our data. We also need to deal with ever-growing datasets while staying performant, which is as fun as it sounds.
It is no wonder time-series databases are now more popular than ever before. Join me in this session to learn about the internal architecture and building blocks of QuestDB, an open-source time-series database designed for speed. We will also review some of the changes we have made over the past two years to deal with late and unordered data, non-blocking writes, read replicas, and faster batch ingestion.
Adjusting primitives for graph : SHORT REPORT / NOTES - Subhajit Sahu
Graph algorithms, like PageRank, typically operate on a graph representation such as Compressed Sparse Row (CSR), an adjacency-list based format.
Multiply with different modes (map)
1. Performance of sequential vs OpenMP-based vector multiply.
2. Comparing various launch configs for CUDA-based vector multiply.
Sum with different storage types (reduce)
1. Performance of vector element sum using float vs bfloat16 as the storage type.
Sum with different modes (reduce)
1. Performance of sequential vs OpenMP-based vector element sum.
2. Performance of memcpy-based vs in-place CUDA vector element sum.
3. Comparing various launch configs for CUDA-based vector element sum (memcpy).
4. Comparing various launch configs for CUDA-based vector element sum (in-place).
Sum with in-place strategies of CUDA mode (reduce)
1. Comparing various launch configs for CUDA-based vector element sum (in-place).
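The float vs bfloat16 storage comparison above comes down to accumulator precision. As a rough illustration (my own, in Python; bfloat16 is emulated by truncating a float32 encoding to its top 16 bits):

```python
import struct

def to_bfloat16(x):
    # emulate bfloat16 by keeping only the top 16 bits (sign, 8 exponent,
    # 7 mantissa) of the float32 encoding; truncation, not rounding
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    return struct.unpack('<f', struct.pack('<I', bits & 0xFFFF0000))[0]

def vector_sum(values, storage=float):
    # "sum" (reduce): the accumulator is coerced to the storage type each step
    total = 0.0
    for v in values:
        total = storage(total + v)
    return total
```

With float storage the sum of 1000 ones is exact; with a bfloat16 accumulator (7 explicit mantissa bits) the total stalls at 256, because adding 1.0 to 256 falls entirely below the truncated bits. This precision/space trade-off is what makes the storage-type benchmark interesting.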
Techniques to optimize the PageRank algorithm usually fall into two categories. One is to try reducing the work per iteration, and the other is to try reducing the number of iterations. These goals are often at odds with one another. Skipping computation on vertices which have already converged has the potential to save iteration time. Skipping in-identical vertices, which share the same in-links, helps avoid duplicate computations and thus could also reduce iteration time. Road networks often have chains which can be short-circuited before PageRank computation to improve performance; the final ranks of chain nodes can be easily calculated afterwards. This could reduce both the iteration time and the number of iterations. If a graph has no dangling nodes, the PageRank of each strongly connected component can be computed in topological order. This could help reduce the iteration time and the number of iterations, and also enable multi-iteration concurrency in PageRank computation. The combination of all of the above methods is the STICD algorithm [sticd]. For dynamic graphs, unchanged components whose ranks are unaffected can be skipped altogether.