Big data, compliance, and a highly skilled workforce are driving organizations to transform their analytical infrastructure to deliver enterprise computing environments that support the latest data science and analytics practices. SAS remains a popular statistical programming language, but there is growing demand for R and Python. Data engineers are now tasked with delivering scalable, highly available computing resources that support analytics for a growing number of users and increasing data volumes while maintaining security for their customers.
REST API debate: OData vs GraphQL vs ORDS - Sumit Sarkar
Learn the latest industry trends surrounding REST API standardization and what this means for your roadmap. OData is an OASIS standard REST API and has been established among tech companies such as Microsoft, SAP, CA, IBM and Salesforce. GraphQL was created by Facebook in 2015 and has already been deployed at tech companies such as Facebook, Shopify and Intuit. ORDS is the Oracle REST API and delivers similar standardization for Oracle-centric applications.
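The practical difference between the two REST styles shows up in the shape of a request. Below is a minimal sketch of the same question ("accounts in California, name and revenue only") phrased both ways; the service URL, entity, and field names are hypothetical, not from any specific product.

```python
# Illustrative only: endpoint, entity, and field names are invented.
from urllib.parse import urlencode

# OData: the query lives in standardized URL parameters ($filter, $select)
# against a named entity set.
base = "https://example.com/odata/Accounts"
params = {"$filter": "State eq 'CA'", "$select": "Name,AnnualRevenue"}
odata_url = f"{base}?{urlencode(params)}"

# GraphQL: a query document is POSTed to a single endpoint; the client names
# exactly the fields it wants back.
graphql_query = """
query {
  accounts(filter: { state: "CA" }) {
    name
    annualRevenue
  }
}
"""

print(odata_url)
```

OData standardizes the query language in the URL itself, which is what makes generic, clicks-not-code clients possible; GraphQL standardizes the query document and leaves the schema to each service.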
Building a Hybrid Data Pipeline for Salesforce and Hadoop - Sumit Sarkar
My team embarked on building a data lake for our sales and marketing data to better understand customer journeys. This required building a hybrid data pipeline to connect our cloud CRM with the new Hadoop data lake. One challenge was that IT was not in a position to provide support until we proved value, and marketing did not have the experience, so we took on the journey ourselves within the product marketing team for our line of business within Progress. The key to delivering on this was a bi-directional data pipeline built on standard interfaces to connect the systems. On the Salesforce side, we got frictionless access to the data lake using clicks-not-code via OData. On the Hadoop side, we ingested data from Salesforce using JDBC with Apache Sqoop. Join us to hear best practices and lessons learned.
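The Sqoop-over-JDBC ingest mentioned above follows a standard command-line pattern. A sketch of the invocation, assembled as an argument list; the JDBC URL, driver class, table, and target directory are hypothetical placeholders, since the exact values depend on which Salesforce JDBC driver is used.

```python
# Sketch of a Sqoop import over JDBC. All connection details below are
# hypothetical placeholders for whatever JDBC driver is actually deployed.
import shlex

sqoop_import = [
    "sqoop", "import",
    "--connect", "jdbc:example:sforce://login.salesforce.com",  # hypothetical JDBC URL
    "--driver", "com.example.jdbc.SForceDriver",                # hypothetical driver class
    "--username", "analyst@example.com",
    "--table", "OPPORTUNITY",                  # Salesforce object exposed as a table
    "--target-dir", "/data/lake/opportunity",  # HDFS landing directory
    "--num-mappers", "4",                      # parallel extract tasks
]

cmd = shlex.join(sqoop_import)
print(cmd)
```

Because the driver presents Salesforce objects as relational tables, Sqoop can parallelize the extract with mappers exactly as it would against an ordinary database.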
Journey to Marketing Data Lake [BRK1098] - Sumit Sarkar
The challenge this session’s speaker and his colleagues faced in trying to learn more about customer experiences was that insights are fragmented across different systems such as Oracle Eloqua, CRM, and web analytics. To better understand their contacts, they started with the corporate data warehouse, which was missing much of this lower-value, detailed data. When they considered expanding the data warehouse, it was difficult to define in advance what questions they wanted to answer, because the questions vary from campaign to campaign. So they embarked on building a Hadoop-based data lake for the flexibility to ask any question, with an ad hoc, schema-on-read approach, against customer data sets at varying levels of detail, to better understand what their visitors want to consume.
Breakout Session
Wednesday, Apr 26, 5:45 p.m. | Mandalay Bay D
https://oracle.rainfocus.com/scripts/catalog/oracleCx17.jsp?search=BRK1098
Data APIs Don't Discriminate [API World Stage Talk] - Sumit Sarkar
The exploding API economy, combined with an advanced analytics market projected to reach $30 billion by 2019, is driving market demand to expose more data through APIs. Business analysts, data engineers, and data scientists have been left behind in existing API strategies. This is because many APIs are designed to integrate with applications to extend functionality; these data workers, however, are looking for APIs that facilitate direct data access to support analytics. Data APIs are specifically designed to provide that frictionless data access experience across standard, interoperable interfaces such as OData (REST) or ODBC/JDBC (SQL). Consider expanding your API strategy to serve the developers in this $30 billion market.
Cloud applications are seeing a deluge of requests to support the exploding advanced analytics market. “Open analytics” is the emerging strategy to deliver that data through an open data access layer, in the cloud, to be directly consumed by external analytics tools and popular programming languages. An increasing number of data engineers and data scientists use a variety of platforms and advanced analytics languages such as SAS, R, Python and Java, as well as frameworks such as Hadoop and Spark. Cloud APIs are commonly designed to support application integration, representing a disconnect with the analytics ecosystem. These combined trends create significant demand for a “bring-your-own-analytics” (BYOA) capability for cloud applications. Your cloud may already be smart, but giving users frictionless access to your data will make everyone smarter.
The document discusses a new data pipeline called Progress DataDirect Hybrid Data Pipeline. It transforms how clouds access data by providing firewall-friendly and secure connectivity to on-premises and other cloud data sources. It acts as a single interface to various cloud APIs and exposes data sources as standard SQL and REST. This allows for expanded connectivity options and helps solve challenges around hybrid cloud integration and accessing data located in different environments or clouds.
Salesforce analytics and BI continue to be hot, trending topics as organizations implement new platforms to improve their customer intelligence. But what’s the best way to access the data? SOQL is the popular query language for Salesforce; however, SQL may be better suited to accessing data for analytics. Join us in the great SOQL vs. SQL query debate to see which one is best for your analytics project.
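To make the debate concrete, here is the same question phrased in both languages, a sketch using standard Salesforce objects. SOQL reaches parent records through relationship dot notation and has no explicit JOIN, while SQL expresses the relationship as a join.

```python
# The same report ("closed-won opportunities with their account names")
# written as SOQL and as SQL. Object and field names follow the standard
# Salesforce Opportunity and Account objects.

soql = """
SELECT Name, Account.Name, Amount
FROM Opportunity
WHERE StageName = 'Closed Won'
"""

sql = """
SELECT o.Name, a.Name, o.Amount
FROM Opportunity o
JOIN Account a ON o.AccountId = a.Id
WHERE o.StageName = 'Closed Won'
"""

print(soql)
print(sql)
```

The dot notation is convenient inside Salesforce, but the explicit-join form is what generic BI and analytics tools generate, which is the heart of the SQL-for-analytics argument.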
Firewall-friendly pipeline for secure data access - Sumit Sarkar
This webinar discusses how to establish secure connections between cloud applications and on-premises data behind firewalls. It presents common connection options like VPNs, SSH tunneling, and reverse proxies, and recommends a vendor-agnostic service that provides a managed open data connection. The webinar covers best practices for scalability, availability, and end-to-end monitoring. It provides examples of how BOARD and Intuit leverage such a connection service to access on-premises data from their cloud applications.
OData External Data Integration Strategies for SaaS - Sumit Sarkar
This document discusses OData integration strategies for SaaS applications. It provides an overview of the OData standard and why SaaS vendors are adopting it. It then describes how Oracle Service Cloud uses OData accelerators to integrate with external data sources like Salesforce and Siebel. These accelerators allow agents to access and edit external data without leaving the Service Cloud interface.
Presenter: Mike Johnson
The Big Data ecosystem is disrupting things for the good and not so good. Learn how we deal with this from a connectivity perspective to get insights about the ecosystem, including the latest commercial and open source projects we’re tracking.
This document discusses accessing NoSQL databases like MongoDB from SQL. It begins with an introduction to NoSQL and examples of JSON documents and key-value stores. It then covers the benefits of NoSQL like high performance, availability, and scalability. Common NoSQL implementations like MongoDB, Cassandra, and MarkLogic are described. The challenges of connecting to NoSQL databases from SQL are discussed. DataDirect connectors are presented as a solution for providing SQL access to NoSQL databases. They normalize the NoSQL data model and provide full ANSI SQL support. Performance and real-world case studies are also discussed.
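The "normalize the NoSQL data model" step is the crux of SQL-on-NoSQL access. A toy sketch of the idea: a nested, MongoDB-style document is flattened into parent and child relational rows, roughly what such a connector exposes to SQL clients. The document and field names are invented for illustration.

```python
# Toy normalization of a nested document into relational rows.
# Field names are invented; real connectors infer this mapping from sampling.

doc = {
    "_id": 1,
    "name": "Acme",
    "orders": [
        {"sku": "A-100", "qty": 2},
        {"sku": "B-200", "qty": 1},
    ],
}

# Parent table: one row per document, scalar fields only.
customer_row = {k: v for k, v in doc.items() if not isinstance(v, list)}

# Child table: one row per array element, keyed back to the parent document,
# so a SQL client can JOIN the two tables on customer_id = _id.
order_rows = [{"customer_id": doc["_id"], **order} for order in doc["orders"]]

print(customer_row)
print(order_rows)
```

Arrays become child tables and nested objects become extra columns, which is how a document store can present a schema that ANSI SQL tools understand.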
Navigating Your Product's Growth with Embedded Analytics - Progress
Presenter: Guarav Verma
Learn from real life applications for embedded product analytics from Telerik. In today’s data driven world, how can you leverage analytics to know your audience, improve their experience, focus on your loyal users to drive more revenue, and optimize your engineering effort to accelerate your business results? Know what the future of Telerik Analytics is like and be a part of it.
This document discusses Talend's integration capabilities with Cloudera's Distribution including Hadoop (CDH). It highlights Talend's ability to connect external data sources to Hadoop and HDFS, leverage MapReduce in Talend job design, and provides an overview of Talend's Hadoop integration features such as graphical flow design, connecting over 450 data sources to Hadoop, processing data inside Hadoop using HiveQL and Pig, and mass importing/exporting between Hadoop and relational databases.
HA, Scalability, DR & MAA in Oracle Database 21c - Overview - Markus Michalewicz
Oracle Database 21c is Oracle's first Innovation Release and includes many new and innovative HA, Scalability, DR & MAA features to provide the most scalable and reliable Oracle Database available today. This presentation discusses some of the database and infrastructure features contributing to this unprecedented level of resiliency.
How to Prepare Your Toolbox for the Future of SharePoint Development - Progress
SharePoint is changing: instead of learning the Microsoft version of a technology that’s rapidly becoming outdated, developers can now use the latest and greatest in jQuery and Angular (or Knockout.js, React.js, etc.) and create great SharePoint UI.
The future of SharePoint development and customization is the SharePoint Framework (SPFx), a client-side framework that allows JavaScript customizations to work on top of SharePoint Online/Office 365. Let’s put to work a toolset of web technologies, including Angular, Webpack and Kendo UI controls, to build a simple yet useful application and get started with the web stack today.
Download this whitepaper to:
* Get excited about the new SharePoint Framework (SPFx) and related web stack technologies
* See a great set of tools in action
* Learn how to build a practical SharePoint business application using modern web technology
This whitepaper is by SharePoint Gurus, an award-winning consultancy based in Sydney, Australia, that specializes in improving productivity through configuring and developing Microsoft SharePoint technologies.
FlexPod with SAP HANA and SAP Applications - Lishantian
This document discusses Cisco and NetApp solutions for implementing SAP HANA, including:
1) The FlexPod approach which provides a simplified architecture for deploying SAP HANA appliances on Cisco UCS and NetApp storage up to 48TB.
2) Implementing SAP HANA using Tailored Data Center Integration (TDI) on FlexPod, which provides more flexibility compared to appliance-based deployments.
3) Two use cases for SAP HANA TDI involving running multiple SAP HANA production systems on a single Cisco UCS, and reusing an existing data center network rather than network components included in the solution.
This presentation discusses the top five reasons, as well as various technology updates, to provide a reasonable answer to the rather common question: "Why should one use an Oracle Database?" This 2020 "C-Edition" was first presented during the IOUG / Quest Forum Digital Event: Database & Tech Week in June 2020 and subsequently updated based on feedback received.
How Universities Use Big Data to Transform Education - Hortonworks
Student performance data is increasingly being captured as part of software-based and online classroom exercises and testing. This data can be augmented with behavioral data captured from sources such as social media, student-professor meeting notes, blogs, student surveys, and so forth to discover new insights to improve student learning. The results transcend traditional IT departments to focus on issues like retention, research, and the delivery of content and courses through new modalities.
Hortonworks is partnering with Microsoft to show you how the Hortonworks Data Platform (HDP) running on the Microsoft stack enables you to develop a “single view of a student”.
Salesforce External Objects for Big Data - Sumit Sarkar
Transform Salesforce into the system of engagement for your big data. Discuss best practices and lessons learned in accessing external data sets in Hadoop or Spark using Salesforce Connect. Leave the big data sets behind the firewall, and give your users on-demand access to big data insights using external objects with Salesforce Connect.
In this session we will cover:
Intro to Salesforce Connect
Intro to Big Data Landscape
How to connect Salesforce to Big Data using External Data Sources
Lessons Learned accessing Big Data using External Objects for native reporting, writes, lookups, search and more
Resources (How to learn more)
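The external data source mechanism in the agenda above boils down to Salesforce consuming an OData feed: each entity set the service publishes surfaces as an external object. A minimal sketch of the response shape involved; the service URL, entity set, and fields are hypothetical.

```python
# Sketch of the OData v4 JSON a backing service might return for one entity
# set. Service URL, entity set, and field names are invented for illustration.
import json

response = {
    "@odata.context": "https://example.com/odata/$metadata#Orders",
    "value": [
        {"OrderId": 1001, "Status": "SHIPPED", "Total": 250.0},
        {"OrderId": 1002, "Status": "OPEN", "Total": 99.5},
    ],
}

# Each element of "value" would surface in Salesforce as one external object
# record, queried on demand rather than copied into the org.
print(json.dumps(response, indent=2))
```

Because records are fetched at request time, the big data stays behind the firewall while reports, lookups, and search in Salesforce see it as live rows.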
The document discusses Oracle's data integration products and big data solutions. It outlines five core capabilities of Oracle's data integration platform, including data availability, data movement, data transformation, data governance, and streaming data. It then describes eight core products that address real-time and streaming integration, ELT integration, data preparation, streaming analytics, dataflow ML, metadata management, data quality, and more. The document also outlines five cloud solutions for data integration including data migrations, data warehouse integration, development and test environments, high availability, and heterogeneous cloud. Finally, it discusses pragmatic big data solutions for data ingestion, transformations, governance, connectors, and streaming big data.
Make Your Application “Oracle RAC Ready” & Test For It - Markus Michalewicz
This presentation explains the secrets behind Oracle RAC’s horizontal scaling algorithm, Cache Fusion, and how you can ensure that your application is “Oracle RAC ready.” It discusses dos and don'ts and how to test your application for "Oracle RAC readiness". This version was first presented at Sangam19.
Oracle Solaris: Build and Run Applications Better on 11.3 - OTN Systems Hub
Build and Run Applications Better on Oracle Solaris 11.3
Tech Day, NYC
Liane Praza, Senior Principal Software Engineer
Ikroop Dhillon, Principal Product Manager
June, 2016
"Changing Role of the DBA" Skills to Have, to Obtain & to Nurture - Updated 2...Markus Michalewicz
The ever-changing IT industry requires DBAs to keep their skills up to date. This presentation discusses skills that any DBA should have, but also those that any DBA should obtain and nurture regardless of which new technology is entering the (Gartner) hype cycle. The first-ever version of this deck was presented during Sangam18 under the title "(Oracle) DBA Skills to Have, to Obtain and to Nurture" and used on other occasions during 2019. It was subsequently enhanced into a more generic 2019 version, which included an outlook for 2020. This edition maintains the generic character, but has been updated to reflect the unprecedented changes of 2020 and to cover the latest Oracle technology, providing a three-year comparison as well as a trends analysis.
Note that the link on slide 25 in the subtitle should have been: https://go.oracle.com/DBA
Pivotal Big Data Suite: A Technical Overview - VMware Tanzu
Pivotal provides a suite of big data products including Pivotal Greenplum Database, Pivotal HDB, and Pivotal GemFire. Greenplum Database is an open source massively parallel processing data warehouse. HDB is an open source analytical database for Apache Hadoop. GemFire is an open source application and transaction data grid. The suite provides a complete platform for big data with deployment options, advanced data services, and flexible licensing.
FlexPod Select for Hadoop is a pre-validated solution from Cisco and NetApp that provides an enterprise-class architecture for deploying Apache Hadoop workloads at scale. The solution includes Cisco UCS servers and fabric interconnects for compute, NetApp storage arrays, and Cloudera's Distribution of Apache Hadoop for the software stack. It offers benefits like high performance, reliability, scalability, simplified management, and reduced risk for organizations running business-critical Hadoop workloads.
Big Data Integration Webinar: Getting Started With Hadoop Big Data - Pentaho
This document discusses getting started with big data analytics using Hadoop and Pentaho. It provides an overview of installing and configuring Hadoop and Pentaho on a single machine or cluster. Dell's Crowbar tool is presented as a way to quickly deploy Hadoop clusters on Dell hardware in about two hours. The document also covers best practices like leveraging different technologies, starting with small datasets, and not overloading networks. A demo is given and contact information provided.
Horses for Courses: Database Roundtable - Eric Kavanagh
The blessing and curse of today's database market? So many choices! While relational databases still dominate the day-to-day business, a host of alternatives has evolved around very specific use cases: graph, document, NoSQL, hybrid (HTAP), column store, the list goes on. And the database tools market is teeming with activity as well. Register for this special Research Webcast to hear Dr. Robin Bloor share his early findings about the evolving database market. He'll be joined by Steve Sarsfield of HPE Vertica, and Robert Reeves of Datical in a roundtable discussion with Bloor Group CEO Eric Kavanagh. Send any questions to info@insideanalysis.com, or tweet with #DBSurvival.
Firewall friendly pipeline for secure data accessSumit Sarkar
This webinar discusses how to establish secure connections between cloud applications and on-premises data behind firewalls. It presents common connection options like VPNs, SSH tunneling, and reverse proxies, and recommends a vendor-agnostic service that provides a managed open data connection. The webinar covers best practices for scalability, availability, and end-to-end monitoring. It provides examples of how BOARD and Intuit leverage such a connection service to access on-premises data from their cloud applications.
OData External Data Integration Strategies for SaaSSumit Sarkar
This document discusses OData integration strategies for SaaS applications. It provides an overview of the OData standard and why SaaS vendors are adopting it. It then describes how Oracle Service Cloud uses OData accelerators to integrate with external data sources like Salesforce and Siebel. These accelerators allow agents to access and edit external data without leaving the Service Cloud interface.
Presenter: Mike Johnson
The Big Data ecosystem is disrupting things for the good and not so good. Learn how we deal with this from a connectivity perspective to get insights about the ecosystem, including the latest commercial and open source projects we’re tracking.
This document discusses accessing NoSQL databases like MongoDB from SQL. It begins with an introduction to NoSQL and examples of JSON documents and key-value stores. It then covers the benefits of NoSQL like high performance, availability, and scalability. Common NoSQL implementations like MongoDB, Cassandra, and MarkLogic are described. The challenges of connecting to NoSQL databases from SQL are discussed. DataDirect connectors are presented as a solution for providing SQL access to NoSQL databases. They normalize the NoSQL data model and provide full ANSI SQL support. Performance and real-world case studies are also discussed.
Navigating Your Product's Growth with Embedded Analytics Progress
Presenter: Guarav Verma
Learn from real life applications for embedded product analytics from Telerik. In today’s data driven world, how can you leverage analytics to know your audience, improve their experience, focus on your loyal users to drive more revenue, and optimize your engineering effort to accelerate your business results? Know what the future of Telerik Analytics is like and be a part of it.
This document discusses Talend's integration capabilities with Cloudera's Distribution including Hadoop (CDH). It highlights Talend's ability to connect external data sources to Hadoop and HDFS, leverage MapReduce in Talend job design, and provides an overview of Talend's Hadoop integration features such as graphical flow design, connecting over 450 data sources to Hadoop, processing data inside Hadoop using HiveQL and Pig, and mass importing/exporting between Hadoop and relational databases.
HA, Scalability, DR & MAA in Oracle Database 21c - OverviewMarkus Michalewicz
Oracle Database 21c is Oracle's first Innovation Release and includes a lot of new and innovative HA, Scalability, DR & MAA features to provide the most scalable and reliable Oracle Database available today. This presentation discusses some of the database as well as infrastructure features contributing to this unprecedented level of resiliency.
How to Prepare Your Toolbox for the Future of SharePoint DevelopmentProgress
SharePoint is changing: instead of learning the Microsoft version of a technology that’s rapidly becoming outdated, developers can now use the latest and greatest in jQuery and Angular (or Knockout.js, React.js, etc.) and create great SharePoint UI.
The future of SharePoint development and customization is the SharePoint Framework (SPFx), a client-side based framework that allows JavaScript customizations to work on top of SharePoint Online/Office 365. Let’s put to work a toolset of web technologies, including Angular, Webpack and Kendo UI controls, to build a simple yet useful application and get started with the web stack today.
Download this whitepaper to:
* Get excited about the new SharePoint Framework (SPFx) and related web stack technologies
* See a great set of tools in action
* Learn how to build a practical SharePoint business application using modern web technology
This whitepaper is by SharePoint Gurus, an award-winning consultancy based in Sydney, Australia, that specializes in improving productivity through configuring and developing Microsoft SharePoint technologies.
Flexpod with SAP HANA and SAP ApplicationsLishantian
This document discusses Cisco and NetApp solutions for implementing SAP HANA, including:
1) The FlexPod approach which provides a simplified architecture for deploying SAP HANA appliances on Cisco UCS and NetApp storage up to 48TB.
2) Implementing SAP HANA using Tailored Data Center Integration (TDI) on FlexPod, which provides more flexibility compared to appliance-based deployments.
3) Two use cases for SAP HANA TDI involving running multiple SAP HANA production systems on a single Cisco UCS, and reusing an existing data center network rather than network components included in the solution.
This presentation discusses the top 5 reasons as well as various technology updates to provide a reasonable answer to the rather common question: "Why should one use an Oracle Database?". This "2020 "C-Edition" was first presented during the IOUG / Quest Forum Digital Event: Database & tech Week in June 2020 and subsequently updated based on feedback received.
How Universities Use Big Data to Transform EducationHortonworks
Student performance data is increasingly being captured as part of software-based and online classroom exercises and testing. This data can be augmented with behavioral data captured from sources such as social media, student-professor meeting notes, blogs, student surveys, and so forth to discover new insights to improve student learning. The results transcend traditional IT departments to focus on issues like retention, research, and the delivery of content and courses through new modalities.
Hortonworks is partnering with Microsoft to show you how the Hortonworks Data Platform (HDP) running on the Microsoft stack enables you to develop a “single view of a student”.
Salesforce External Objects for Big DataSumit Sarkar
Transform Salesforce into the system of engagement for your big data. Discuss best practices and lessons learned in accessing external data sets in Hadoop or Spark using Salesforce Connect. Leave the big data sets behind the firewall, and get on demand access for your users to big data insights using external objects with Salesforce Connect.
In this session we will cover:
Intro to Salesforce Connect
Intro to Big Data Landscape
How to connect Salesforce to Big Data using External Data Sources
Lessons Learned accessing Big Data using External Objects for native reporting, writes, lookups, search and more
Resources (How to learn more)
The document discusses Oracle's data integration products and big data solutions. It outlines five core capabilities of Oracle's data integration platform, including data availability, data movement, data transformation, data governance, and streaming data. It then describes eight core products that address real-time and streaming integration, ELT integration, data preparation, streaming analytics, dataflow ML, metadata management, data quality, and more. The document also outlines five cloud solutions for data integration including data migrations, data warehouse integration, development and test environments, high availability, and heterogeneous cloud. Finally, it discusses pragmatic big data solutions for data ingestion, transformations, governance, connectors, and streaming big data.
Make Your Application “Oracle RAC Ready” & Test For ItMarkus Michalewicz
This presentation talks about the secrets behind Oracle RAC’s horizontal scaling algorithm, Cache Fusion, and how you can ensure that your application is “Oracle RAC ready.”. It discusses do's and don'ts and how to test your application for "Oracle RAC readiness". This version was first presented in Sangam19.
Oracle Solaris Build and Run Applications Better on 11.3OTN Systems Hub
Build and Run Applications Better on Oracle Solaris 11.3
Tech Day, NYC
Liane Praza, Senior Principal Software Engineer
Ikroop Dhillon, Principal Product Manager
June, 2016
"Changing Role of the DBA" Skills to Have, to Obtain & to Nurture - Updated 2...Markus Michalewicz
The ever-changing IT industry requires DBA's to keep their skills up-to-date. This presentation discusses skills that any DBA should have, but also those that any DBA should obtain and nurture regardless of which new technology is entering the (Gartner) hype cycle. The first ever version of this deck was presented during Sangam18 under the title "(Oracle) DBA Skills to Have, to Obtain and to Nurture" and used in other occasions during 2019. It was subsequently enhanced to a more generic 2019 version, which included an outlook for 2020! This edition of the presentation maintains the generic character, but has been updated to reflect unprecedented changes in 2020 and to cover the latest Oracle technology, to provide a 3-year comparison as well as trends analysis.
Note that the link on slide 25 in the subtitle should have been: https://go.oracle.com/DBA
Pivotal Big Data Suite: A Technical OverviewVMware Tanzu
Pivotal provides a suite of big data products including Pivotal Greenplum Database, Pivotal HDB, and Pivotal GemFire. Greenplum Database is an open source massively parallel processing data warehouse. HDB is an open source analytical database for Apache Hadoop. GemFire is an open source application and transaction data grid. The suite provides a complete platform for big data with deployment options, advanced data services, and flexible licensing.
FlexPod Select for Hadoop is a pre-validated solution from Cisco and NetApp that provides an enterprise-class architecture for deploying Apache Hadoop workloads at scale. The solution includes Cisco UCS servers and fabric interconnects for compute, NetApp storage arrays, and Cloudera's Distribution of Apache Hadoop for the software stack. It offers benefits like high performance, reliability, scalability, simplified management, and reduced risk for organizations running business-critical Hadoop workloads.
Big Data Integration Webinar: Getting Started With Hadoop Big Data - Pentaho
This document discusses getting started with big data analytics using Hadoop and Pentaho. It provides an overview of installing and configuring Hadoop and Pentaho on a single machine or cluster. Dell's Crowbar tool is presented as a way to quickly deploy Hadoop clusters on Dell hardware in about two hours. The document also covers best practices like leveraging different technologies, starting with small datasets, and not overloading networks. A demo is given and contact information provided.
Horses for Courses: Database Roundtable - Eric Kavanagh
The blessing and curse of today's database market? So many choices! While relational databases still dominate the day-to-day business, a host of alternatives has evolved around very specific use cases: graph, document, NoSQL, hybrid (HTAP), column store, the list goes on. And the database tools market is teeming with activity as well. Register for this special Research Webcast to hear Dr. Robin Bloor share his early findings about the evolving database market. He'll be joined by Steve Sarsfield of HPE Vertica, and Robert Reeves of Datical in a roundtable discussion with Bloor Group CEO Eric Kavanagh. Send any questions to info@insideanalysis.com, or tweet with #DBSurvival.
Webinar: DataStax Enterprise 5.0 What’s New and How It’ll Make Your Life Easier - DataStax
Want help building applications with real-time value at epic scale? How about solving your database performance and availability issues? Then, you want to hear more about DataStax Enterprise 5.0. Join this webinar to learn what’s new in DSE 5.0 ‒ the largest software release to date at DataStax. DSE 5.0 introduces multi-model support including Graph and JSON data models along with a ton of new and enhanced enterprise database capabilities.
View webinar recording here: https://youtu.be/3pfm4ntASJ0
This document outlines Ananth Bala's presentation on bridging data silos for business insights. The agenda includes discussing data silos, demonstrating how to derive intelligence from multiple data sources, interactive multi-device reporting, and an overview of the hybrid data pipeline. The presentation notes that data silos have grown due to different systems of record, the rise of SaaS solutions, limited integration, and disparate teams. It introduces Progress DataDirect as a way to provide a single interface and standards-based connectivity to bridge these silos. Demos are shown using DataDirect to aggregate data from multiple sources and create reports accessible across devices.
Developing Enterprise Consciousness: Building Modern Open Data Platforms - ScyllaDB
ScyllaDB, alongside some of the other major distributed real-time technologies, gives businesses a unique opportunity to achieve enterprise consciousness: a business platform that delivers data to the people who need it, when they need it, anytime, anywhere.
This talk covers how modern tools in the open data platform can help companies synchronize data across their applications using open source tools and technologies and more modern low-code ETL/ReverseETL tools.
Topics:
- Business Platform Challenges
- What Enterprise Consciousness Solves
- How ScyllaDB Empowers Enterprise Consciousness
- What ScyllaDB can do for big companies
- What ScyllaDB can do for smaller companies
Oracle Openworld Presentation with Paul Kent (SAS) on Big Data Appliance and ... - jdijcks
Learn about the benefits of Oracle Big Data Appliance and how it can drive business value underneath applications and tools. This includes a section by Paul Kent, VP Big Data SAS describing how SAS runs well on Oracle Engineered Systems and on Oracle Big Data Appliance specifically.
Modern data management using Kappa and streaming architectures, including a discussion by eBay's Connie Yang about the Rheos platform and the use of Oracle GoldenGate, Kafka, Flink, etc.
Insights into Real World Data Management Challenges - DataWorks Summit
Data is your most valuable business asset, and it's also your biggest challenge. This challenge and opportunity means we continually face significant roadblocks on the path to becoming a data-driven organisation. From the management of data, to the bubbling open source frameworks, to limited industry skills, to surmounting time and cost pressures, our challenge in data is big.
We all want and need a “fit for purpose” approach to the management of data, especially Big Data, and overcoming the ongoing challenges around the ‘3Vs’ means we get to focus on the most important V: ‘Value’. Come along and join the discussion on how Oracle Big Data Cloud provides value in the management of data and supports your move toward becoming a data-driven organisation.
Speaker
Noble Raveendran, Principal Consultant, Oracle
Oracle Big Data Appliance and Big Data SQL for advanced analytics - jdijcks
Overview presentation showing Oracle Big Data Appliance and Oracle Big Data SQL in combination, and why this really matters. Big Data SQL brings you the unique ability to analyze data across the entire spectrum of systems: NoSQL, Hadoop and Oracle Database.
The document provides details about the candidate's experience and skills in big data technologies like Hadoop, Hive, Pig, Spark, Sqoop, Flume, and HBase. The candidate has over 1.5 years of experience working with these technologies. He has installed and configured Hadoop clusters across different versions, including the MapR distribution. He has in-depth knowledge of Hadoop architecture and frameworks and has performed various tasks in a Hadoop environment, including configuring Hive, writing Pig scripts, using Sqoop and Flume, and writing Spark programs.
25+ years as a seasoned data professional in building and managing practices, Global Delivery in Big Data Analytics, Big Data migration from on-premise to GCP and Azure, EDW & BI, business analytics, SAP HANA, predictive analytics, data QA, automation of solutions, Big Data frameworks & methodologies, and data products development.
Whither the Hadoop Developer Experience, June Hadoop Meetup, Nitin Motgi - Felicia Haggarty
The document discusses challenges with building operational data applications on Hadoop and introduces the Cask Data Application Platform (CDAP) as a solution. It provides an agenda that covers data applications, challenges, CDAP motivation and goals, use cases, and an introduction and architecture overview of CDAP. The document aims to demonstrate how CDAP provides a unified platform that simplifies application development and lifecycle while supporting reusable data and processing patterns.
Presenter: Sumit Sarkar
The CMO will overtake the CIO on technology spend by 2017. We’re entering a new era of IT and sales/marketing collaboration. Learn about the latest methods for accessing data for deeper analytics from sales and marketing cloud applications across Eloqua, Marketo, Google Analytics, Salesforce and more.
What it takes to bring Hadoop to a production-ready state - ClouderaUserGroups
While Hadoop may be a hot topic and is probably the buzziest big data term, the fact is that many Hadoop projects get stuck in pilot mode. We hear a number of reasons for this.
• “It’s too complicated.”
• “I don’t have the right resources.”
• “Security and compliance are never going to approve this.”
This session digs deep into why certain projects seem destined to remain in development. We’ll also cover what it takes to bring Hadoop to a production-ready state and convince management that it’s time to start using Hadoop to store and analyze real business data.
Government and Education Webinar: Improving Application Performance - SolarWinds
Learn about SolarWinds® systems management tools to monitor infrastructure and help improve application performance for your organization. SolarWinds systems management tools support on-premises, cloud-based, and hybrid applications.
Webinar - Delivering Enhanced Message Processing at Scale With an Always-on D... - DataStax
The document discusses Surescripts' implementation of DataStax Enterprise (DSE) to power its cloud applications. Surescripts processes millions of healthcare messages daily and needed a scalable data platform. It transitioned from Oracle to DSE for its persistent tier to gain horizontal scalability and high availability. Initial results have shown improved performance and efficiency over Oracle. Surescripts aims to create logical data centers spanning physical sites using DSE replication to enhance reliability.
During this webinar, we will review best practices and lessons learned from working with large and mid-size companies on their deployment of PostgreSQL. We will explore the practices that helped industry leaders move through these stages quickly, and get as much value out of PostgreSQL as possible without incurring undue risk.
We have identified a set of levers that companies can use to accelerate their success with PostgreSQL:
- Application Tiering
- Collaboration between DBAs and Development Teams
- Evangelizing
- Standardization and Automation
- Balance of Migration and New Development
Computing big data for evolving digital business processes demands a variety of computation techniques and engines (SQL, OLAP, time-series, graph, document store) working within a unified framework. A simple architecture for data transformations that ensures security, governance, and operational administration is a critical component of enterprise production environments supporting day-to-day business processes. In this session, you will learn about best practices and the critical components needed to ensure business value from the latest production deployments. Hear how existing customers are using SAP Vora and the value they have achieved so far with this in-memory engine for distributed data processing. The session provides you with a clear understanding of how SAP Vora and open source components like Apache Hadoop and Apache Spark offer an architecture that supports a wide variety of use cases and industries. You will also receive useful pointers to development resources, test-drive demos, and general documentation.
Similar to Journey to SAS Analytics Grid with SAS, R, Python
What serverless means for enterprise apps - Sumit Sarkar
There’s a new approach to app development ripe with misconceptions and more buzzwords to translate to business sponsors. Industry analysts call it serverless, but it’s also known as backend as a service (BaaS), function as a service (FaaS), cloud-native architectures, or microservices—just to name a few. Whatever you call it, this approach is giving developers new freedom to focus on frontend functionality and deliver better, more innovative user experiences and ultimately establish value faster. Let’s discuss the pros and cons of serverless in enterprise architectures.
Digitize Enterprise Assets for Mobility - Sumit Sarkar
Demand for digital experiences such as mobility are putting pressure on enterprise teams and systems. Many of these systems are deployed on servers and not engineered to scale. Mobility projects across web/mobile, voice, chat and AR are increasingly running on serverless cloud native architectures. But how can organizations meet the customer demands for digital experiences on enterprise systems such as ERP systems or enterprise APIs? Join Progress Kinvey to explore four options to digitize enterprise systems to deliver experiences for the connected world.
Salesforce shops, including our own, have been eagerly anticipating external object support in reports. Starting in Winter ’17, you can build native reports with on-demand access to external data sources such as Oracle, SQL Server, Greenplum, Amazon Redshift, IBM DB2 or Hadoop big data platforms. External objects are powered by Salesforce Connect and provide clicks-not-code data access for admins, devs and general users. But is all of this too good to be true?
During this webinar, you’ll learn:
- External Objects and their new capabilities for Reporting and Wave trending in Winter ‘17
- How to set up a Salesforce report with external data sources
- How to produce OData from warehouses, marts, lakes or other reporting systems
- Report considerations and limitations with Salesforce Connect
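To make the OData bullet concrete: OData exposes query capabilities through standard URL options such as `$filter`, `$select` and `$top`, and a consumer like Salesforce Connect simply issues URLs of that shape against whatever endpoint the warehouse or mart publishes. A minimal Python sketch of building such URLs (the service root, entity set and field names here are hypothetical, not from the webinar):

```python
from urllib.parse import urlencode, quote

def odata_query_url(service_root, entity_set, filter_expr=None,
                    select=None, top=None):
    """Build an OData v4 query URL from standard query options."""
    options = {}
    if filter_expr:
        options["$filter"] = filter_expr
    if select:
        options["$select"] = ",".join(select)
    if top is not None:
        options["$top"] = str(top)
    url = f"{service_root.rstrip('/')}/{quote(entity_set)}"
    if options:
        # keep OData tokens ($ , ( ) ') readable in the query string
        url += "?" + urlencode(options, safe="$,()'", quote_via=quote)
    return url

# Hypothetical invoice feed, in the spirit of the Lightning Connect POC below
url = odata_query_url(
    "https://example.com/odata/v4",
    "Invoices",
    filter_expr="Amount gt 1000",
    select=["InvoiceId", "Amount", "Status"],
    top=25,
)
```

The resulting URL carries `$filter=Amount%20gt%201000&$select=InvoiceId,Amount,Status&$top=25`, which any OData-aware client can consume without custom code on the Salesforce side.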
This document summarizes a 5-day proof of concept (POC) for integrating invoice data from an on-premise system into Salesforce using Lightning Connect. On day 1, the author requested connection information and learned invoices were stored in both the ERP and a data warehouse. On day 2, they planned the data model relationships and learned to consult data experts. On day 3, they set up developer and trial accounts to produce OData. On day 4, they encountered an issue building a related list and got help from an online community. On day 5, Lightning Connect was enabled and they migrated the POC to a new sandbox for testing. Future projects were discussed to integrate additional systems using Lightning Connect.
Salesforce Integration for Bonterra Impact Management (fka Social Solutions A... - Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on integration of Salesforce with Bonterra Impact Management.
Interested in deploying an integration with Salesforce for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
5th LF Energy Power Grid Model Meet-up Slides - DanBrown980551
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Microsoft Teams session or in person at TU/e, located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
- Insightful presentations covering two practical applications of the Power Grid Model.
- An update on the latest advancements in Power Grid Model technology during the first and second quarters of 2024.
- An interactive brainstorming session to discuss and propose new feature requests.
- An opportunity to connect with fellow Power Grid Model enthusiasts and users.
A Comprehensive Guide to DeFi Development Services in 2024 - Intelisync
DeFi represents a paradigm shift in the financial industry. Instead of relying on traditional, centralized institutions like banks, DeFi leverages blockchain technology to create a decentralized network of financial services. This means that financial transactions can occur directly between parties, without intermediaries, using smart contracts on platforms like Ethereum.
In 2024, we are witnessing an explosion of new DeFi projects and protocols, each pushing the boundaries of what’s possible in finance.
In summary, DeFi in 2024 is not just a trend; it’s a revolution that democratizes finance, enhances security and transparency, and fosters continuous innovation. As we proceed through this presentation, we'll explore the various components and services of DeFi in detail, shedding light on how they are transforming the financial landscape.
At Intelisync, we specialize in providing comprehensive DeFi development services tailored to meet the unique needs of our clients. From smart contract development to dApp creation and security audits, we ensure that your DeFi project is built with innovation, security, and scalability in mind. Trust Intelisync to guide you through the intricate landscape of decentralized finance and unlock the full potential of blockchain technology.
Ready to take your DeFi project to the next level? Partner with Intelisync for expert DeFi development services today!
Monitoring and Managing Anomaly Detection on OpenShift.pdf - Tosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
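The fundamentals from topic 1 can be illustrated without any edge tooling at all. Here is a minimal z-score detector over a batch of sensor readings (the data and threshold are toy values for illustration, not from the tutorial):

```python
import math

def detect_anomalies(readings, threshold=3.0):
    """Flag readings more than `threshold` standard deviations from the mean."""
    n = len(readings)
    mean = sum(readings) / n
    variance = sum((x - mean) ** 2 for x in readings) / n
    std = math.sqrt(variance)
    if std == 0:
        return []  # all readings identical: nothing can be anomalous
    return [x for x in readings if abs(x - mean) / std > threshold]

# Mostly steady sensor values with one obvious outlier
sensor = [20.1, 19.8, 20.3, 20.0, 19.9, 20.2, 42.7]
print(detect_anomalies(sensor, threshold=2.0))  # → [42.7]
```

A production edge deployment would apply the same idea to a rolling window of streaming data (e.g. Kafka messages) rather than a fixed batch, and would typically use a trained model instead of a fixed z-score rule.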
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/temporal-event-neural-networks-a-more-efficient-alternative-to-the-transformer-a-presentation-from-brainchip/
Chris Jones, Director of Product Management at BrainChip , presents the “Temporal Event Neural Networks: A More Efficient Alternative to the Transformer” tutorial at the May 2024 Embedded Vision Summit.
The expansion of AI services necessitates enhanced computational capabilities on edge devices. Temporal Event Neural Networks (TENNs), developed by BrainChip, represent a novel and highly efficient state-space network. TENNs demonstrate exceptional proficiency in handling multi-dimensional streaming data, facilitating advancements in object detection, action recognition, speech enhancement and language model/sequence generation. Through the utilization of polynomial-based continuous convolutions, TENNs streamline models, expedite training processes and significantly diminish memory requirements, achieving notable reductions of up to 50x in parameters and 5,000x in energy consumption compared to prevailing methodologies like transformers.
Integration with BrainChip’s Akida neuromorphic hardware IP further enhances TENNs’ capabilities, enabling the realization of highly capable, portable and passively cooled edge devices. This presentation delves into the technical innovations underlying TENNs, presents real-world benchmarks, and elucidates how this cutting-edge approach is positioned to revolutionize edge AI across diverse applications.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
Building Production Ready Search Pipelines with Spark and Milvus - Zilliz
Spark is a widely used ETL tool for processing, indexing, and ingesting data into the serving stack for search. Milvus is a production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to the Milvus vector database for search serving.
Dandelion Hashtable: beyond billion requests per second on a commodity server - Antonios Katsarakis
This slide deck presents DLHT, a concurrent in-memory hashtable. Despite efforts to optimize hashtables that go as far as sacrificing core functionality, state-of-the-art designs still incur multiple memory accesses per request and block request processing in three cases. First, most hashtables block while waiting for data to be retrieved from memory. Second, open-addressing designs, which represent the current state of the art, either cannot free index slots on deletes or must block all requests to do so. Third, index resizes block every request until all objects are copied to the new index. Defying folklore wisdom, DLHT forgoes open addressing and adopts a fully-featured and memory-aware closed-addressing design based on bounded cache-line-chaining. This design (1) offers lock-free index operations and deletes that free slots instantly, (2) completes most requests with a single memory access, (3) utilizes software prefetching to hide memory latencies, and (4) employs a novel non-blocking and parallel resizing. On a commodity server and a memory-resident workload, DLHT surpasses 1.6B requests per second and provides 3.5x (12x) the throughput of the state-of-the-art closed-addressing (open-addressing) resizable hashtable on Gets (Deletes).
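For readers unfamiliar with the terminology: closed addressing means each bucket chains its own entries, so a delete can free a slot immediately instead of leaving a tombstone as open addressing must. A toy Python sketch of that basic idea (illustrative only; it has none of DLHT's lock-freedom, prefetching, or cache-line bounding):

```python
class ChainedHashTable:
    """Toy closed-addressing hashtable: each bucket chains its own entries,
    so a delete frees the slot immediately (no tombstones)."""

    def __init__(self, num_buckets=8):
        self.buckets = [[] for _ in range(num_buckets)]

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)  # overwrite existing entry
                return
        bucket.append((key, value))

    def get(self, key):
        for k, v in self._bucket(key):
            if k == key:
                return v
        return None

    def delete(self, key):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:
                del bucket[i]  # slot is reusable right away
                return True
        return False
```

DLHT's contribution is making this style of design fast and concurrent (bounded chains that fit cache lines, lock-free operations, parallel resizing), which the sketch above makes no attempt at.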
TrustArc Webinar - 2024 Global Privacy Survey - TrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application... - Alex Pruden
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure. In this work we present LatticeFold, the first lattice-based folding protocol based on the Module SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme. The difficulty is ensuring that extracted witnesses are low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low norm no matter how many rounds of folding are used. Our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing post-quantum security.
Paper Link: https://eprint.iacr.org/2024/257
In the realm of cybersecurity, offensive security practices act as a critical shield. By simulating real-world attacks in a controlled environment, these techniques expose vulnerabilities before malicious actors can exploit them. This proactive approach allows manufacturers to identify and fix weaknesses, significantly enhancing system security.
This presentation delves into the development of a system designed to mimic Galileo's Open Service signal using software-defined radio (SDR) technology. We'll begin with a foundational overview of both Global Navigation Satellite Systems (GNSS) and the intricacies of digital signal processing.
The presentation culminates in a live demonstration. We'll showcase the manipulation of Galileo's Open Service pilot signal, simulating an attack on various software and hardware systems. This practical demonstration serves to highlight the potential consequences of unaddressed vulnerabilities, emphasizing the importance of offensive security practices in safeguarding critical infrastructure.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack - shyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Your One-Stop Shop for Python Success: Top 10 US Python Development Providers - akankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
Journey to SAS Analytics Grid with SAS, R, Python
1. Journey to SAS
Analytics Grid with SAS,
R, Python
Benjamin Zenick, Chief Operating Officer -
Zencos
Sumit Sarkar, Chief Data Evangelist -
Progress DataDirect
7. The Evolution of Analytics
Businesses started with large and expensive central mainframes
– Mainframes were limited by early storage and processing technology
– Connectivity and user interfaces to data were limited by “dumb” terminals
– Expansion was limited by proprietary chassis design
– Connecting multiple mainframes was expensive, challenging, or impossible
8. Analytics Today
• Modernization moved away from mainframes
• Moved toward server/client solutions, workstations, storage appliances, and networking
• Shortcomings of centralized datacenters: administrative and performance bottlenecks
12. Signs your organization is ready to consider an HPC or Grid solution…
• Decrease in cost benefits
• Current model doesn’t scale well
• Massively Parallelized Processing
• Administrative needs continue to grow and grow
• High(er) Availability is possible
• Faster (Disaster) Recovery
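The "Massively Parallelized Processing" sign above is the core idea behind grid computing: split a large job into independent chunks and fan them out to workers, then combine the results. A toy Python sketch with a local process pool standing in for grid nodes (illustrative only; this is not SAS Grid, and the workload is a made-up sum of squares):

```python
from concurrent.futures import ProcessPoolExecutor

def score_chunk(chunk):
    """Stand-in for a per-node analytics task (here: a sum of squares)."""
    return sum(x * x for x in chunk)

def split_chunks(data, workers):
    """Split the data into one independent chunk per worker."""
    size = max(1, len(data) // workers)
    return [data[i:i + size] for i in range(0, len(data), size)]

def grid_score(data, workers=4):
    # Fan the chunks out to a pool of worker processes, then combine results
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(score_chunk, split_chunks(data, workers)))

if __name__ == "__main__":
    print(grid_score(list(range(1000))))  # → 332833500
```

A real grid adds exactly what this toy lacks: scheduling across machines, high availability when a node fails, and shared storage so any node can pick up any chunk.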
Zencos capabilities prepared for TEST Co.
15. Best Practices
• Preparation
• Technologies
• Plan
• Time
• Expectations
• Team
• Transition
• Users
• Support
• Goal Alignment
16. Lessons Learned
• Invest in a meaningful assessment
• Plan to purchase and build Test and Disaster Recovery
environments
• Understand the applications and use cases
• Outline support model for legacy projects
• Consider your post-implementation needs
• Expect the unexpected
Can Your Current Infrastructure Support High-Performance Analytics and Data Science?
Big data, compliance and a highly skilled workforce are driving organizations to transform their current analytical infrastructure to deliver enterprise computing environments that can support the latest in data science and analytics practices. SAS remains a popular choice for statistical programming languages, but there is growing demand for R and Python. Data engineers are now being tasked to deliver scalable and highly available computing resources to support analytics for a growing number of users and increasing data volumes while maintaining security for their customers.
Join this webinar to learn:
- Differences between traditional and Grid deployments for SAS
- Best practices and lessons learned in deploying an Analytics Grid
- How to deliver an open analytics strategy for SAS, R, Python and others
- Popular data sources for advanced analytics
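One common pattern behind an open analytics strategy is exchanging datasets through neutral formats that SAS, R and Python can all read, such as CSV. A minimal Python sketch of the consuming side (the file layout and column names are hypothetical, not from the webinar):

```python
import csv
import io

# A small dataset as it might be exported from SAS as CSV (hypothetical columns)
exported = io.StringIO(
    "customer_id,segment,spend\n"
    "101,A,250\n"
    "102,B,410\n"
    "103,A,99\n"
)

# Aggregate spend by segment; R or SAS could consume the identical CSV file
totals = {}
for row in csv.DictReader(exported):
    totals[row["segment"]] = totals.get(row["segment"], 0.0) + float(row["spend"])

print(totals)  # → {'A': 349.0, 'B': 410.0}
```

The same aggregation could be written as a `PROC MEANS` step in SAS or an `aggregate()` call in R against the same file, which is precisely what makes the neutral format the integration point.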
1) Join Audio: there are two ways to do so. To use VoIP, click on “Mic & Speakers”; to use your telephone, click on “Telephone” and dial in using the numbers and information provided.
2) All lines are muted for today’s webinar. We do plan to have a live Q&A session at the end of the presentations. However, if you have a question at any time during this webinar, simply submit it via the “Question” section of the webinar interface located to the right of your screen; we will collect all questions through this “Question Window”.
Final note: we are recording today’s webinar.