Rich media is exploding all around us. From our personal usage to retailers monitoring store traffic to optimize associate placement, applications of rich media are wide and growing. Despite this pervasive usage, enterprises have had limited choice of generally available tools to analyze rich media. In this session we will look at leveraging deep learning algorithms for rich media analysis and provide a practical, hands-on example of image recognition using Apache Hadoop and Spark.
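The common pattern for this kind of workload is to distribute inference over a cluster while loading the model only once per partition. The sketch below illustrates the idea in plain Python; `load_model` and its `predict` method are hypothetical stand-ins for whatever deep-learning library the cluster actually ships, and the stub returns canned labels so the sketch runs end to end.

```python
def load_model():
    """Hypothetical stand-in for loading a pre-trained deep-learning model.

    On a real cluster this would deserialize network weights on the worker;
    here it returns a trivial classifier so the sketch is runnable.
    """
    class StubModel:
        def predict(self, path):
            # A real model would decode the image and run a forward pass.
            return "cat" if "cat" in path else "unknown"
    return StubModel()

def classify_partition(image_paths):
    # Load the model once per partition rather than once per image,
    # amortizing the expensive initialization cost across the partition.
    model = load_model()
    for path in image_paths:
        yield (path, model.predict(path))

# With a live SparkContext `sc`, the same function plugs straight into an RDD:
#   sc.textFile("hdfs:///images/manifest.txt") \
#     .mapPartitions(classify_partition) \
#     .saveAsTextFile("hdfs:///images/labels")
print(list(classify_partition(["img/cat_01.jpg", "img/dog_02.jpg"])))
```

The per-partition model load is the design point that matters: `mapPartitions` gives the function an iterator over a whole partition, so initialization cost is paid once per task instead of once per image.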
Supporting Financial Services with a More Flexible Approach to Big Data (Hortonworks)
Financial services companies can reap tremendous benefits from 'Big Data' and they have moved quickly to deploy it. But these companies also place heavy demands on 'Big Data' infrastructure for flexibility, reliability and performance. In this webinar, Hortonworks joins WANDisco to look at three examples of using 'Big Data' to get a more comprehensive view of customer behavior and activity in the banking and insurance industries. Then we'll pull out the common threads from these examples, and see how a flexible next-generation Hadoop architecture lets you get a step up on improving your business performance. Join us to learn:
How to leverage data from across an entire global enterprise
How to analyze a wide variety of structured and unstructured data to get quick, meaningful answers to critical questions
What industry leaders have put in place
A Comprehensive Approach to Building your Big Data - with Cisco, Hortonworks ... (Hortonworks)
Companies in every industry look for ways to explore new data types and large data sets that were previously too big to capture, store and process. They need to unlock insights from data such as clickstream, geo-location, sensor, server log, social, text and video data. However, becoming a data-first enterprise comes with many challenges.
Join this webinar organized by three leaders in their respective fields and learn from our experts how you can accelerate the implementation of a scalable, cost-efficient and robust Big Data solution. Cisco, Hortonworks and Red Hat will explore how new data sets can enrich existing analytic applications with new perspectives and insights and how they can help you drive the creation of innovative new apps that provide new value to your business.
In 2012, we released Hortonworks Data Platform powered by Apache Hadoop and established partnerships with major enterprise software vendors including Microsoft and Teradata that are making enterprise ready Hadoop easier and faster to consume. As we start 2013, we invite you to join us for this live webinar where Shaun Connolly, VP of Strategy at Hortonworks, will cover the highlights of 2012 and the road ahead in 2013 for Hortonworks and Apache Hadoop.
2015-02-12 Talend and Hortonworks Webinar: Challenges to Hadoop Adoption (Hortonworks)
Hadoop is no longer optional. Companies of all sizes are in various phases of their own Big Data journey. Whether you are just starting to explore the platform or have multiple clusters up and running, everyone is presented with a similar challenge - developing their internal skillset. Hadoop specialists are hard to find. Hand coding is too prone to error when it comes to storing, integrating or analyzing your data. However, it doesn’t need to be this difficult.
In this recorded webinar, Talend and Hortonworks help you learn how to unify all your data in Hadoop, with no specialized Big Data skills.
Find the recording here: www.talend.com/resources/webinars/challenges-to-hadoop-adoption-if-you-can-dream-it-you-can-build-it
This webinar covers: how Hadoop opens a new world of analytic applications, how to bridge the skills gap with our Big Data solutions, and a real-world, simple technical demo.
Discover HDP 2.1: Using Apache Ambari to Manage Hadoop Clusters (Hortonworks)
Apache Ambari is a single framework for IT administrators to provision, manage and monitor a Hadoop cluster. Hortonworks Data Platform 2.1 includes Apache Ambari 1.6.1.
This presentation gives an overview of Apache Ambari and discusses recent Ambari innovations for operating at scale and extensibility, including:
+ Maintenance mode
+ Rolling restarts
+ Ambari Blueprints
+ Ambari Stacks
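Of the features listed above, Ambari Blueprints are the most directly scriptable: a blueprint is a JSON document describing a stack version and the components assigned to each host group, which is registered with Ambari's REST API. The sketch below shows a minimal, illustrative blueprint as a Python dict; the component names follow Ambari's conventions, but a working cluster definition would list considerably more services.

```python
import json

# Minimal illustrative Ambari blueprint: one master host group and a
# three-node worker group. A production blueprint would include many more
# components (ZooKeeper, Hive Metastore, Ambari agents' client components...).
blueprint = {
    "Blueprints": {"stack_name": "HDP", "stack_version": "2.1"},
    "host_groups": [
        {"name": "master", "cardinality": "1",
         "components": [{"name": "NAMENODE"}, {"name": "RESOURCEMANAGER"}]},
        {"name": "workers", "cardinality": "3",
         "components": [{"name": "DATANODE"}, {"name": "NODEMANAGER"}]},
    ],
}

# The JSON form of this dict would be registered with the Ambari server,
# roughly: POST http://<ambari-host>:8080/api/v1/blueprints/small-cluster
print(json.dumps([hg["name"] for hg in blueprint["host_groups"]]))
```

Once registered, a separate cluster-creation request maps concrete hostnames onto the blueprint's host groups, which is what makes blueprints useful for repeatable, automated provisioning.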
Enabling the Real-Time Analytical Enterprise (Hortonworks)
Combining IoT, customer experience and real-time enterprise data within Hadoop. What if you could derive real-time insights using ALL of your data? Join us for this webinar and learn how companies are combining “new” real-time data sources (e.g. IoT, social, web logs) with continuously updated enterprise data from SAP and other enterprise transactional systems, providing deep and up-to-the-second analytical insights. This presentation will include a demonstration of how this can be achieved quickly, easily and affordably by utilizing a joint solution from Attunity and Hortonworks.
Enterprise Hadoop with Hortonworks and Nimble Storage (Hortonworks)
Join us to learn how Hortonworks Data Platform and Nimble Storage provide an enterprise-ready data platform for multi-workload data processing. HDP supports an array of processing methods — from batch through interactive to real-time, with key capabilities required of an enterprise data platform — spanning Governance, Security and Operations. Nimble Storage provides the performance, capacity, and availability for HDP and allows you to take advantage of Hadoop with minimal changes to existing data architectures and skillsets.
Hortonworks - What's Possible with a Modern Data Architecture? (Hortonworks)
This is Mark Ledbetter's presentation from the September 22, 2014 Hortonworks webinar “What’s Possible with a Modern Data Architecture?” Mark is vice president for industry solutions at Hortonworks. He has more than twenty-five years’ experience in the software industry, with a focus on retail and supply chain.
C-BAG Big Data Meetup Chennai, Oct. 29, 2014: Hortonworks and Concurrent on Casca... (Hortonworks)
Big Data is moving to the next level of maturity and it’s all about the applications. Dhruv Kumar, one of the minds behind Cascading, the most widely used and deployed development framework for building Big Data applications, will discuss how Cascading can enable developers to accelerate the time to market for their data applications, from development to production. In this session, Dhruv will introduce how to easily and reliably develop, test, and scale your data applications and then deploy them on Hadoop and Hortonworks Data Platform. He will show a demo using the Hortonworks Sandbox and Cascading. Recording is at
https://hortonworks.webex.com/hortonworks/lsr.php?RCID=e5582bcbc0516d35fc2dcf0bce86146e
Predictive Analytics and Machine Learning…with SAS and Apache Hadoop (Hortonworks)
In this interactive webinar, we'll walk through use cases showing how you can use advanced analytics like SAS Visual Statistics and SAS In-Memory Statistics with the Hortonworks Data Platform (HDP) to reveal insights in your big data and redefine how your organization solves complex problems.
Hortonworks Data in Motion Series, Part 4 (Hortonworks)
Learn how real-world enterprises leverage Hortonworks DataFlow (Apache NiFi) to create real-time data flows in record time, enabling new business opportunities, improving customer retention, and accelerating big data projects from months to minutes through increased efficiency and reduced costs.
On-Demand webinar: http://hortonworks.com/webinar/paradigm-shift-business-usual-real-time-dataflows-record-time/
Predicting Customer Experience through Hadoop and Customer Behavior Graphs (Hortonworks)
Enhancing the customer experience has become essential for communication service providers to effectively manage customer churn and build strong, long-lasting relationships with their customers. This has become increasingly challenging as customer interactions occur across multiple channels. Understanding customer behavior and how it applies across channels is the key to ensuring the best level of experience is achieved by each customer.
In this webinar, Hortonworks and Apigee discuss how service providers can capture and visualize customer behavior across customer interaction points like call center events (IVR and chat) and combine it with network data, to predict customer calls and patterns of digital channel abandonment using Hadoop and predictive analysis and visualization tools.
We will identify ways to develop a 360 degree view across a customer’s household through an HDP Data Lake and visualize customer interaction patterns and predict expected behavior using Apigee Insights to identify and initiate the Next-Best-Action for a customer to ensure a superior level of customer experience.
Combine SAS High-Performance Capabilities with Hadoop YARN (Hortonworks)
What if you could assemble all your data in one system and run your critical analytic applications in parallel, regardless of the format, age or location of the data? Today, thanks to the economics of Apache Hadoop-based data platforms, in particular YARN, this is possible.
SAS applications bring highly advanced, in-memory analytic processing to the data in Hadoop and enable a rich set of additional use cases with high performance analytic needs. Join this webinar and learn how, by combining their solutions, Hortonworks and SAS offer more flexibility to choose best of breed SAS HPA and LASR analytic applications in conjunction with trusted Hadoop workloads. Hear how it is possible to leverage Hadoop clusters to extend the power of SAS analytics.
You will hear directly from our experts how SAS HPA and LASR have been integrated with Hadoop YARN to:
Enable a modern data architecture without the need for fragmented processing clusters for each workload.
Ensure low-latency local data access directly from the data nodes.
Create a unified resource management view for managing SAS HPA, LASR and HDP resources.
Speakers:
Arun Murthy, Founder and Architect at Hortonworks
Arun is an Apache Hadoop PMC member and has been a full-time contributor to the project since its inception in 2006. He is also the lead of the MapReduce project and has focused on building NextGen MapReduce (YARN). Prior to co-founding Hortonworks, Arun was responsible for all MapReduce code and configuration deployed across the 42,000+ servers at Yahoo! He jointly holds the current world sorting record using Apache Hadoop.
Paul Kent, Vice President Big Data at SAS
Paul Kent is Vice President of Big Data initiatives at SAS. He spends his time between Customers, Partners and the Research & Development teams discussing, evangelizing and developing software at the confluence of big data and high performance computing. A datacenter rack full of current-generation 64bit x86 processors represents a very large aggregate memory space, thousands of threads and plentiful IO that can be harnessed to solve problems at a much larger scale than we have traditionally been accustomed to.
Discover HDP 2.2: Apache Falcon for Hadoop Data Governance (Hortonworks)
Hortonworks Data Platform 2.2 includes Apache Falcon for Hadoop data governance. In this 30-minute webinar, we discussed why the enterprise needs Falcon for governance, and demonstrated data pipeline construction, policies for data retention and management with Ambari. We also discussed new innovations including: integration of user authentication, data lineage, an improved interface for pipeline management, and the new Falcon capability to establish an automated policy for cloud backup to Microsoft Azure or Amazon S3.
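Falcon expresses the retention policies mentioned above as XML entity definitions. The fragment below is an illustrative sketch only, not a complete entity: the feed name, cluster name, dates and paths are made up, and a real feed definition requires additional elements (locations, ACL, schema) that Falcon's entity specification defines.

```xml
<!-- Illustrative sketch of a Falcon feed entity with a retention policy.
     All names and dates are hypothetical; a complete entity needs more
     elements than shown here. -->
<feed name="clickstream" xmlns="uri:falcon:feed:0.1">
  <frequency>hours(1)</frequency>
  <clusters>
    <cluster name="primary" type="source">
      <validity start="2014-01-01T00:00Z" end="2016-01-01T00:00Z"/>
      <!-- evict instances of this feed older than 30 days -->
      <retention limit="days(30)" action="delete"/>
    </cluster>
  </clusters>
</feed>
```

The point of the sketch is the `retention` element: once the policy is declared, Falcon enforces eviction automatically rather than leaving cleanup to ad hoc scripts.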
Introduction to the Hortonworks YARN Ready Program (Hortonworks)
The recently launched YARN Ready Program will accelerate multi-workload Hadoop in the Enterprise. The program enables developers to integrate new and existing applications with YARN-based Hadoop. We will cover:
--the program and its benefits
--why it is important to customers
--tools and guides to help you get started
--technical resources to support you
--marketing recognition you can leverage
Discover HDP 2.1: Apache Falcon for Data Governance in Hadoop (Hortonworks)
Beginning with HDP 2.1, Hortonworks Data Platform ships with Apache Falcon for Hadoop data governance. Himanshu Bari, Hortonworks senior product manager, and Venkatesh Seetharam, Hortonworks co-founder and committer to Apache Falcon, lead this 30-minute webinar, including:
+ Why you need Apache Falcon
+ Key new Falcon features
+ Demo: Defining data pipelines with replication; policies for retention and late data arrival; managing Falcon server with Ambari
In 2017, more and more corporations are looking to reduce operational overhead in their enterprise data warehouse (EDW) installations. Hortonworks has just launched the industry's first turnkey EDW optimization solution together with our partners Syncsort and AtScale. Join Hortonworks' CTO Scott Gnau to learn more about this exciting solution and its three use cases.
Discover HDP 2.1: Apache Solr for Hadoop Search (Hortonworks)
Apache Solr is the open source platform for searching data stored in Hadoop. Solr powers search on many of the world's largest Internet sites, enabling powerful full-text search and near real-time indexing. Whether users search for tabular, text, geo-location or sensor data in Hadoop, they find it quickly with Apache Solr. Hortonworks Data Platform 2.1 includes Apache Solr.
In this deck from their 30-minute webinar, Rohit Bakhshi, Hortonworks product manager, and Paul Codding, Hortonworks solution engineer, describe how Solr works within HDP's YARN-based architecture.
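Querying Solr over HTTP is simple enough to show inline. The helper below builds a request URL for Solr's standard `/select` handler using the common `q`, `rows` and `wt` query parameters; the host, collection name and query string are illustrative placeholders, not anything from the webinar.

```python
from urllib.parse import urlencode

def solr_select_url(base, collection, query, rows=10):
    """Build a URL for Solr's /select search handler.

    q (the query), rows (result count) and wt (response format) are
    standard Solr common query parameters.
    """
    params = {"q": query, "rows": rows, "wt": "json"}
    return f"{base}/{collection}/select?{urlencode(params)}"

# Hypothetical host and collection, searching log records by level:
url = solr_select_url("http://solr-host:8983/solr", "logs", "level:ERROR")
print(url)
```

An HTTP GET against such a URL returns matching documents as JSON, which is what makes Solr easy to wire into dashboards and applications on top of data indexed from Hadoop.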
Hortonworks Protegrity Webinar: Leverage Security in Hadoop Without Sacrifici... (Hortonworks)
As more data is imported into Hadoop Data Lakes, how can we best secure sensitive data? Recording is at: https://www.brighttalk.com/webcast/9573/171957
What security options are available and what kind of best practices should be implemented? Join our two speakers as they discuss securing HDP data lakes to leverage security in Hadoop without sacrificing usability. Presenters: Vincent Lam, Protegrity - Syed Mahmood, Hortonworks.
You’ll learn about:
· The 5 Pillars of Security for Hadoop
· Open Source HDP Security
· How Hortonworks leverages Protegrity to jointly offer the most robust Hadoop protection available
· The benefits and differences of data protection including tokenization, encryption, and masking
· Leveraging consistent security across Hadoop and beyond for protection of data across its lifecycle
Hive on Spark is Blazing Fast, Or Is It? (Hortonworks)
This presentation was given at the Strata + Hadoop World, 2015 in San Jose.
Apache Hive is the most popular and most widely used SQL solution for Hadoop. To keep pace with Hadoop’s increasingly vital role in the Enterprise, Hive has transformed from a batch-only, high-latency system into a modern SQL engine capable of both batch and interactive queries over large datasets. Hive’s momentum is accelerating: With Spark integration and a shift to in-memory processing on the horizon, Hive continues to expand the boundaries of Big Data.
In this talk the speakers examined Hive performance, past, present and future. In particular they looked at Hive’s origins as a petabyte scale SQL engine.
Through some numbers and graphs, they showed how Hive became 100x faster by moving beyond MapReduce, by vectorizing execution and by introducing a cost-based optimizer.
They detailed and discussed the challenges of scalable SQL on Hadoop.
They looked into Hive’s sub-second future, powered by LLAP and Hive on Spark, and showed just how fast Hive on Spark really is.
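The optimizations the talk credits for the speedup correspond to ordinary Hive session settings. As a rough illustration (property names as in recent Hive releases; defaults vary by version, so treat this as a sketch rather than a tuning guide):

```sql
-- Vectorized execution: process rows in batches instead of one at a time
SET hive.vectorized.execution.enabled=true;
-- Cost-based optimizer (built on Apache Calcite)
SET hive.cbo.enable=true;
-- Execution engine: moving beyond MapReduce is where the largest
-- speedups come from (tez here; spark where supported)
SET hive.execution.engine=tez;
```

Because these are session settings, the same queries can be benchmarked with and without each optimization, which is how comparisons like the "100x faster" figure above are typically produced.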
Implementing a Data Lake with Enterprise Grade Data Governance (Hortonworks)
Hadoop provides a powerful platform for data science and analytics, where data engineers and data scientists can leverage myriad data from external and internal sources to uncover new insights. Such power also presents new challenges: on the one hand, the business wants more and more self-service; on the other, IT is trying to keep up with the demand for data while maintaining architecture and data governance standards.
In this webinar, Andrew Ahn, Data Governance Initiative Product Manager at Hortonworks, will address the gaps and offer best practices in providing end-to-end data governance in HDP. Andrew Ahn will be followed by Oliver Claude of Waterline Data, who will share a case study of how Waterline Data Inventory works with HDP in the Modern Data Architecture to automate the discovery of business and compliance metadata, data lineage, as well as data quality metrics.
Simplify and Secure your Hadoop Environment with Hortonworks and CentrifyHortonworks
Join this webinar to explore Hadoop security challenges and trends, learn how to simplify the connection of your Hortonworks Data Platform to your existing Active Directory infrastructure, and hear real-world examples of organizations that are achieving the following benefits:
- Secured Hortonworks environments thanks to Active Directory infrastructure for identity and authentication.
- Increased productivity and security via single sign-on for IT admins and Hadoop users.
- Least privilege and session monitoring for privileged access to Hortonworks clusters.
Webinar URL: http://hortonworks.com/webinar/simplify-and-secure-your-hadoop-environment-with-hortonworks-and-centrify/
Faster Batch Processing with Cloudera 5.7: Hive-on-Spark is ready for productionCloudera, Inc.
It’s no secret that Apache Spark is becoming the successor to MapReduce for data processing in Hadoop. With its easy development, flexible API, and performance benefits, Spark is a powerful data processing engine that has quickly gained popularity within the community. On the other hand, Hive continues to be the most widely used data warehouse/ETL engine, with large-scale adoption across enterprises. Therefore, it’s imperative to enable Spark as the underlying execution engine for Hive to seamlessly allow existing and future Hive workloads to leverage the advantages of Spark.
With the recent release of Cloudera 5.7, we have delivered on this goal by adding support for Hive-on-Spark. Data engineers and ETL developers can now transition from MR to Spark for their Hive workloads seamlessly, thereby benefiting from the advantages of Spark without any disruption on their end.
Join Santosh Kumar, Senior Product Manager at Cloudera, and Rui Li, Apache Hive committer and engineer at Intel, as we discuss:
An Introduction to Spark and its advantages over MR
An introduction of Hive-on-Spark: Goals and Design Principles
Migrating to HoS and a live demo
Configuring and tuning for batch workloads
What’s next for both tools
Discover HDP 2.2: Comprehensive Hadoop Security with Apache Ranger and Apache...Hortonworks
This presentation was included in a 30-minute webinar featuring Balaji Ganesan, Hortonworks senior director for enterprise security strategy, and Vinay Shukla, director of product management.
They discussed Hortonworks Data Platform 2.2’s features for delivering comprehensive security in HDP.
Balaji and Vinay discussed Apache Ranger and Apache Knox and how they are integrated in HDP 2.2 to provide fine-grained authorization, auditing, and API security that can be centrally administered.
Hortonworks tech workshop in-memory processing with sparkHortonworks
Apache Spark offers unique in-memory capabilities and is well suited to a wide variety of data processing workloads including machine learning and micro-batch processing. With HDP 2.2, Apache Spark is a fully supported component of the Hortonworks Data Platform. In this session we will cover the key fundamentals of Apache Spark and operational best practices for executing Spark jobs along with the rest of Big Data workloads. We will also provide a working example to showcase micro-batch and machine learning processing using Apache Spark.
Hortonworks Technical Workshop: What's New in HDP 2.3Hortonworks
The recently launched HDP 2.3 is a major advancement of Open Enterprise Hadoop. It represents the best of community-led development, with innovations spanning Apache Hadoop, Apache Ambari, Ranger, HBase, Spark and Storm. In this session we will provide an in-depth overview of the new functionality and discuss its impact on new and ongoing big data initiatives.
Hortonworks Technical Workshop: HBase For Mission Critical ApplicationsHortonworks
HBase adoption continues to explode amid rapid customer success and unbridled innovation. HBase, with its limitless scalability, high reliability and deep integration with Hadoop ecosystem tools, offers enterprise developers a rich platform on which to build their next-generation applications. In this workshop we will explore HBase SQL capabilities, deep Hadoop ecosystem integrations, and deployment and management best practices.
Apache Ambari is the only 100% open source management and provisioning tool for Apache Hadoop and Hortonworks Data Platform (HDP). Recent innovations of Apache Ambari have focused on opening Apache Ambari into a pluggable management platform that can automate cluster provisioning, deploy 3rd party software and provide custom operational and developers views to the end user. In this session Hortonworks will cover 3 key integration points of Apache Ambari including Stacks, Views and Blueprints and deliver working examples of each.
Effective data governance is imperative to the success of Data Lake initiatives. Without governance policies and processes, information discovery and analysis is severely impaired. In this session we will provide an in-depth look into the Data Governance Initiative launched collaboratively between Hortonworks and partners from across industries. We will cover the objectives of Data Governance Initiatives and demonstrate key governance capabilities of the Hortonworks Data Platform.
Hortonworks technical workshop operations with ambariHortonworks
Ambari continues on its journey of provisioning, monitoring and managing enterprise Hadoop deployments. With 2.0, Apache Ambari brings a host of new capabilities including updated metric collections; Kerberos setup automation and developer views for Big Data developers. In this Hortonworks Technical Workshop session we will provide an in-depth look into Apache Ambari 2.0 and showcase security setup automation using Ambari 2.0. View the recording at https://www.brighttalk.com/webcast/9573/155575. View the github demo work at https://github.com/abajwa-hw/ambari-workshops/blob/master/blueprints-demo-security.md. Recorded May 28, 2015.
The path to a Modern Data Architecture in Financial ServicesHortonworks
Delivering Data-Driven Applications at the Speed of Business: Global Banking AML use case.
Chief Data Officers in financial services have unique challenges: they need to establish an effective data ecosystem under strict governance and regulatory requirements. They need to build the data-driven applications that enable risk and compliance initiatives to run efficiently. In this webinar, we will discuss the case of a global banking leader and the anti-money laundering solution they built on the data lake. With a single platform to aggregate structured and unstructured information essential to determine and document AML case disposition, they reduced mean time for case resolution by 75%. They have a roadmap for building over 150 data-driven applications on the same search-based data discovery platform so they can mitigate risks and seize opportunities, at the speed of business.
Curb your insecurity with HDP - Tips for a Secure Clusterahortonworks
NOTE: Slides contains gifs which may appear as dark pics.
You got your cluster installed and configured. You celebrate, until the party is ruined by your company's Security officer stamping a big "Deny" on your Hadoop cluster. And oops!! You cannot place any data onto the cluster until you can demonstrate it is secure. In this session you will learn the tips and tricks to fully secure your cluster for data at rest, data in motion and all the apps including Spark. Your Security officer can then join your Hadoop revelry (unless you don't authorize him to, with your newly acquired admin rights)
In this talk, we will give an overview of the deep learning space starting with a brief history. We will distinguish between deep learning hype vs practical real-world applications, cover how deep learning differs from other machine learning algorithms, go over sample neural net architectures, and provide a step-by-step guide on how to get started.
Specifically, we will cover what type of training data is required and how to prepare it with Apache Spark, followed by how to choose a correct neural net architecture, train, and deploy a deep learning model with TensorFlow on Apache Hadoop 3.1.
Finally, we will wrap up with deep learning challenges and shortcomings, and offer short- and long-term recommendations to successfully train and deploy deep learning models within your organization to maximize return on investment.
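The step-by-step guide above (prepare data, choose an architecture, train, evaluate) can be illustrated in miniature. This is a hedged, from-scratch sketch in plain Python rather than the TensorFlow-on-Hadoop-3.1 setup the talk demonstrates: the tiny 2-2-1 network and the XOR toy data are stand-ins for a real architecture and dataset.

```python
import math, random

random.seed(7)

# Toy "dataset": XOR, the classic task a single linear model cannot fit.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Architecture: 2 inputs -> 2 hidden sigmoid units -> 1 sigmoid output.
w_h = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b_h = [0.0, 0.0]
w_o = [random.uniform(-1, 1) for _ in range(2)]
b_o = 0.0

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(w_h[j], x)) + b_h[j]) for j in range(2)]
    return h, sigmoid(sum(w * hj for w, hj in zip(w_o, h)) + b_o)

def mse():
    return sum((forward(x)[1] - y) ** 2 for x, y in data) / len(data)

loss_before = mse()
lr = 0.5
for _ in range(5000):  # plain stochastic gradient descent via backpropagation
    for x, y in data:
        h, out = forward(x)
        d_out = (out - y) * out * (1 - out)          # output-layer error signal
        for j in range(2):
            d_h = d_out * w_o[j] * h[j] * (1 - h[j])  # hidden-layer error signal
            w_o[j] -= lr * d_out * h[j]
            for i in range(2):
                w_h[j][i] -= lr * d_h * x[i]
            b_h[j] -= lr * d_h
        b_o -= lr * d_out
loss_after = mse()
```

The same train-then-evaluate loop is what a framework like TensorFlow automates at scale; the manual gradient bookkeeping here is exactly what autodiff removes.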
Introduction: This workshop will provide a hands-on introduction to Machine Learning (ML) with an overview of Deep Learning (DL).
Format: An introductory lecture on several supervised and unsupervised ML techniques, followed by a light introduction to DL and a short discussion of the current state of the art. Several Python code samples using the scikit-learn library will be introduced that users will be able to run in the Cloudera Data Science Workbench (CDSW).
Objective: To provide a quick and short hands-on introduction to ML with python’s scikit-learn library. The environment in CDSW is interactive and the step-by-step guide will walk you through setting up your environment, to exploring datasets, training and evaluating models on popular datasets. By the end of the crash course, attendees will have a high-level understanding of popular ML algorithms and the current state of DL, what problems they can solve, and walk away with basic hands-on experience training and evaluating ML models.
Prerequisites: For the hands-on portion, registrants must bring a laptop with a Chrome or Firefox web browser. These labs will be done in the cloud, no installation needed. Everyone will be able to register and start using CDSW after the introductory lecture concludes (about 1hr in). Basic knowledge of python highly recommended.
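The crash course's train-and-evaluate workflow can be sketched without reproducing the CDSW/scikit-learn lab environment. The example below uses only the standard library: a nearest-centroid classifier, one of the simplest supervised techniques, with the usual train/evaluate split. The toy points and class labels are illustrative assumptions, not lab data.

```python
# Nearest-centroid classification: "training" just averages each class's points.
train = {
    "A": [(1.0, 1.2), (0.8, 1.0), (1.1, 0.9)],
    "B": [(5.0, 5.1), (4.8, 5.3), (5.2, 4.9)],
}
test = [((0.9, 1.1), "A"), ((5.1, 5.0), "B")]

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

centroids = {label: centroid(pts) for label, pts in train.items()}

def predict(x):
    # Assign x to the class whose centroid is closest (squared Euclidean distance).
    return min(centroids, key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(x, centroids[c])))

# Evaluate on held-out points, exactly as the workshop's exercises do with
# scikit-learn's fit/predict/score pattern.
accuracy = sum(predict(x) == y for x, y in test) / len(test)
```

In scikit-learn the same model is `sklearn.neighbors.NearestCentroid`, with the averaging and distance logic hidden behind `fit` and `predict`.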
This workshop will provide a hands-on introduction to basic Machine Learning techniques with Apache Spark ML using the cloud.
Format: A short introductory lecture on a select important supervised and unsupervised Machine Learning techniques followed by a demo, lab exercises and a Q&A session. The lecture will be followed by lab time to work through the lab exercises and ask questions.
Objective: To provide a quick and short hands-on introduction to Machine Learning with Spark ML. In the lab, you will use the following components: Apache Zeppelin (a “Modern Data Science Toolbox”) and Apache Spark. You will learn how to analyze the data, structure the data, train Machine Learning models and apply them to answer real-world questions.
Pre-requisites: Registrants must bring a laptop that can run the Hortonworks Data Cloud.
At this Crash Course everyone will have a cluster assigned to them to try several workloads using Machine Learning, Spark and Zeppelin on the cloud.
Speakers: Robert Hryniewicz
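The lab itself runs Spark ML in Zeppelin on a cloud cluster; as a hedged stand-in, here is the unsupervised side of the syllabus, k-means clustering, in plain Python with fixed initial centroids so the result is deterministic. The points and the choice of k=2 are made up for illustration.

```python
# k-means (Lloyd's algorithm) with k=2 on two well-separated blobs.
points = [(1.0, 1.0), (1.2, 0.8), (0.9, 1.1), (8.0, 8.0), (7.8, 8.2), (8.1, 7.9)]
centroids = [points[0], points[3]]  # deterministic initialization

def nearest(p):
    # Index of the centroid closest to p (squared Euclidean distance).
    return min(range(2), key=lambda i: sum((a - b) ** 2
                                           for a, b in zip(p, centroids[i])))

for _ in range(10):  # alternate: assign points, then recompute cluster means
    clusters = [[], []]
    for p in points:
        clusters[nearest(p)].append(p)
    centroids = [
        tuple(sum(p[d] for p in c) / len(c) for d in range(2)) for c in clusters
    ]
```

Spark ML's `pyspark.ml.clustering.KMeans` performs the same assign/recompute iterations, but distributes the assignment step across the cluster.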
Introduction: This workshop will provide a hands-on introduction to Machine & Deep Learning.
Format: An introductory lecture on several supervised and unsupervised Machine Learning techniques, followed by a light introduction to Deep Learning. Both Apache Spark and TensorFlow will be introduced with relevant code samples that users can run in the cloud and explore.
Objective: To provide a quick and short hands-on introduction to Machine Learning with Spark Machine Learning library (MLlib) and Deep Learning with TensorFlow. In the lab, you will use the following components: Apache Zeppelin and Jupyter notebooks with Apache Spark and TensorFlow processing engines (respectively). You will learn how to analyze and structure data, train Machine Learning models and apply them to answer real-world questions. You will also learn how to select, train, and test Deep Learning models.
Prerequisites: Registrants must bring a laptop with a Chrome or Firefox web browser installed (with proxies disabled, i.e. must show venue IP to access cloud resources). These labs will be done in the cloud. At this Crash Course everyone will be assigned a cluster to try several workloads using Apache Spark and TensorFlow in Zeppelin and Jupyter notebooks (respectively) hosted in the cloud.
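The notebook labs walk through selecting, training, and testing models. As a minimal stand-in (standard library only, not Spark MLlib or TensorFlow), the sketch below fits a least-squares line on a training split and tests it on held-out points; the synthetic data is an assumption for illustration.

```python
# Ordinary least squares for y ~ slope*x + intercept, with a train/test split.
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0, 11.0]  # generated as exactly y = 2x + 1

train_x, train_y = xs[:4], ys[:4]     # first four points train the model
test_x, test_y = xs[4:], ys[4:]       # last two are held out for testing

n = len(train_x)
mean_x = sum(train_x) / n
mean_y = sum(train_y) / n
# Closed-form least-squares estimates.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(train_x, train_y))
         / sum((x - mean_x) ** 2 for x in train_x))
intercept = mean_y - slope * mean_x

# Evaluate on the held-out points with mean squared error.
mse = sum((slope * x + intercept - y) ** 2
          for x, y in zip(test_x, test_y)) / len(test_x)
```

The split-fit-evaluate pattern is the same one the lab applies with MLlib's `LinearRegression` and TensorFlow estimators, just at toy scale.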
Part I: Introduction to Analytics and AITatianaMajor22
Chapter 1: Overview of Business Intelligence, Analytics, Data Science, and Artificial Intelligence: Systems for Decision Support
LEARNING OBJECTIVES
- Understand the need for computerized support of managerial decision making
- Understand the development of systems for providing decision-making support
- Recognize the evolution of such computerized support to the current state of analytics/data science and artificial intelligence
- Describe the business intelligence (BI) methodology and concepts
- Understand the different types of analytics and review selected applications
- Understand the basic concepts of artificial intelligence (AI) and see selected applications
- Understand the analytics ecosystem to identify various key players and career opportunities
The business environment (climate) is constantly changing, and it is becoming more and more complex. Organizations, both private and public, are under pressures that force them to respond quickly to changing conditions and to be innovative in the way they operate. Such activities require organizations to be agile and to make frequent and quick strategic, tactical, and operational decisions, some of which are very complex. Making such decisions may require considerable amounts of relevant data, information, and knowledge. Processing these in the framework of the needed decisions must be done quickly, frequently in real time, and usually requires some computerized support. As technologies are evolving, many decisions are being automated, leading to a major impact on knowledge work and workers in many ways.
This book is about using business analytics and artificial intelligence (AI) as a computerized support portfolio for managerial decision making. It concentrates on the theoretical and conceptual foundations of decision support as well as on the commercial tools and techniques that are available. The book presents the fundamentals of the techniques and the manner in which these systems are constructed and used. We follow an EEE (exposure, experience, and exploration) approach to introducing these topics. The book primarily provides exposure to various analytics/AI techniques and their applications. The idea is that students will be inspired to learn from how various organizations have employed these technologies to make decisions or to gain a competitive edge. We believe that such exposure to what is being accomplished with analytics, and how it can be achieved, is the key component of learning about analytics. In describing the techniques, we also give examples of specific software tools that can be used for developing such applications. However, the book is not limited to any one software tool, so students can experience these techniques using any number of availa ...
It is almost impossible to escape the topic of Data Science. While the core of Data Science has remained the same over the last decade, its emergence to the forefront is spurred by both the availability of new data types and a true realization of the value that it delivers. In this session, we will provide an overview of data science and the different classes of machine learning algorithms, and deliver an end-to-end demonstration of performing Machine Learning using Hadoop. Audience: Developers, Data Scientists, Architects, and System Engineers.
Recording: https://hortonworks.webex.com/hortonworks/lsr.php?RCID=4175a7421d00257f33df146f50c41af8
Developing a curriculum and mobile application to deliver training and information relating to Assistive Technology, by Karel Van Isacker, Jacqui Lewis and Stavroula Papavasiliou
[Hortonworks] Future Of Data: Madrid - HDF & Data in motionRaúl Marín
The first Future Of Data meetup event, where we introduced Hortonworks DataFlow (HDF).
The slides describe what HDF is, and we presented a very simple demo of sentiment analysis of tweets using Apache OpenNLP as the NLP framework.
Overcoming the AI hype — and what enterprises should really focus onDataWorks Summit
Deep learning, for all its hype, is brittle and non-generalizable, and its learnings are not readily transferable from one application to another. Since we are unlikely to see anything close to artificial general intelligence in the next few decades, we should instead focus on how enterprises can capitalize on the state of the art in machine learning, re-implement successful algorithms, and follow the data science lifecycles that generate the highest ROI.
This talk will cover the current state of the art in AI and its limits versus the hype, and discuss concrete steps that enterprises can take to achieve the desired ROI by re-implementing production-ready machine learning algorithms that have been hardened and demonstrated to work very well in specific, constrained domains.
By the end of this talk, attendees should have a better grasp on how to avoid costly and unnecessary investments into yet unproven technologies, be better equipped to navigate the complex space of AI, and understand where to best focus their resources to maximize ROI. ROBERT HRYNIEWICZ, Technical Evangelist, Hortonworks
Academic presentation - Future Skill Force DevelopmentSenthilkumar R
Future Skills Grant Program for Post-Pandemic Recovery Assistance
EON Reality’s grant program provides academic institutions and governmental organizations with support to fund the launch of new XR programs in the post-pandemic world. Based on decades of experience working with governments and intergovernmental agencies, EON Reality helps to identify and locate grants to cover a significant amount of the cost of a new XR Center for qualifying partners.
Similar to Deep learning with Hortonworks and Apache Spark - Hortonworks technical workshop (20)
Hortonworks DataFlow (HDF) 3.3 - Taking Stream Processing to the Next LevelHortonworks
The HDF 3.3 release delivers several exciting enhancements and new features. But, the most noteworthy of them is the addition of support for Kafka 2.0 and Kafka Streams.
https://hortonworks.com/webinar/hortonworks-dataflow-hdf-3-3-taking-stream-processing-next-level/
IoT Predictions for 2019 and Beyond: Data at the Heart of Your IoT StrategyHortonworks
Forrester forecasts* that direct spending on the Internet of Things (IoT) will exceed $400 Billion by 2023. From manufacturing and utilities, to oil & gas and transportation, IoT improves visibility, reduces downtime, and creates opportunities for entirely new business models.
But successful IoT implementations require far more than simply connecting sensors to a network. The data generated by these devices must be collected, aggregated, cleaned, processed, interpreted, understood, and used. Data-driven decisions and actions must be taken, without which an IoT implementation is bound to fail.
https://hortonworks.com/webinar/iot-predictions-2019-beyond-data-heart-iot-strategy/
Getting the Most Out of Your Data in the Cloud with CloudbreakHortonworks
Cloudbreak, a part of Hortonworks Data Platform (HDP), simplifies the provisioning and cluster management within any cloud environment to help your business toward its path to a hybrid cloud architecture.
https://hortonworks.com/webinar/getting-data-cloud-cloudbreak-live-demo/
Johns Hopkins - Using Hadoop to Secure Access Log EventsHortonworks
In this webinar, we talk with experts from Johns Hopkins as they share techniques and lessons learned in real-world Apache Hadoop implementation.
https://hortonworks.com/webinar/johns-hopkins-using-hadoop-securely-access-log-events/
Catch a Hacker in Real-Time: Live Visuals of Bots and Bad GuysHortonworks
Cybersecurity today is a big data problem. There’s a ton of data landing on you faster than you can load it, let alone search it. To make sense of it, we need to act on data-in-motion, using both machine learning and the most advanced pattern recognition system on the planet: your SOC analysts. Advanced visualization makes your analysts more efficient and helps them find the hidden gems (or bombs) in masses of logs and packets.
https://hortonworks.com/webinar/catch-hacker-real-time-live-visuals-bots-bad-guys/
We have introduced several new features as well as delivered some significant updates to keep the platform tightly integrated and compatible with HDP 3.0.
https://hortonworks.com/webinar/hortonworks-dataflow-hdf-3-2-release-raises-bar-operational-efficiency/
Curing Kafka Blindness with Hortonworks Streams Messaging ManagerHortonworks
With the growth of Apache Kafka adoption in all major streaming initiatives across large organizations, the operational and visibility challenges associated with Kafka are on the rise as well. Kafka users want better visibility in understanding what is going on in the clusters as well as within the stream flows across producers, topics, brokers, and consumers.
With no tools in the market that readily address the challenges of the Kafka Ops teams, the development teams, and the security/governance teams, Hortonworks Streams Messaging Manager is a game-changer.
https://hortonworks.com/webinar/curing-kafka-blindness-hortonworks-streams-messaging-manager/
Interpretation Tool for Genomic Sequencing Data in Clinical EnvironmentsHortonworks
The healthcare industry—with its huge volumes of big data—is ripe for the application of analytics and machine learning. In this webinar, Hortonworks and Quanam present a tool that uses machine learning and natural language processing in the clinical classification of genomic variants to help identify mutations and determine clinical significance.
Watch the webinar: https://hortonworks.com/webinar/interpretation-tool-genomic-sequencing-data-clinical-environments/
IBM+Hortonworks = Transformation of the Big Data LandscapeHortonworks
Last year IBM and Hortonworks jointly announced a strategic and deep partnership. Join us as we take a close look at the partnership accomplishments and the conjoined road ahead with industry-leading analytics offers.
View the webinar here: https://hortonworks.com/webinar/ibmhortonworks-transformation-big-data-landscape/
In this exclusive Premier Inside Out, you will hear from Druid committer Slim Bouguerra, Staff Software Engineer, and Product Manager Will Xu. These Hortonworkers will explain the vision of these components, review new features, share some best practices and answer your questions.
View the webinar here: https://hortonworks.com/webinar/hortonworks-premier-apache-druid/
Accelerating Data Science and Real Time Analytics at ScaleHortonworks
Gaining business advantages from big data is moving beyond just the efficient storage and deep analytics on diverse data sources to using AI methods and analytics on streaming data to catch insights and take action at the edge of the network.
https://hortonworks.com/webinar/accelerating-data-science-real-time-analytics-scale/
TIME SERIES: APPLYING ADVANCED ANALYTICS TO INDUSTRIAL PROCESS DATAHortonworks
Thanks to sensors and the Internet of Things, industrial processes now generate a sea of data. But are you plumbing its depths to find the insight it contains, or are you just drowning in it? Now, Hortonworks and Seeq team to bring advanced analytics and machine learning to time-series data from manufacturing and industrial processes.
Blockchain with Machine Learning Powered by Big Data: Trimble Transportation ...Hortonworks
Trimble Transportation Enterprise is a leading provider of enterprise software to over 2,000 transportation and logistics companies. They have designed an architecture that leverages Hortonworks Big Data solutions and Machine Learning models to power up multiple Blockchains, which improves operational efficiency, cuts down costs and enables building strategic partnerships.
https://hortonworks.com/webinar/blockchain-with-machine-learning-powered-by-big-data-trimble-transportation-enterprise/
Delivering Real-Time Streaming Data for Healthcare Customers: ClearsenseHortonworks
For years, the healthcare industry has had problems of data scarcity and latency. Clearsense solved the problem by building an open-source Hortonworks Data Platform (HDP) solution while providing decades worth of clinical expertise. Clearsense is delivering smart, real-time streaming data, to its healthcare customers enabling mission-critical data to feed clinical decisions.
https://hortonworks.com/webinar/delivering-smart-real-time-streaming-data-healthcare-customers-clearsense/
Making Enterprise Big Data Small with EaseHortonworks
Every division in an organization builds its own database to keep track of its business. When the organization grows, those individual databases grow as well. The data in each database becomes siloed, with no visibility into the data in the other databases.
https://hortonworks.com/webinar/making-enterprise-big-data-small-ease/
Driving Digital Transformation Through Global Data ManagementHortonworks
Using your data smarter and faster than your peers could be the difference between dominating your market and merely surviving. Organizations are investing in IoT, big data, and data science to drive better customer experience and create new products, yet these projects often stall in the ideation phase due to a lack of global data management processes and technologies. Your new data architecture may be taking shape around you, but your goal of globally managing, governing, and securing your data across a hybrid, multi-cloud landscape can remain elusive. Learn how industry leaders are developing their global data management strategy to drive innovation and ROI.
Presented at Gartner Data and Analytics Summit
Speaker:
Dinesh Chandrasekhar
Director of Product Marketing, Hortonworks
HDF 3.1 pt. 2: A Technical Deep-Dive on New Streaming FeaturesHortonworks
Hortonworks DataFlow (HDF) is the complete solution that addresses the most complex streaming architectures of today’s enterprises. More than 20 billion IoT devices are active on the planet today and thousands of use cases across IIOT, Healthcare and Manufacturing warrant capturing data-in-motion and delivering actionable intelligence right NOW. “Data decay” happens in a matter of seconds in today’s digital enterprises.
To meet all the needs of such fast-moving businesses, we have made significant enhancements and new streaming features in HDF 3.1.
https://hortonworks.com/webinar/series-hdf-3-1-technical-deep-dive-new-streaming-features/
Hortonworks DataFlow (HDF) 3.1 - Redefining Data-In-Motion with Modern Data A...Hortonworks
Join the Hortonworks product team as they introduce HDF 3.1 and the core components for a modern data architecture to support stream processing and analytics.
You will learn about the three main themes that HDF addresses:
Developer productivity
Operational efficiency
Platform interoperability
https://hortonworks.com/webinar/series-hdf-3-1-redefining-data-motion-modern-data-architectures/
Unlock Value from Big Data with Apache NiFi and Streaming CDCHortonworks
Apache NiFi is an easy-to-use, powerful, and reliable system to process and distribute data. It provides an end-to-end platform that can collect, curate, analyze, and act on data in real time, on-premises or in the cloud, with a drag-and-drop visual interface. It’s being used across industries on large amounts of data that had been stored in isolation, which made collaboration and analysis difficult.
Join industry experts from Hortonworks and Attunity as they explain how Apache NiFi and streaming CDC technology provides a distributed, resilient platform for unlocking the value of data in new ways.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides an introduction to UiPath Communication Mining, its importance, and a platform overview. You will acquire a good understanding of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
The climate impact and sustainability of software testing are discussed in the talk. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint: a positive impact on the climate. Quality characteristics can be extended with sustainability and then measured continuously. Test environments can be used less, at smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Securing your Kubernetes cluster: a step-by-step guide to success! | KatiaHIMEUR1
Today, after several years of existence, with an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been easier to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
GridMate - End to end testing is a critical piece to ensure quality and avoid... | ThomasParaiso2
End-to-end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
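One ingredient of such a pipeline is reproducible fake test data of the kind FakerJS produces. As a dependency-free illustration only (the names, fields, and seed here are illustrative assumptions, not taken from the GridMate codebase), here is a minimal TypeScript sketch that generates deterministic fake contacts, using a seeded PRNG so every CI run sees the same data:

```typescript
type Contact = { firstName: string; lastName: string; email: string };

// Tiny deterministic PRNG (mulberry32): same seed => same sequence,
// which keeps E2E runs reproducible across CI executions.
function mulberry32(seed: number): () => number {
  return () => {
    seed |= 0;
    seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Small illustrative name pools; a real setup would use FakerJS's locales.
const FIRST = ["Ada", "Grace", "Alan", "Edsger"];
const LAST = ["Lovelace", "Hopper", "Turing", "Dijkstra"];

// Build one fake contact from the supplied random source.
function fakeContact(rand: () => number): Contact {
  const firstName = FIRST[Math.floor(rand() * FIRST.length)];
  const lastName = LAST[Math.floor(rand() * LAST.length)];
  return {
    firstName,
    lastName,
    email: `${firstName}.${lastName}@example.com`.toLowerCase(),
  };
}

// Seed the generator once per run, then derive as many records as needed.
const rand = mulberry32(42);
const contacts = Array.from({ length: 3 }, () => fakeContact(rand));
console.log(contacts);
```

In a real pipeline of this shape, records like these would be inserted into the test org (e.g. via JSForce) before the Cypress specs run, and the fixed seed means a failing test can be replayed against identical data.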
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it into advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024 | Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
UiPath Test Automation using UiPath Test Suite series, part 5 | DianaGray10
Welcome to the UiPath Test Automation using UiPath Test Suite series, part 5. In this session, we will cover CI/CD with DevOps.
Topics covered:
CI/CD within UiPath
End-to-end overview of a CI/CD pipeline with Azure DevOps
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
DevOps and Testing slides at DASA Connect | Kari Kakkonen
Slides by me and Rik Marselis at the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. We also held a lovely workshop with the participants, trying to find different ways to think about quality and testing in different parts of the DevOps infinity loop.