Verizon Global Technology Services (GTS) was challenged by a multi-tier, labor-intensive process when trying to migrate data from disparate sources into a data lake to create financial reports and business insights.
View the webinar on-demand here: https://hortonworks.com/webinar/verizon-centralizes-data-into-data-lake/
Powering Big Data Success On-Prem and in the Cloud - Hortonworks
How do you optimize Apache Spark workloads in the cloud? How do you tune your resources for maximum performance and efficiency? Find out how the new Hortonworks Flex Support Subscription enables IT agility and success in the cloud. We will cover:
* Options for running Data Science, Analytics and ETL workloads in the cloud
* Hortonworks support offerings including new Flex Support Subscription
* How to run Cloud workloads more efficiently with SmartSense
* Case study on the impact of SmartSense
https://hortonworks.com/webinar/powering-big-data-success-cloud/
Your Self-Driving Car - How Did it Get So Smart? - Hortonworks
As we all can appreciate, “teaching” a vehicle to drive under the full range of conditions it will encounter (e.g. road conditions, weather conditions, the behavior of other vehicles) is a daunting proposition. If merely the thought of this makes you nervous, you’re not alone – according to the American Automobile Association (AAA), 75 percent of consumers are not yet ready to embrace self-driving cars. However, that is the very challenge facing automakers – teaching vehicles to unfailingly assess and respond to any combination of operational conditions “on-the-fly” through discrete rules (algorithms) governing a vehicle’s behavior.
Join Hortonworks and NorCom at the upcoming webinar as we discuss:
• Evolution of autonomous driving
• Traditional data management approaches and the main challenges associated with them
• How NorCom and Hortonworks can address those challenges and accelerate the pace of autonomous development
Modernize Your Existing EDW with IBM Big SQL & Hortonworks Data Platform - Hortonworks
Find out how Hortonworks and IBM help you address these challenges and optimize your existing EDW environment.
https://hortonworks.com/webinar/modernize-existing-edw-ibm-big-sql-hortonworks-data-platform/
Enabling the Real Time Analytical Enterprise - Hortonworks
Combining IoT, customer experience and real-time enterprise data within Hadoop. What if you could derive real-time insights using ALL of your data? Join us for this webinar and learn how companies are combining “new” real-time data sources (e.g. IoT, social, web logs) with continuously updated enterprise data from SAP and other enterprise transactional systems, providing deep and up-to-the-second analytical insights. This presentation will include a demonstration of how this can be achieved quickly, easily and affordably by utilizing a joint solution from Attunity and Hortonworks.
YARN Ready: Integrating to YARN with Tez - Hortonworks
The YARN Ready webinar series helps developers integrate their applications with YARN. Tez is one vehicle for doing that. We take a deep dive, including a code review, to help you get started.
In 2017, more and more corporations are looking to reduce operational overheads in their enterprise data warehouse (EDW) installations. Hortonworks just launched the industry’s first turnkey EDW optimization solution together with our partners Syncsort and AtScale. Join Hortonworks’ CTO Scott Gnau to learn more about this exciting solution and its three use cases.
Starting Small and Scaling Big with Hadoop (Talend and Hortonworks webinar) ... - Hortonworks
No matter if you are new to Hadoop or have a mature cluster in production, scale will be a critical factor of your success with Hadoop. Are you ready to take the next big step as you scale out your data architecture?
Talend and Hortonworks discuss how to implement an effective big data and Hadoop strategy across your IT infrastructure. You will learn:
How to grow a pilot into production
How to scale-out architecture & systems affordably
How to leverage the flexibility of Hadoop to optimize your data integration processes
Recording: http://www.talend.com/resources/webinars/starting-small-and-scaling-big-with-hadoop
Hortonworks Data In Motion Webinar Series Pt. 2 - Hortonworks
How Hortonworks DataFlow (HDF), powered by Apache NiFi, MiNiFi, Kafka and Storm, and its associated HDF Certification Program make it easier and faster to integrate different systems. Highlights of the latest partner integrations from HPE, SAS, Attunity, Impetus Technologies, Kepware and Midfin Systems.
Watch the webinar on-demand: http://hortonworks.com/webinar/make-big-data-ecosystem-work-better/
HDF Partner certification program: http://hortonworks.com/partners/product-integration-certification/#hdf-integration
A Comprehensive Approach to Building your Big Data - with Cisco, Hortonworks ... - Hortonworks
Companies in every industry look for ways to explore new data types and large data sets that were previously too big to capture, store and process. They need to unlock insights from data such as clickstream, geo-location, sensor, server log, social, text and video data. However, becoming a data-first enterprise comes with many challenges.
Join this webinar organized by three leaders in their respective fields and learn from our experts how you can accelerate the implementation of a scalable, cost-efficient and robust Big Data solution. Cisco, Hortonworks and Red Hat will explore how new data sets can enrich existing analytic applications with new perspectives and insights and how they can help you drive the creation of innovative new apps that provide new value to your business.
Hortonworks Protegrity Webinar: Leverage Security in Hadoop Without Sacrifici... - Hortonworks
As more data is imported into Hadoop Data Lakes, how can we best secure sensitive data? Recording is at: https://www.brighttalk.com/webcast/9573/171957
What security options are available and what kind of best practices should be implemented? Join our two speakers as they discuss securing HDP data lakes to leverage security in Hadoop without sacrificing usability. Presenters: Vincent Lam, Protegrity - Syed Mahmood, Hortonworks.
You’ll learn about:
· The 5 Pillars of Security for Hadoop
· Open Source HDP Security
· How Hortonworks leverages Protegrity to jointly offer the most robust Hadoop protection available
· The benefits and differences of data protection including tokenization, encryption, and masking
· Leveraging consistent security across Hadoop and beyond for protection of data across its lifecycle
The world’s largest enterprises run their infrastructure on Oracle, DB2 and SQL Server and their critical business operations on SAP applications. Organisations need this data to be available in real time to conduct necessary analytics. However, delivering this heterogeneous data at the speed it’s required can be a huge challenge because of the complex underlying data models and structures and legacy manual processes that are prone to errors and delays.
Unlock these silos of data and enable the new advanced analytics platforms by attending this session.
Find out:
• How to overcome common challenges faced by enterprises trying to access their SAP data
• How you can integrate SAP data in real time with change data capture (CDC) technology
• How organisations are using Attunity Replicate for SAP to stream SAP data into Kafka (a consumer-side sketch follows below)
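To make the Kafka end of such a pipeline concrete, here is a minimal consumer sketch using the kafka-python client; the topic name, broker address, and JSON record layout are assumptions for illustration only, not Attunity Replicate's actual output format.

```python
# Minimal sketch: consuming CDC-style change records from a Kafka topic.
# Topic, broker, and record layout are hypothetical.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "sap.orders.cdc",                      # hypothetical CDC topic
    bootstrap_servers="broker:9092",       # hypothetical broker address
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

for record in consumer:
    change = record.value
    # A CDC record typically carries an operation type plus the row image.
    op = change.get("operation")           # e.g. "INSERT", "UPDATE", "DELETE"
    row = change.get("data", {})
    print(f"{op}: {row}")
```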
Powering Fast Data and the Hadoop Ecosystem with VoltDB and Hortonworks - Hortonworks
Developers increasingly are building dynamic, interactive real-time applications on fast streaming data to extract maximum value from data in the moment. To do so requires a data pipeline, the ability to make transactional decisions against state, and an export functionality that pushes data at high speeds to long-term Hadoop analytics stores like Hortonworks Data Platform (HDP). This enables data to arrive in your analytic store sooner, and allows these analytics to be leveraged with radically lower latency.
But successfully writing fast data applications that manage, process, and export streams of data generated from mobile, smart devices, sensors and social interactions is a big challenge.
Join Hortonworks and VoltDB, an in-memory scale-out relational database that simplifies fast data application development, to learn how you can ingest large volumes of fast-moving, streaming data and process it in real time. We will also cover how developing fast data applications is simplified, faster - and delivers more value when built on a fast in-memory, scale-out SQL database.
Discover HDP 2.2: Even Faster SQL Queries with Apache Hive and Stinger.next - Hortonworks
Earlier this year, the Apache open source community delivered the Stinger Initiative to improve speed, scale and SQL semantics in Apache Hive. Now Stinger.next is underway, to build on those initial successes.
In this presentation, from a webinar hosted by Hortonworks co-founder Alan Gates and Hortonworks Hive product manager Raj Baines, you can learn more about Stinger.next and innovation in Apache Hive.
Alan and Raj cover new Hive functionality for more speed, scale and SQL in HDP 2.2. Specific topics include transactions with ACID semantics, the cost-based optimizer and dynamic query optimizations.
The presentation also shows future plans for the Stinger.next initiative.
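As a small, hedged illustration of the ACID functionality mentioned above, the sketch below creates a transactional Hive table and issues a row-level UPDATE through the PyHive client; the host, database, and table names are hypothetical, and the cluster must have Hive transactions enabled for this to work.

```python
# Minimal sketch: a transactional (ACID) Hive table and a row-level UPDATE.
# Connection details and table names are hypothetical.
from pyhive import hive

conn = hive.Connection(host="hive-server.example.com", port=10000, database="demo")
cur = conn.cursor()

# ACID tables in Hive are stored as ORC and flagged as transactional.
cur.execute("""
    CREATE TABLE IF NOT EXISTS orders (
        id INT,
        status STRING
    )
    CLUSTERED BY (id) INTO 4 BUCKETS
    STORED AS ORC
    TBLPROPERTIES ('transactional'='true')
""")

# With ACID enabled, row-level UPDATE and DELETE statements become available.
cur.execute("UPDATE orders SET status = 'shipped' WHERE id = 42")
```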
Delivering a Flexible IT Infrastructure for Analytics on IBM Power Systems - Hortonworks
Customers are preparing themselves to analyze and manage an increasing quantity of structured and unstructured data. Business leaders introduce new analytical workloads faster than IT departments can handle. Legacy IT infrastructure needs to evolve to deliver operational improvements and cost containment, while increasing flexibility to meet future requirements. By providing HDP on IBM Power Systems, Hortonworks and IBM are giving customers more choice in selecting the architectural platform that is right for them. In this webinar, we’ll discuss some of the challenges with deploying big data platforms, and how choosing solutions built with HDP on IBM Power Systems can offer tangible benefits and flexibility to accommodate changing needs.
Rescue your Big Data from Downtime with HP Operations Bridge and Apache Hadoop - Hortonworks
How can you simplify the management and monitoring of your Hadoop environment and ensure IT can focus on the right business priorities supported by Hadoop? Take a look at this presentation to find out.
The newly enacted GDPR regulation, which becomes effective in 2018, requires comprehensive protection of the personal information of EU subjects. In this paper, we outline a solution that discovers and classifies personal data subject to GDPR in the Hadoop ecosystem and uses that precise classification to automatically create a robust set of authorization policies. The solution uses Dataguise’s DgSecure sensitive data detection to automatically classify sensitive data assets in Apache Atlas and to author comprehensive and robust authorization policies via Apache Ranger. DgSecure detects sensitive data in Hive databases and continuously updates the classification in Apache Atlas via tags. Apache Atlas tags are then used to create Apache Ranger policies that protect access to sensitive HDFS files, Hive tables, and Hive columns. We demonstrate a workflow in which the components of the solution are automated, requiring little or no manual intervention to protect such sensitive data in Hadoop clusters.
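As a rough sketch of how the tagging step of such a workflow might be automated, the snippet below attaches a classification to an Atlas entity over Atlas's REST API so that a tag-based Ranger policy can then govern access; the host, credentials, entity GUID, and tag name are assumptions for illustration.

```python
# Minimal sketch: attach a "PII" classification to an Apache Atlas entity so a
# tag-based Apache Ranger policy can restrict access to it.
# Host, credentials, GUID, and tag name are hypothetical.
import requests

ATLAS = "http://atlas.example.com:21000"
AUTH = ("admin", "admin")  # placeholder credentials

entity_guid = "f3c1e8d2-0000-0000-0000-000000000000"  # hypothetical Hive column GUID

resp = requests.post(
    f"{ATLAS}/api/atlas/v2/entity/guid/{entity_guid}/classifications",
    json=[{"typeName": "PII"}],  # the tag that Ranger tag-based policies key on
    auth=AUTH,
)
resp.raise_for_status()
print("Entity tagged; a Ranger tag-based policy on 'PII' now governs access to it.")
```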
Optimizing your Modern Data Architecture - with Attunity, RCG Global Services... - Hortonworks
Today’s enterprises are challenged with capturing large amounts of data from a number of sources in a variety of formats, and then storing it in a cost-effective, timely manner. With your current data warehouse, this may seem overwhelming. It doesn’t have to be. With a Hadoop-based modern data warehouse, you can overcome these challenges and get meaningful insights from real-time data.
Want to learn how? Join experts from Attunity, Hortonworks, and RCG Global Services for a live webinar - where we will be discussing enterprise data warehouse optimization. You will learn how to:
• Rebalance your data warehouse by identifying unused data and resource-intensive workloads that can be moved to Hadoop.
• Seamlessly integrate your current enterprise data warehouse with a Modern Data Architecture.
• Better utilize data assets to reduce costs while realizing more value from your data.
• Develop a roadmap for implementing the Hadoop-based Modern Data Architecture and Data Lake.
Webinar - Accelerating Hadoop Success with Rapid Data Integration for the Mod... - Hortonworks
Many enterprises are turning to Apache Hadoop to enable Big Data Analytics and reduce the costs of traditional data warehousing. Yet, it is hard to succeed when 80% of the time is spent on moving data and only 20% on using it. It’s time to swap the 80/20! The Big Data experts at Attunity and Hortonworks have a solution for accelerating data movement into and out of Hadoop that enables faster time-to-value for Big Data projects and a more complete and trusted view of your business. Join us to learn how this solution can work for you.
In 2015/16 Worldpay deployed its Enterprise Data Platform - a highly secure cluster used for the analysis of over 65 billion card transactions and the subject of last year's Hadoop Summit keynote in Dublin. A year on, we are now rapidly expanding our platform with true multi-tenancy. For our first tenant we have built and deployed the analytics and reporting for our central platforms. Our second tenant is to deploy 'decision engines' into our core business systems. These allow Worldpay to make decisions, derived from machine learning, on how we authorise and route payments traffic and how these affect the consumer, merchant and other business partners. We are also developing further tenants for systems management and security. This talk will look at what it means to truly have a single enterprise data lake with multiple tenants that share that data, and look forward to how we will extend the platform in 2017 with Hadoop 3.
Discover HDP 2.1: Apache Solr for Hadoop Search - Hortonworks
Apache Solr is the open source platform for searching data stored in Hadoop. Solr powers search on many of the world's largest Internet sites, enabling powerful full-text search and near real-time indexing. Whether users search for tabular, text, geo-location or sensor data in Hadoop, they find it quickly with Apache Solr. Hortonworks Data Platform 2.1 includes Apache Solr.
In this deck from their 30-minute webinar, Rohit Bakhshi, Hortonworks product manager, and Paul Codding, Hortonworks solution engineer, describe how Solr works within HDP's YARN-based architecture.
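As a small illustration of querying a Solr collection from code, the sketch below uses the pysolr client; the Solr URL, collection name, query, and field names are assumptions for illustration.

```python
# Minimal sketch: full-text search against a Solr collection indexing Hadoop data.
# URL, collection, and field names are hypothetical.
import pysolr

solr = pysolr.Solr("http://solr.example.com:8983/solr/hadoop_logs", timeout=10)

# Search log documents for a term, requesting only a few fields back.
results = solr.search("message:timeout", rows=5, fl="id,host,message")

for doc in results:
    print(doc.get("host"), "->", doc.get("message"))
```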
Hortonworks Data Platform 2.2 includes Apache HBase for fast NoSQL data access. In this 30-minute webinar, we discussed HBase innovations that are included in HDP 2.2, including: support for Apache Slider; Apache HBase high availability (HA); block cache compression; and wire-level encryption.
Hortonworks Data in Motion Webinar Series Part 7 Apache Kafka Nifi Better Tog... - Hortonworks
Apache NiFi, Storm and Kafka augment each other in modern enterprise architectures. NiFi provides a code-free solution to get many different formats and protocols in and out of Kafka and complements Kafka with full audit trails and interactive command and control. Storm complements NiFi with the capability to handle complex event processing.
Join us to learn how Apache NiFi, Storm and Kafka can augment each other for creating a new dataplane connecting multiple systems within your enterprise with ease, speed and increased productivity.
https://www.brighttalk.com/webcast/9573/224063
Streamline Apache Hadoop Operations with Apache Ambari and SmartSense - Hortonworks
Apache Ambari 2.5 helps customers simplify the experience of provisioning, managing, monitoring, securing and troubleshooting Hadoop deployments. Find out how the combination of Ambari and SmartSense delivers a path to success to help IT get Hadoop up and running effectively. The end result: you get the full business impact and benefits of Big Data for your organization.
https://hortonworks.com/webinar/streamline-apache-hadoop-operations-apache-ambari-smartsense/
MiNiFi is a recently started sub-project of Apache NiFi that provides a complementary data collection approach, supplementing NiFi's core tenets of dataflow management by focusing on collecting data at the source of its creation. Simply put, MiNiFi agents take the guiding principles of NiFi and push them to the edge in a purpose-built design-and-deploy manner. This talk will focus on MiNiFi's features, go over recent developments and prospective plans, and give a live demo of MiNiFi.
The config.yml is available here: https://gist.github.com/JPercivall/f337b8abdc9019cab5ff06cb7f6ff09a
Analytics, Business Intelligence, and Data Science - What's the Progression? - DATAVERSITY
Data analysis can include looking back at historical data, understanding what an organization currently has, and even looking forward to predictions of the future. This presentation will talk about the differences between analytics, business intelligence, and data science, as well as the differences in architecture — and possibly even organization maturity — that make each successful.
Learn more about the topics we will explore, including:
Defining analytics, business intelligence, and data science
Differences in architecture
When to use analytics, business intelligence, or data science
Whether there has been an evolution between analytics, business intelligence, and data science
With a background in branding, communication and product design, Flyte creates products that captivate the imagination, combining classic design with innovative technology.
Revolution Analytics - Presentation at Hortonworks Booth - Strata 2014 - Hortonworks
Join Revolution Analytics and Hortonworks during this interactive presentation to discuss how customers are using Hadoop and R in the real world. We’ll show an end-to-end customer churn analytics demonstration (leveraging Revolution Analytics, Hortonworks and Tableau) serving three user personas: a website visitor, a data scientist and a business analyst.
Enterprise Data Science at Scale Meetup - IBM and Hortonworks - Oct 2017 - Hortonworks
View the recording of the meet up, including the live demos, here: https://www.youtube.com/watch?v=uaJWB3K8lkg
Data science holds tremendous potential for organizations to uncover new insights and drivers of revenue and profitability. Big Data has brought the promise of doing data science at scale to enterprises; however, this promise also comes with challenges for data scientists to continuously learn and collaborate. Data scientists have many tools at their disposal, such as notebooks like Jupyter and Apache Zeppelin, IDEs such as RStudio, languages like R, Python and Scala, and frameworks like Apache Spark. Given all the choices, how do you best collaborate to build your model and then work through the development lifecycle to deploy it from test into production?
Why Data Science on Big Data?
In this meetup we will cover the attributes of a modern data science platform that empowers data scientists to build models using all the data in their data lake and fosters continuous learning and collaboration. We will show a demo of Apache Zeppelin, Apache Spark, Apache Livy and Apache Hadoop with a focus on integration, security, and model deployment and management.
Data Science at Scale DEMO
The demo will cover the data science lifecycle: develop the model in a team environment, train the model with all the data on a Hadoop cluster, and deploy the model into production. The model will be a Spark ML model.
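As a rough sketch of that lifecycle, the snippet below trains a Spark ML pipeline and persists it so a production job can reload it for scoring; the input path, column names, and model path are assumptions for illustration.

```python
# Minimal sketch: train a Spark ML pipeline and persist it for later deployment.
# Input path, column names, and the model output path are hypothetical.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline, PipelineModel
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("churn-demo").getOrCreate()

# Training data already sitting in the data lake (hypothetical path and schema).
df = spark.read.parquet("hdfs:///data/churn/training")

assembler = VectorAssembler(inputCols=["calls", "minutes", "complaints"],
                            outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="churned")
model = Pipeline(stages=[assembler, lr]).fit(df)

# Persist the fitted pipeline; a serving job can reload it for scoring.
model.write().overwrite().save("hdfs:///models/churn_lr")
scorer = PipelineModel.load("hdfs:///models/churn_lr")
```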
Practical ML with Apache Spark
To deliver machine learning solutions, data scientists not only need to fit models but also perform familiar tasks such as data collection and wrangling, labelling, feature extraction and transformation, and model tuning and evaluation. Apache Spark provides a unified solution for all of this under the same framework.
For example, one can use Spark SQL to generate training data from different sources and then pass it directly to MLlib for feature engineering and model tuning, instead of using Hive/Pig for the first half and then downloading the data to a single machine to train models in R. The latter is actually very common in practice but painful to maintain. Spark MLlib makes life easier for data scientists and machine learning engineers so that they can focus on building better ML models and applications.
We will discuss the underlying principles required to develop practical machine learning and data science pipelines and show some hands-on experience using Apache Spark to solve typical machine learning and data science problems. We will also have a short discussion about how Spark MLlib faces challenges from other machine learning libraries such as TensorFlow and XGBoost.
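To make the Spark SQL-to-MLlib handoff described above concrete, here is a small sketch, with hypothetical table and column names, that builds a training set with Spark SQL and tunes a model with MLlib's cross-validation utilities in the same job.

```python
# Minimal sketch: generate training data with Spark SQL, then tune a model with
# MLlib in the same framework. Table and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

spark = SparkSession.builder.appName("sql-to-mllib").enableHiveSupport().getOrCreate()

# Feature generation with Spark SQL instead of a separate Hive/Pig job.
train = spark.sql("""
    SELECT customer_id,
           COUNT(*)        AS orders,
           SUM(amount)     AS revenue,
           MAX(is_churned) AS label
    FROM   sales.transactions
    GROUP  BY customer_id
""")

lr = LogisticRegression(featuresCol="features", labelCol="label")
pipeline = Pipeline(stages=[
    VectorAssembler(inputCols=["orders", "revenue"], outputCol="features"),
    lr,
])

# Hyper-parameter tuning stays in the same framework as the SQL step.
grid = ParamGridBuilder().addGrid(lr.regParam, [0.01, 0.1]).build()
cv = CrossValidator(estimator=pipeline,
                    estimatorParamMaps=grid,
                    evaluator=BinaryClassificationEvaluator(labelCol="label"),
                    numFolds=3)
best_model = cv.fit(train).bestModel
```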
Benefits of Transferring Real-Time Data to Hadoop at Scale - Hortonworks
Today’s Big Data teams demand solutions designed for Big Data that are optimized, secure, and adaptable to changing workload requirements. Working together, Hortonworks, IBM, and Attunity have designed an integrated solution that transfers large volumes of data to a platform that can handle rapid ingest, processing and analysis of data of all types from all sources, at scale.
https://hortonworks.com/webinar/benefits-transferring-real-time-data-hadoop-scale-ibm-hortonworks-attunity/
The 2017 Accenture Technology Vision report showcases the top five disruptive IT trends and innovations shaping the business landscape in 2017 and beyond. Take action today and shape technology to fit your needs.
Learn more at www.accenture.com/technologyvision
Verizon Centralizes Data into a Data Lake in Real Time for Analytics - DataWorks Summit
Verizon – Global Technology Services (GTS) was challenged by a multi-tier, labor-intensive process when trying to migrate data from disparate sources into a data lake to create financial reports and business insights. Join this session to learn more about how Verizon:
• Easily accessed data from multiple sources including SAP data
• Ingested data into major targets including Hadoop
• Achieved real-time insights from data leveraging change data capture (CDC) technology
• Reduced costs and labor
View the recording:
http://hortonworks.com/webinar/accelerating-real-time-data-ingest-hadoop/
Hadoop didn’t disrupt the data center. The exploding amounts of data did. But, let’s face it, if you can’t move your data to Hadoop, then you can’t use it in Hadoop. The experts from Hortonworks, the #1 leader in Hadoop development, and Attunity, a leading data management software provider, cover:
- How to ingest your most valuable data into Hadoop using Attunity Replicate
- About how customers are using Hortonworks DataFlow (HDF) powered by Apache NiFi
- How to combine the real-time change data capture (CDC) technology with connected data platforms from Hortonworks
We discuss how Attunity Replicate and Hortonworks Data Flow (HDF) work together to move data into Hadoop.
Learn how, when organizations combine the HP Vertica Analytics Platform with Hortonworks, they can quickly explore and analyze a broad variety of data types and transform them into actionable information that allows them to better understand how their customers and site visitors interact with their business, offline and online.
Predicting Customer Experience through Hadoop and Customer Behavior Graphs - Hortonworks
Enhancing the customer experience has become essential for communication service providers to effectively manage customer churn and build strong, long-lasting relationships with their customers. This has become increasingly challenging as customer interactions occur across multiple channels. Understanding customer behavior and how it applies across channels is the key to ensuring the best level of experience is achieved by each customer.
In this webinar, Hortonworks and Apigee discuss how service providers can capture and visualize customer behavior across customer interaction points like call center events (IVR and chat) and combine it with network data to predict customer calls and patterns of digital channel abandonment using Hadoop and predictive analysis and visualization tools.
We will identify ways to develop a 360 degree view across a customer’s household through an HDP Data Lake and visualize customer interaction patterns and predict expected behavior using Apigee Insights to identify and initiate the Next-Best-Action for a customer to ensure a superior level of customer experience.
SendGrid Improves Email Delivery with Hybrid Data Warehousing - Amazon Web Services
When you received your Uber ‘Tuesday Evening Ride Receipt’ or Spotify’s ‘This Week’s New Music’ email, did you think about how they got there?
SendGrid’s reliable email platform delivers over 20 billion transactional and marketing emails each month on behalf of many of your favorite brands, including Uber, Airbnb, Spotify, Foursquare and NextDoor.
SendGrid was looking to evolve its data warehouse architecture in order to improve decision making and optimize customer experience. They needed a scalable and reliable architecture that would allow them to move nimbly and efficiently with a relatively small IT organization, while supporting the needs of both business and technical users at SendGrid.
SendGrid’s Director of Enterprise Data Operations will be joining architects from Amazon Web Services (AWS) and Informatica to discuss SendGrid’s journey to a hybrid cloud architecture and how a hybrid data warehousing solution is optimized to support SendGrid’s analytics initiative. Speakers will also review common technologies and use cases being deployed in hybrid cloud today, common data management challenges in hybrid cloud and best practices for addressing these challenges.
Join us to learn:
• How to evolve to a hybrid data warehouse with Amazon Redshift for scalability, agility and cost efficiency with minimal IT resources
• Hybrid cloud data management use cases
• Best practices for addressing hybrid cloud data management challenges
Hortonworks and Red Hat Webinar_Sept.3rd_Part 1 - Hortonworks
As the enterprise's big data program matures and Apache Hadoop becomes more deeply embedded in critical operations, the ability to support and operate it efficiently and reliably becomes increasingly important. To aid enterprises in operating modern data architectures at scale, Red Hat and Hortonworks have collaborated to integrate the Hortonworks Data Platform with Red Hat's proven platform technologies. Join us in this interactive 3-part webinar series, as we demonstrate how Red Hat JBoss Data Virtualization can integrate with Hadoop through Hive and provide users easy access to data.
Eliminating the Challenges of Big Data Management Inside Hadoop - Hortonworks
Your Big Data strategy is only as good as the quality of your data. Today, deriving business value from data depends on how well your company can capture, cleanse, integrate and manage data. During this webinar, we discussed how to eliminate the challenges to Big Data management inside Hadoop.
Go over these slides to learn:
· How to use the scalability and flexibility of Hadoop to drive faster access to usable information across the enterprise.
· Why a pure-YARN implementation for data integration, quality and management delivers competitive advantage.
· How to use the flexibility of RedPoint and Hortonworks to create an enterprise data lake where data is captured, cleansed, linked and structured in a consistent way.
Leverage Big Data to Enhance Customer Experience in Telecommunications – with... - Hortonworks
Take a look at this presentation from Hortonworks and Skytree and learn how communications service providers can enhance their customers' experience by:
– Creating a Data Lake for a 360 degree customer view.
– Building dynamic customer profiles.
– Leveraging a next-best-action streaming engine.
You will learn more about how the Hortonworks Data Platform (HDP) and Skytree's machine learning solution can help you do so.
Speakers: Dr. Alexander Gray, CTO at Skytree, and Sanjay Kumar, General Manager, Hortonworks
Apache Hadoop and its role in Big Data architecture - Himanshu Bari - jaxconf
In today’s world of exponentially growing big data, enterprises are becoming increasingly more aware of the business utility and necessity of harnessing, storing and analyzing this information. Apache Hadoop has rapidly evolved to become a leading platform for managing and processing big data, with the vital management, monitoring, metadata and integration services required by organizations to glean maximum business value and intelligence from their burgeoning amounts of information on customers, web trends, products and competitive markets. In this session, Hortonworks' Himanshu Bari will discuss the opportunities for deriving business value from big data by looking at how organizations utilize Hadoop to store, transform and refine large volumes of this multi-structured information. He will also discuss the evolution of Apache Hadoop and where it is headed, the component requirements of a Hadoop-powered platform, as well as solution architectures that allow for Hadoop integration with existing data discovery and data warehouse platforms. In addition, he will look at real-world use cases where Hadoop has helped to produce more business value, augment productivity or identify new and potentially lucrative opportunities.
Accelerate Self-Service Analytics with Data Virtualization and Visualization - Denodo
Watch full webinar here: https://bit.ly/39AhUB7
Enterprise organizations are shifting to self-service analytics as business users need real-time access to holistic and consistent views of data regardless of its location, source or type for arriving at critical decisions.
Data Virtualization and Data Visualization work together through a universal semantic layer. Learn how they enable self-service data discovery and improve performance of your reports and dashboards.
In this session, you will learn:
- Challenges faced by business users
- How data virtualization enables self-service analytics
- Use case and lessons from customer success
- Overview of the highlight features in Tableau
Hortonworks DataFlow (HDF) 3.3 - Taking Stream Processing to the Next Level - Hortonworks
The HDF 3.3 release delivers several exciting enhancements and new features. But, the most noteworthy of them is the addition of support for Kafka 2.0 and Kafka Streams.
https://hortonworks.com/webinar/hortonworks-dataflow-hdf-3-3-taking-stream-processing-next-level/
IoT Predictions for 2019 and Beyond: Data at the Heart of Your IoT Strategy - Hortonworks
Forrester forecasts* that direct spending on the Internet of Things (IoT) will exceed $400 billion by 2023. From manufacturing and utilities to oil & gas and transportation, IoT improves visibility, reduces downtime, and creates opportunities for entirely new business models.
But successful IoT implementations require far more than simply connecting sensors to a network. The data generated by these devices must be collected, aggregated, cleaned, processed, interpreted, understood, and used. Data-driven decisions and actions must be taken, without which an IoT implementation is bound to fail.
https://hortonworks.com/webinar/iot-predictions-2019-beyond-data-heart-iot-strategy/
Getting the Most Out of Your Data in the Cloud with Cloudbreak - Hortonworks
Cloudbreak, a part of Hortonworks Data Platform (HDP), simplifies provisioning and cluster management within any cloud environment, helping your business on its path to a hybrid cloud architecture.
https://hortonworks.com/webinar/getting-data-cloud-cloudbreak-live-demo/
Johns Hopkins - Using Hadoop to Secure Access Log Events - Hortonworks
In this webinar, we talk with experts from Johns Hopkins as they share techniques and lessons learned from a real-world Apache Hadoop implementation.
https://hortonworks.com/webinar/johns-hopkins-using-hadoop-securely-access-log-events/
Catch a Hacker in Real-Time: Live Visuals of Bots and Bad Guys - Hortonworks
Cybersecurity today is a big data problem. There’s a ton of data landing on you faster than you can load it, let alone search it. In order to make sense of it, we need to act on data-in-motion, using both machine learning and the most advanced pattern recognition system on the planet: your SOC analysts. Advanced visualization makes your analysts more efficient and helps them find the hidden gems, or bombs, in masses of logs and packets.
https://hortonworks.com/webinar/catch-hacker-real-time-live-visuals-bots-bad-guys/
We have introduced several new features as well as delivered some significant updates to keep the platform tightly integrated and compatible with HDP 3.0.
https://hortonworks.com/webinar/hortonworks-dataflow-hdf-3-2-release-raises-bar-operational-efficiency/
Curing Kafka Blindness with Hortonworks Streams Messaging Manager - Hortonworks
With the growth of Apache Kafka adoption in all major streaming initiatives across large organizations, the operational and visibility challenges associated with Kafka are on the rise as well. Kafka users want better visibility in understanding what is going on in the clusters as well as within the stream flows across producers, topics, brokers, and consumers.
With no tools in the market that readily address the challenges of the Kafka Ops teams, the development teams, and the security/governance teams, Hortonworks Streams Messaging Manager is a game-changer.
https://hortonworks.com/webinar/curing-kafka-blindness-hortonworks-streams-messaging-manager/
Interpretation Tool for Genomic Sequencing Data in Clinical Environments - Hortonworks
The healthcare industry—with its huge volumes of big data—is ripe for the application of analytics and machine learning. In this webinar, Hortonworks and Quanam present a tool that uses machine learning and natural language processing in the clinical classification of genomic variants to help identify mutations and determine clinical significance.
Watch the webinar: https://hortonworks.com/webinar/interpretation-tool-genomic-sequencing-data-clinical-environments/
IBM+Hortonworks = Transformation of the Big Data Landscape - Hortonworks
Last year IBM and Hortonworks jointly announced a strategic and deep partnership. Join us as we take a close look at the partnership's accomplishments and the joint road ahead with industry-leading analytics offerings.
View the webinar here: https://hortonworks.com/webinar/ibmhortonworks-transformation-big-data-landscape/
In this exclusive Premier Inside Out, you will hear from Slim Bouguerra, Druid committer and Staff Software Engineer, and Will Xu, Product Manager. These Hortonworkers will explain the vision of these components, review new features, share some best practices and answer your questions.
View the webinar here: https://hortonworks.com/webinar/hortonworks-premier-apache-druid/
Accelerating Data Science and Real Time Analytics at Scale - Hortonworks
Gaining business advantages from big data is moving beyond just the efficient storage and deep analytics on diverse data sources to using AI methods and analytics on streaming data to catch insights and take action at the edge of the network.
https://hortonworks.com/webinar/accelerating-data-science-real-time-analytics-scale/
Time Series: Applying Advanced Analytics to Industrial Process Data - Hortonworks
Thanks to sensors and the Internet of Things, industrial processes now generate a sea of data. But are you plumbing its depths to find the insight it contains, or are you just drowning in it? Now, Hortonworks and Seeq team up to bring advanced analytics and machine learning to time-series data from manufacturing and industrial processes.
Blockchain with Machine Learning Powered by Big Data: Trimble Transportation ... - Hortonworks
Trimble Transportation Enterprise is a leading provider of enterprise software to over 2,000 transportation and logistics companies. They have designed an architecture that leverages Hortonworks Big Data solutions and Machine Learning models to power up multiple Blockchains, which improves operational efficiency, cuts down costs and enables building strategic partnerships.
https://hortonworks.com/webinar/blockchain-with-machine-learning-powered-by-big-data-trimble-transportation-enterprise/
Delivering Real-Time Streaming Data for Healthcare Customers: Clearsense - Hortonworks
For years, the healthcare industry has had problems of data scarcity and latency. Clearsense solved the problem by building an open-source Hortonworks Data Platform (HDP) solution while providing decades' worth of clinical expertise. Clearsense is delivering smart, real-time streaming data to its healthcare customers, enabling mission-critical data to feed clinical decisions.
https://hortonworks.com/webinar/delivering-smart-real-time-streaming-data-healthcare-customers-clearsense/
Making Enterprise Big Data Small with Ease - Hortonworks
Every division in an organization builds its own database to keep track of its business. When the organization becomes big, those individual databases grow as well. The data in each database may become siloed, with no visibility into the data in the other databases.
https://hortonworks.com/webinar/making-enterprise-big-data-small-ease/
Driving Digital Transformation Through Global Data Management - Hortonworks
Using your data smarter and faster than your peers could be the difference between dominating your market and merely surviving. Organizations are investing in IoT, big data, and data science to drive better customer experience and create new products, yet these projects often stall in the ideation phase due to a lack of global data management processes and technologies. Your new data architecture may be taking shape around you, but your goal of globally managing, governing, and securing your data across a hybrid, multi-cloud landscape can remain elusive. Learn how industry leaders are developing their global data management strategy to drive innovation and ROI.
Presented at Gartner Data and Analytics Summit
Speaker:
Dinesh Chandrasekhar
Director of Product Marketing, Hortonworks
HDF 3.1 pt. 2: A Technical Deep-Dive on New Streaming Features - Hortonworks
Hortonworks DataFlow (HDF) is the complete solution that addresses the most complex streaming architectures of today’s enterprises. More than 20 billion IoT devices are active on the planet today and thousands of use cases across IIOT, Healthcare and Manufacturing warrant capturing data-in-motion and delivering actionable intelligence right NOW. “Data decay” happens in a matter of seconds in today’s digital enterprises.
To meet all the needs of such fast-moving businesses, we have made significant enhancements and new streaming features in HDF 3.1.
https://hortonworks.com/webinar/series-hdf-3-1-technical-deep-dive-new-streaming-features/
Hortonworks DataFlow (HDF) 3.1 - Redefining Data-In-Motion with Modern Data A... - Hortonworks
Join the Hortonworks product team as they introduce HDF 3.1 and the core components for a modern data architecture to support stream processing and analytics.
You will learn about the three main themes that HDF addresses:
Developer productivity
Operational efficiency
Platform interoperability
https://hortonworks.com/webinar/series-hdf-3-1-redefining-data-motion-modern-data-architectures/
Unlock Value from Big Data with Apache NiFi and Streaming CDC - Hortonworks
Apache NiFi is an easy to use, powerful, and reliable system to process and distribute data. It provides an end-to-end platform that can collect, curate, analyze, and act on data in real time, on-premises or in the cloud, with a drag-and-drop visual interface. It's being used across industries on large amounts of data that had been stored in isolation, which made collaboration and analysis difficult.
Join industry experts from Hortonworks and Attunity as they explain how Apache NiFi and streaming CDC technology provides a distributed, resilient platform for unlocking the value of data in new ways.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply applying machine learning to just any symbolic structure is not sufficient to really harvest the gains of NeSy. Those gains will only be realized when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this is illustrated with link prediction over knowledge graphs, but the argument is general.
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation introduction
UI automation sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
DevOps and Testing slides at DASA ConnectKari Kakkonen
Slides by me and Rik Marselis from the DASA Connect conference on 30 May 2024. We discuss what testing is, what agile testing is, and finally what testing in DevOps looks like. We also ran an enjoyable workshop with the participants, exploring different ways to think about quality and testing in the different parts of the DevOps infinity loop.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud, and open source: exploring how these areas are likely to mature and develop over the short and long term, and considering how organisations can position themselves to adapt and thrive.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
TALK TRACK
Hortonworks is powering the future of data.
Whether from data at rest or data in motion, we help our customers tap into all the data.
We give the world’s leading companies and government agencies actionable intelligence to do things that were never before possible.
[NEXT SLIDE]
Hadoop Distribution with YARN: Allows a central source of data across all mediums of ingestion and interaction
Existing & Legacy Systems can Contribute and Participate: Can extend the life of existing and legacy systems through enriched data
New Applications interact with the Data Lake, not each other: Next-generation apps are built around the data and can deliver to customers and partners
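As an illustrative sketch only (the paths, database, and table names below are assumptions, not taken from this talk track), the "apps interact with the data lake, not each other" pattern might look like this in PySpark: each source system lands its data once, and downstream applications query the shared tables rather than calling one another.

```python
# Hypothetical sketch: land data from two source systems into a shared data lake
# and expose it as tables that any downstream application can query.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("data-lake-landing")
         .enableHiveSupport()
         .getOrCreate())

# Ingest: each source system writes to the lake once (paths are illustrative).
crm = spark.read.json("hdfs:///landing/crm/customers/")                      # legacy CRM export
cdrs = spark.read.csv("hdfs:///landing/mediation/cdrs/", header=True, inferSchema=True)

# Assumes a "lake" database already exists in the metastore.
crm.write.mode("overwrite").saveAsTable("lake.customers")
cdrs.write.mode("overwrite").saveAsTable("lake.cdrs")

# Consume: new applications query the lake directly instead of calling each other.
usage_per_customer = spark.sql("""
    SELECT c.customer_id, c.segment, COUNT(*) AS call_count
    FROM lake.customers c
    JOIN lake.cdrs d ON c.customer_id = d.customer_id
    GROUP BY c.customer_id, c.segment
""")
usage_per_customer.show()
```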
Sentiment from Social Media
How happy are customers with carrier?
Churn Score if another member of social network leaves carrier?
Customer Churn Score from Call Center Voice-2-text & voice tone:
How happy are customers with carrier?
What events have occurred that have introduced risk of customer churn?
CDR with Social Media
Bad QoS that has the customer unhappy with the carrier? Where to prioritize repairs and care service to the customer to reduce the chance of churn?
CDR for location with Social Media
Where was customer when bad experience occurred?
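A minimal sketch of the "CDR for location with social media" idea, assuming hypothetical lake.cdrs and lake.social_posts tables with the column names shown in the comments; the talk track itself does not prescribe any particular code.

```python
# Hypothetical sketch: correlate negative social posts with the customer's
# call-detail records around the same time to locate a bad experience.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

cdrs = spark.table("lake.cdrs")              # customer_id, cell_id, dropped, event_ts
posts = spark.table("lake.social_posts")     # customer_id, sentiment, post_ts

bad_experiences = (
    posts.filter(F.col("sentiment") < 0)     # unhappy posts only
    .join(cdrs, "customer_id")
    # keep CDRs from the hour before the complaint was posted
    .filter(F.col("event_ts").between(
        F.col("post_ts") - F.expr("INTERVAL 1 HOUR"), F.col("post_ts")))
    .groupBy("cell_id")
    .agg(F.count("*").alias("complaint_linked_calls"),
         F.sum(F.col("dropped").cast("int")).alias("dropped_calls"))
    .orderBy(F.desc("dropped_calls"))
)
bad_experiences.show()   # cells where repairs would most reduce churn risk
```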
Enrich the Data Lake with all available customer interaction points
Develop and Revise the Models that best capture your customer metrics:
Net Promoter Score, Customer Churn Score, Appetite for Information, Customer Target Profile, etc.
Deploy the Current State of the Customer with reference data and real-time customer metrics
Computed (via ML) from the Data Lake into a Dynamic Customer Profile that is available across business groups and systems
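One way to read the develop/revise/deploy notes above, strictly as a hedged sketch: train a churn model on features computed from the data lake and publish the scores to a profile table that other systems can read. The feature, table, and column names are assumptions, not part of the original talk track.

```python
# Hypothetical sketch: train a churn model on lake-derived features and
# publish the scores as a shared "dynamic customer profile" table.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

# Assumed columns: customer_id, dropped_call_rate, avg_sentiment, nps, churned (0/1 label)
features = spark.table("lake.customer_features")

assembler = VectorAssembler(
    inputCols=["dropped_call_rate", "avg_sentiment", "nps"],
    outputCol="features")
train = assembler.transform(features)

model = LogisticRegression(labelCol="churned", featuresCol="features").fit(train)

# Score every customer; "probability" is a (p_no_churn, p_churn) vector, kept as-is here.
profile = (model.transform(train)
           .select("customer_id", "probability")
           .withColumnRenamed("probability", "churn_probability"))
profile.write.mode("overwrite").saveAsTable("lake.dynamic_customer_profile")
```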
What the industry cares about:
Hadoop has moved out of test
Enterprise Use Case
Closer to Production
Business impact
Enterprise + Real-time VS Sqoop + Batch
Attunity + Replicate
High-performance connectivity to Hadoop through native APIs for data ingest and publication
Automated schema generation in HCatalog
Drag & drop configuration with Click-2-Replicate design
High-speed data load options:
Full reload with overwrite
Insert-only appends
Change Data Capture (CDC)
In-memory data filtering and transformation
Monitoring dashboard with web-based metrics, alerts and log file management
One of the reasons several large technology companies trust and rely on Attunity for their own solutions is the robust CDC capability that Replicate provides.
There are several options that are built into the product that provide flexible and optimized ways to implement change data capture.
In addition to applying transactions in real-time and in order, Replicate can handle varying volumes of changes on the source systems by applying the changes in optimized batches to improve throughput and reduce latency.
To provide high-speed data loads into data warehouse appliances, Replicate integrates with native data warehouse loaders for fast ingestion into the target, after which changes are merged in the target. It does not rely on suboptimal ODBC for loading data into the warehouse systems.
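The following is not Attunity Replicate's internal implementation, only a generic sketch of the batch-apply pattern described above: stage a batch of change records, keep the latest change per key, and merge the result into the target table. Table and column names are assumptions.

```python
# Generic sketch of batch CDC apply (not Attunity Replicate's internals):
# take the latest change per key from a staged batch and merge it into the target.
from pyspark.sql import SparkSession, functions as F, Window

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

target  = spark.table("dw.customers")               # current state of the target
changes = spark.table("staging.customer_changes")   # op ('I'/'U'/'D'), change_ts, plus row columns

# Keep only the most recent change per primary key within this batch.
latest = (changes
          .withColumn("rn", F.row_number().over(
              Window.partitionBy("customer_id").orderBy(F.desc("change_ts"))))
          .filter("rn = 1").drop("rn"))

# Apply: remove keys touched by the batch, then add back surviving inserts/updates.
# Assumes latest, after dropping op/change_ts, has the same columns as the target.
merged = (target.join(latest.select("customer_id"), "customer_id", "left_anti")
          .unionByName(latest.filter("op != 'D'").drop("op", "change_ts")))

merged.write.mode("overwrite").saveAsTable("dw.customers_merged")
```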
Attunity also recently added support for writing changes in a message-encoded format that can be published to Kafka message brokers.
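On the consuming side, change events published to a Kafka topic could be read with the kafka-python client as sketched below; the topic name and JSON message layout are assumptions, not Attunity's documented format.

```python
# Hypothetical sketch: consume change events published to a Kafka topic.
# Topic name and message fields are assumptions, not Attunity's documented format.
import json
from kafka import KafkaConsumer   # pip install kafka-python

consumer = KafkaConsumer(
    "replicate.customer.changes",
    bootstrap_servers="broker1:9092",
    group_id="profile-updater",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    change = message.value
    # e.g. {"op": "U", "table": "customers", "key": {"customer_id": 42}, "after": {...}}
    print(change["op"], change.get("table"), change.get("key"))
```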