With the growth of Apache Kafka adoption across major streaming initiatives in large organizations, the operational and visibility challenges associated with Kafka are on the rise as well. Kafka users want better visibility into what is going on in their clusters and in the stream flows across producers, topics, brokers, and consumers.
With no tools on the market that readily address the challenges of Kafka operations teams, development teams, and security/governance teams, Hortonworks Streams Messaging Manager is a game-changer.
https://hortonworks.com/webinar/curing-kafka-blindness-hortonworks-streams-messaging-manager/
Delivering Real-Time Streaming Data for Healthcare Customers: Clearsense (Hortonworks)
For years, the healthcare industry has struggled with data scarcity and latency. Clearsense solved the problem by building a solution on the open-source Hortonworks Data Platform (HDP) while bringing decades' worth of clinical expertise. Clearsense delivers smart, real-time streaming data to its healthcare customers, enabling mission-critical data to feed clinical decisions.
https://hortonworks.com/webinar/delivering-smart-real-time-streaming-data-healthcare-customers-clearsense/
We have introduced several new features as well as delivered some significant updates to keep the platform tightly integrated and compatible with HDP 3.0.
https://hortonworks.com/webinar/hortonworks-dataflow-hdf-3-2-release-raises-bar-operational-efficiency/
Johns Hopkins - Using Hadoop to Secure Access Log Events (Hortonworks)
In this webinar, we talk with experts from Johns Hopkins as they share techniques and lessons learned from a real-world Apache Hadoop implementation.
https://hortonworks.com/webinar/johns-hopkins-using-hadoop-securely-access-log-events/
Catch a Hacker in Real-Time: Live Visuals of Bots and Bad Guys (Hortonworks)
Cybersecurity today is a big data problem. There’s a ton of data landing on you faster than you can load it, let alone search it. To make sense of it, we need to act on data in motion, using both machine learning and the most advanced pattern-recognition system on the planet: your SOC analysts. Advanced visualization makes your analysts more efficient and helps them find the hidden gems (or bombs) in masses of logs and packets.
https://hortonworks.com/webinar/catch-hacker-real-time-live-visuals-bots-bad-guys/
Blockchain with Machine Learning Powered by Big Data: Trimble Transportation ... (Hortonworks)
Trimble Transportation Enterprise is a leading provider of enterprise software to over 2,000 transportation and logistics companies. It has designed an architecture that leverages Hortonworks big data solutions and machine learning models to power multiple blockchains, improving operational efficiency, cutting costs, and enabling strategic partnerships.
https://hortonworks.com/webinar/blockchain-with-machine-learning-powered-by-big-data-trimble-transportation-enterprise/
IBM + Hortonworks = Transformation of the Big Data Landscape (Hortonworks)
Last year IBM and Hortonworks jointly announced a deep, strategic partnership. Join us as we take a close look at what the partnership has accomplished and the joint road ahead with industry-leading analytics offerings.
View the webinar here: https://hortonworks.com/webinar/ibmhortonworks-transformation-big-data-landscape/
Making Enterprise Big Data Small with Ease (Hortonworks)
Every division in an organization builds its own database to keep track of its business. As the organization grows, those individual databases grow as well. Each database can become siloed, with no awareness of the data held in the others.
https://hortonworks.com/webinar/making-enterprise-big-data-small-ease/
Data proliferation from 7+ billion humans and 20+ billion devices in every walk of life has been the focus of the last decade. With the velocity, variety, and volume of data, every data organization’s goal has shifted to protecting and monetizing data from a rapidly growing network of IoT-embedded objects and sensors.
One of the tried-and-true business continuity methodologies for storing and retrieving vast amounts of data has been replication of Hadoop systems across hybrid clouds and geographically distributed data centers. Replication resembles a blockchain that uses autonomous smart contracts instantiated on the metadata and data, so that the replicated data follows a single source of truth.
Replicas can be maintained across geographically distributed data centers, giving the business continuity plan greater risk tolerance for those datasets. With intelligent predictive analytics based on usage patterns, dynamic tiering policies can be triggered on the datasets to add real value to the data. The data's temperature is used to move it between hot/warm/cold/archival storage according to configurable policies, significantly reducing total cost of ownership.
Users in 2018 and beyond demand absolute availability of data as and when they desire. Dynamic data-access management is a fundamental concept in satisfying the business continuity plan. Seamless, enterprise-grade disaster recovery in support of the business continuity use case poses significant challenges around replicating security and governance on datasets. In this talk we discuss how this challenge can be addressed to support seamless replication and disaster recovery for Hadoop-scale data. NIRU ANISETI, Product Manager, Hortonworks
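The temperature-based tiering described above can be sketched as a simple age-based policy. The tier names match the hot/warm/cold/archival scheme in the abstract, but the thresholds and function names below are illustrative assumptions, not actual HDP configuration:

```python
from datetime import datetime, timedelta

# Illustrative tier thresholds (hypothetical values, not HDP defaults).
TIER_POLICY = [
    ("hot", timedelta(days=7)),     # accessed within the last week
    ("warm", timedelta(days=30)),   # accessed within the last month
    ("cold", timedelta(days=365)),  # accessed within the last year
]

def storage_tier(last_access: datetime, now: datetime) -> str:
    """Map a dataset's last-access time to a storage tier."""
    age = now - last_access
    for tier, max_age in TIER_POLICY:
        if age <= max_age:
            return tier
    return "archival"  # anything older falls through to archival storage

now = datetime(2018, 6, 1)
print(storage_tier(datetime(2018, 5, 30), now))  # hot
print(storage_tier(datetime(2017, 1, 1), now))   # archival
```

In a real deployment the policy would be driven by the platform's usage metrics rather than a hard-coded table, but the decision shape is the same.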
TIME SERIES: APPLYING ADVANCED ANALYTICS TO INDUSTRIAL PROCESS DATA (Hortonworks)
Thanks to sensors and the Internet of Things, industrial processes now generate a sea of data. But are you plumbing its depths to find the insight it contains, or are you just drowning in it? Now, Hortonworks and Seeq team up to bring advanced analytics and machine learning to time-series data from manufacturing and industrial processes.
Using Spark Streaming and NiFi for the next generation of ETL in the enterprise (DataWorks Summit)
In recent years, big data has moved from batch processing to stream-based processing, since no one wants to wait hours or days to gain insights. Dozens of stream processing frameworks exist today, and the same trend that played out in the batch-based big data realm has taken place in the streaming world: nearly every streaming framework now supports higher-level relational operations.
On paper, combining Apache NiFi, Kafka, and Spark Streaming provides a compelling architectural option for building your next-generation ETL data pipeline in near real time. But what does it take to deploy and operationalize this in an enterprise production environment?
The newer Spark Structured Streaming provides fast, scalable, fault-tolerant, end-to-end exactly-once stream processing with elegant code samples, but is that the whole story?
We discuss the drivers and expected benefits of changing the existing event processing systems. In presenting the integrated solution, we will explore the key components of using NiFi, Kafka, and Spark, then share the good, the bad, and the ugly when trying to adopt these technologies into the enterprise. This session is targeted toward architects and other senior IT staff looking to continue their adoption of open source technology and modernize ingest/ETL processing. Attendees will take away lessons learned and experience in deploying these technologies to make their journey easier.
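To make the "higher-level relational operations" mentioned above concrete, here is a minimal, framework-free sketch of one such operation: a keyed, event-time windowed count, the kind of aggregation Spark Structured Streaming expresses with `groupBy(window(...), key).count()`. This is an illustration of the concept, not Spark code:

```python
from collections import defaultdict

def windowed_counts(events, window_secs=10):
    """Count events per key per fixed event-time window.

    events: iterable of (timestamp_secs, key) tuples.
    Returns {(window_start, key): count}.
    """
    counts = defaultdict(int)
    for ts, key in events:
        # Assign each event to the window containing its event time.
        window_start = (ts // window_secs) * window_secs
        counts[(window_start, key)] += 1
    return dict(counts)

stream = [(1, "sensor-a"), (3, "sensor-a"), (12, "sensor-b"), (14, "sensor-a")]
print(windowed_counts(stream))
# {(0, 'sensor-a'): 2, (10, 'sensor-b'): 1, (10, 'sensor-a'): 1}
```

A real streaming engine adds the hard parts this sketch omits: unbounded input, late and out-of-order data, watermarks, and fault-tolerant state.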
Accelerating Data Science and Real Time Analytics at Scale (Hortonworks)
Gaining business advantage from big data is moving beyond efficient storage and deep analytics on diverse data sources toward applying AI methods and analytics to streaming data, to capture insights and take action at the edge of the network.
https://hortonworks.com/webinar/accelerating-data-science-real-time-analytics-scale/
Driving Digital Transformation Through Global Data Management (Hortonworks)
Using your data smarter and faster than your peers could be the difference between dominating your market and merely surviving. Organizations are investing in IoT, big data, and data science to drive better customer experiences and create new products, yet these projects often stall in the ideation phase due to a lack of global data management processes and technologies. Your new data architecture may be taking shape around you, but your goal of globally managing, governing, and securing your data across a hybrid, multi-cloud landscape can remain elusive. Learn how industry leaders are developing their global data management strategies to drive innovation and ROI.
Presented at Gartner Data and Analytics Summit
Speaker:
Dinesh Chandrasekhar
Director of Product Marketing, Hortonworks
HDF 3.1 pt. 2: A Technical Deep-Dive on New Streaming Features (Hortonworks)
Hortonworks DataFlow (HDF) is the complete solution for the most complex streaming architectures of today’s enterprises. More than 20 billion IoT devices are active on the planet today, and thousands of use cases across IIoT, healthcare, and manufacturing warrant capturing data in motion and delivering actionable intelligence right now. “Data decay” happens in a matter of seconds in today’s digital enterprises.
To meet all the needs of such fast-moving businesses, we have made significant enhancements and new streaming features in HDF 3.1.
https://hortonworks.com/webinar/series-hdf-3-1-technical-deep-dive-new-streaming-features/
Risk listening: monitoring for profitable growth (DataWorks Summit)
Historically, insurers used 50-, 100-, and 500-year flood models for risk evaluation and pricing. The extreme weather events we have experienced in 2017 alone prove how dated these methods really are.
To better understand their customers and potential current/future liability claims, forward thinking insurers are monitoring, analyzing, and integrating external data sources in real time (weather feeds from USGS.gov, news and stock feeds, and satellite imagery, to name just a few). By integrating and injecting these new data sources into their risk models and underwriting, insurers are better able to identify their risk appetites and effectively price.
The session will include real-world case studies, such as how a global P&C insurer now quickly analyzes and monitors 50,000 customers and targets, gaining new insights into the market, and how a global reinsurance and specialty company now leverages digital news channels to monitor its risk portfolio for early-warning claims indicators that help drive down loss costs. CINDY MAIKE, VP Industry Solutions, GM of Insurance, Hortonworks, Inc.
Big Traffic, Big Trouble: Big Data Security Analytics (DataWorks Summit)
With the rise of IoT and the increasing complexity of applications, clouds, networks, and infrastructure, the battle to keep your data and your infrastructure safe from attackers is getting harder. As groups of bad actors collaborate, sharing information and offering illegal access and botnets-as-a-service, terabit-scale attacks can be launched cheaply. Meanwhile, it’s hard to find enough security analysts to catch and prevent these attacks.
This is where community collaboration and open-source efforts like Apache Metron come in. Metron presents a comprehensive framework for application and network security, built on the highly scalable data management and processing stacks of Apache Hadoop and open-source streaming analytics tools (i.e., Apache NiFi and Apache Kafka). Advanced features like profiling, machine learning, and visualization work with real-time streaming detection to make your SOC analysts more efficient, while the intrinsic extensibility of open source helps your data scientists get security insights out of the lab and into production fast.
We will discuss and demonstrate how some real-world businesses and managed service providers are using Apache Metron to identify and solve security threats at scale, and some approaches and ideas for how the platform can fit into your security architecture.
Enterprise Data Science at Scale Meetup - IBM and Hortonworks - Oct 2017 (Hortonworks)
View the recording of the meet up, including the live demos, here: https://www.youtube.com/watch?v=uaJWB3K8lkg
Data science holds tremendous potential for organizations to uncover new insights and new drivers of revenue and profitability. Big data has brought the promise of doing data science at scale to enterprises; however, this promise also comes with challenges, as data scientists must continuously learn and collaborate. Data scientists have many tools at their disposal: notebooks like Jupyter and Apache Zeppelin, IDEs such as RStudio, languages like R, Python, and Scala, and frameworks like Apache Spark. Given all these choices, how do you best collaborate to build your model and then work through the development lifecycle to deploy it from test into production?
Why Data Science on Big Data?
In this meetup we will cover the attributes of a modern data science platform that empowers data scientists to build models using all the data in their data lake and fosters continuous learning and collaboration. We will show a demo of Apache Zeppelin, Apache Spark, Apache Livy, and Apache Hadoop, with a focus on integration, security, and model deployment and management.
Data Science at Scale DEMO
The demo will cover the data science lifecycle: develop a model in a team environment, train the model with all the data on a Hadoop cluster, and deploy the model into production. The model will be a Spark ML model.
Practical ML with Apache Spark
To deliver machine learning solutions, data scientists not only need to fit models but also perform familiar tasks: data collection and wrangling, labelling, feature extraction and transformation, model tuning and evaluation, and so on. Apache Spark provides a unified solution for all of this under the same framework.
For example, one can use Spark SQL to generate training data from different sources and then pass it directly to MLlib for feature engineering and model tuning, instead of using Hive/Pig for the first half and then downloading the data to a single machine to train models in R. The latter is actually very common in practice but painful to maintain. Spark MLlib makes life easier for data scientists and machine learning engineers so that they can focus on building better ML models and applications.
We will discuss the underlying principles required to develop practical machine learning and data science pipelines and share some hands-on experience using Apache Spark to solve typical machine learning and data science problems. We will also briefly discuss how Spark MLlib faces challenges from other machine learning libraries such as TensorFlow and XGBoost.
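The ML pipeline idea referenced above (feature transformers and a final estimator chained together, fit once, then reused) can be sketched framework-free. This is the estimator/transformer pattern that Spark MLlib's `Pipeline` follows, but the classes below are toy illustrations, not MLlib's API:

```python
class Pipeline:
    """Minimal sketch of the estimator/transformer pipeline pattern
    (the same shape as Spark MLlib's Pipeline, but framework-free)."""
    def __init__(self, stages):
        self.stages = stages

    def fit(self, data):
        # Fit each stage, then feed its transformed output to the next stage.
        for stage in self.stages:
            stage.fit(data)
            data = stage.transform(data)
        return self

    def transform(self, data):
        # Apply every fitted stage in order.
        for stage in self.stages:
            data = stage.transform(data)
        return data

class Scaler:
    """Toy transformer: rescale values to [0, 1] using stats learned in fit()."""
    def fit(self, data):
        self.lo, self.hi = min(data), max(data)

    def transform(self, data):
        return [(x - self.lo) / (self.hi - self.lo) for x in data]

pipe = Pipeline([Scaler()]).fit([0, 5, 10])
print(pipe.transform([0, 5, 10]))  # [0.0, 0.5, 1.0]
```

The payoff of the pattern is that the whole chain, including learned statistics like `lo`/`hi`, can be saved after training and replayed unchanged on serving data, which is exactly the train-to-production handoff the demo walks through.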
Journey to Big Data: Main Issues, Solutions, Benefits (DataWorks Summit)
One of the most fruitful aspects of being chosen as a partner bank is having a backend that can communicate directly with the client's system. Through such partnerships, Banco Santander has run a large number of third-party applications on its banking system for many years.
Banking is one of the most regulated sectors, which makes day-to-day operations all the more interesting. Adapting the system to regulation is not optional; it is mandatory. For today's banks, internal and external audits are an important routine. Furthermore, since SCIB is a global player, this pattern is repeated in each country where the group operates.
The result is a really interesting mix: various kinds of third-party systems installed in many countries, coexisting with our centralized system, exchanging information, being adjusted manually, with data aggregated and integrated at the back office. Spaghetti comes to mind when you consider that all this data comes and goes continuously. Increasingly, regulators and auditors expect to be able to identify the origin of each piece of data precisely, which often means intervening manually to fully trace the data.
Javier Nieto, of Banco Santander's corporate and investment banking architecture and innovation department, discusses the integration challenges Santander faced when building an on-demand data lake as part of its move to global big data.
Every business is looking for a game-changer in data science, machine learning, and AI. Most organizations are also looking for ways to tap into open-source and commercial data science tools such as Python, RStudio, Apache Spark, Jupyter, and Zeppelin notebooks, to accelerate predictive and machine learning model building and deployment while leveraging the scale, security and governance of the Hortonworks Data Platform and other commercial platforms.
Ana Maria Echeverri will demonstrate how to accelerate data science, machine learning, and deep learning workflows using IBM Watson Studio, an integrated environment for data scientists, application developers, and subject matter experts. This suite of tools allows teams to collaboratively connect to data, wrangle that data, and use it to build, train, and deploy models at scale using open-source skills (e.g., Python), while expanding into cognitive capabilities through access to Watson APIs for building AI-powered applications. If you love Python and want to tap into the power of IBM Watson, this is the session for you.
Data Acquisition Automation for NiFi in a Hybrid Cloud environment – the Path... (DataWorks Summit)
Liberty Global is one of the world’s largest international TV and broadband companies, operating in multiple European countries, with tens of millions of TV, broadband internet, telephony, and mobile subscribers.
The Data Solutions team's journey started last year with a strategic project to implement a state-of-the-art hybrid cloud big data platform. In this talk, the team's manager and platform architect present the data acquisition journey, which began with NiFi flows built on a simple Get-Put pattern and, in its final iteration, produced a solution capable of generating complex flows automatically, paving the path to a DataOps way of working.
Unlock Value from Big Data with Apache NiFi and Streaming CDC (Hortonworks)
Apache NiFi is an easy-to-use, powerful, and reliable system for processing and distributing data. It provides an end-to-end platform that can collect, curate, analyze, and act on data in real time, on premises or in the cloud, through a drag-and-drop visual interface. It is being used across industries on large amounts of data that had been stored in isolation, which made collaboration and analysis difficult.
Join industry experts from Hortonworks and Attunity as they explain how Apache NiFi and streaming CDC technology provides a distributed, resilient platform for unlocking the value of data in new ways.
Benefits of Transferring Real-Time Data to Hadoop at Scale (Hortonworks)
Today’s Big Data teams demand solutions designed for Big Data that are optimized, secure, and adaptable to changing workload requirements. Working together, Hortonworks, IBM, and Attunity have designed an integrated solution that transfers large volumes of data to a platform that can handle rapid ingest, processing and analysis of data of all types from all sources, at scale.
https://hortonworks.com/webinar/benefits-transferring-real-time-data-hadoop-scale-ibm-hortonworks-attunity/
With the explosive growth of IoT, the edge is predicted to reach 25 billion connected devices by 2020. Yet enterprises are still struggling to manage the hundreds of devices they have already deployed, not from a device management standpoint but from a data management standpoint. Enterprises are unable to capture and process data directly from edge devices for immediate analysis and real-time actionable intelligence, and without that capability, IoT initiatives fail to succeed. How can an enterprise gather real-time data from edge devices? How can it change the behavior of its data collection processes? How can it ensure that data will be analyzed immediately? How can it understand the lineage of data from edge to enterprise? How can it manage edge agents? What is an edge management hub? Attend this session for a detailed understanding of the key edge management challenges and how to address them with the right solutions.
Data and analytics are at the heart of the digital transformation. Implementing a modern data platform can be challenging; moreover, success requires a shift in culture. Andreas will discuss the ways Munich Re drives cultural and technological change within their company, focusing on three key elements: people, processes, and technology. What does it mean to be a data-driven organization? How can we provide self-service analytics to our internal and external customers in an agile way? How do we get the most value out of our big data lake? How does Munich Re balance technology and culture to meet the data demands of their business?
Speaker
Andreas Kohlmaier, Head of Data Engineering, Munich Re
IIoT + Predictive Analytics: Solving for Disruption in Oil & Gas and Energy &... (DataWorks Summit)
The electric grid has evolved from linear generation and delivery to a complex mix of renewables, prosumer-generated electricity, and electric vehicles (EVs). Smart meters are generating loads of data. As a result, traditional forecasting models and technologies can no longer adequately predict supply and demand. Extreme weather, an aging infrastructure, and the burgeoning worldwide population are also contributing to increased outage frequency.
In oil and gas, commodity pricing pressures, resulting workforce reductions, and the need to reduce failures, automate workflows, and increase operational efficiencies are driving operators to shift analytics initiatives to advanced data-driven applications to complement physics-based tools.
While sensored equipment and legacy surveillance applications are generating massive amounts of data, just 2% is understood and being leveraged. Operationalizing it along with external datasets enables a shift from time-based to condition-based maintenance, better forecasting and dramatic reductions in unplanned downtime.
The session includes plenty of real-world anecdotes. For example, how an electric power holding company reduced the time it took to investigate energy theft from six months to less than one hour, producing theft leads in minutes and an expected multi-million dollar ROI. How a global offshore contract drilling services provider implemented an open source IIoT solution across its fleet of assets in less than a year, enabling remote monitoring, predictive analytics and maintenance.
Key takeaways:
• How are new processes for data collection, storage and democratization making it accessible and usable at scale?
• Beyond time series data, what other data types are important to assess?
• What advantage are open source technologies providing to enterprises deploying IIoT?
• Why is collaboration important across industrial verticals to increase IIoT open source adoption?
Speaker
Kenneth Smith, General Manager, Energy, Hortonworks
Hortonworks DataFlow (HDF) 3.3 - Taking Stream Processing to the Next Level (Hortonworks)
The HDF 3.3 release delivers several exciting enhancements and new features. The most noteworthy is added support for Kafka 2.0 and Kafka Streams.
https://hortonworks.com/webinar/hortonworks-dataflow-hdf-3-3-taking-stream-processing-next-level/
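For readers unfamiliar with Kafka Streams, its canonical example is a continuously updated word count: each record's value is split into words, grouped by word, and counted, with an update emitted per change. The sketch below is a framework-free Python illustration of that data flow, not the actual Kafka Streams Java DSL:

```python
from collections import defaultdict

def running_word_count(lines):
    """Emit (word, running_count) updates, Kafka Streams word-count style."""
    counts = defaultdict(int)
    updates = []
    for line in lines:                        # each record's value is a line
        for word in line.lower().split():     # flatMap: line -> words
            counts[word] += 1                 # groupBy(word).count() state
            updates.append((word, counts[word]))  # downstream changelog
    return updates

print(running_word_count(["hello streams", "hello kafka"]))
# [('hello', 1), ('streams', 1), ('hello', 2), ('kafka', 1)]
```

In real Kafka Streams the input is an unbounded topic, the counts live in a fault-tolerant state store, and the updates flow to an output topic; the per-record update semantics are the same.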
Making Enterprise Big Data Small with EaseHortonworks
Every division in an organization builds its own database to keep track of its business. When the organization becomes big, those individual databases grow as well. The data from each database may become silo-ed and have no idea about the data in the other database.
https://hortonworks.com/webinar/making-enterprise-big-data-small-ease/
Data proliferation from 7+ billion humans and 20+ billion devices from every walk of life has been the focus in the last decade. With the velocity, variety and volume of data, every data organization’s goal shifted to protecting and monetizing data from rapidly growing network of IOT embedded objects and sensors.
One of the true and tried business continuity methodology of storing and retrieving vast amount of data has been through replication of Hadoop systems on hybrid clouds and in geographically distributed data centers. Replication is similar to Blockchain using autonomous smart contracts instantiated on the metadata and data so that the replicated data follows a single source of truth.
Replicas can be maintained across geographically distributed data centers giving greater risk tolerance capabilities to the businesses continuity plan for the data-sets. With intelligent predictive analytics based on usage patterns, dynamic tiering policies can be triggered on the data sets to provide true value-add to the data. The temperature of the data is used to move data between hot/warm/cold/archival storage based on configurable policies leading to greater reduction in total cost of ownership.
Users in 2018 and beyond demand absolute availability of data as and when they desire. The dynamic data access management is fundamental concept to satisfy the business continuity plan. Seamless enterprise-grade disaster recovery to support business continuity use case has significant challenges around replicating security and governance on data-sets. In this talk we will discuss how the above challenge can be addressed for supporting seamless replication and disaster recovery for Hadoop-scale data. NIRU ANISETI, Product Manager, Hortonworks
TIME SERIES: APPLYING ADVANCED ANALYTICS TO INDUSTRIAL PROCESS DATAHortonworks
Thanks to sensors and the Internet of Things, industrial processes now generate a sea of data. But are you plumbing its depths to find the insight it contains, or are you just drowning in it? Now, Hortonworks and Seeq team to bring advanced analytics and machine learning to time-series data from manufacturing and industrial processes.
Using Spark Streaming and NiFi for the next generation of ETL in the enterpriseDataWorks Summit
In recent years, big data has moved from batch processing to stream-based processing since no one wants to wait hours or days to gain insights. Dozens of stream processing frameworks exist today and the same trend that occurred in the batch-based big data processing realm has taken place in the streaming world so that nearly every streaming framework now supports higher level relational operations.
On paper, combining Apache NiFi, Kafka, and Spark Streaming provides a compelling architecture option for building your next generation ETL data pipeline in near real time. What does this look like in an enterprise production environment to deploy and operationalized?
The newer Spark Structured Streaming provides fast, scalable, fault-tolerant, end-to-end exactly-once stream processing with elegant code samples, but is that the whole story?
We discuss the drivers and expected benefits of changing the existing event processing systems. In presenting the integrated solution, we will explore the key components of using NiFi, Kafka, and Spark, then share the good, the bad, and the ugly when trying to adopt these technologies into the enterprise. This session is targeted toward architects and other senior IT staff looking to continue their adoption of open source technology and modernize ingest/ETL processing. Attendees will take away lessons learned and experience in deploying these technologies to make their journey easier.
Accelerating Data Science and Real Time Analytics at ScaleHortonworks
Gaining business advantages from big data is moving beyond just the efficient storage and deep analytics on diverse data sources to using AI methods and analytics on streaming data to catch insights and take action at the edge of the network.
https://hortonworks.com/webinar/accelerating-data-science-real-time-analytics-scale/
Driving Digital Transformation Through Global Data ManagementHortonworks
Using your data smarter and faster than your peers could be the difference between dominating your market and merely surviving. Organizations are investing in IoT, big data, and data science to drive better customer experience and create new products, yet these projects often stall in ideation phase to a lack of global data management processes and technologies. Your new data architecture may be taking shape around you, but your goal of globally managing, governing, and securing your data across a hybrid, multi-cloud landscape can remain elusive. Learn how industry leaders are developing their global data management strategy to drive innovation and ROI.
Presented at Gartner Data and Analytics Summit
Speaker:
Dinesh Chandrasekhar
Director of Product Marketing, Hortonworks
HDF 3.1 pt. 2: A Technical Deep-Dive on New Streaming FeaturesHortonworks
Hortonworks DataFlow (HDF) is the complete solution that addresses the most complex streaming architectures of today’s enterprises. More than 20 billion IoT devices are active on the planet today and thousands of use cases across IIOT, Healthcare and Manufacturing warrant capturing data-in-motion and delivering actionable intelligence right NOW. “Data decay” happens in a matter of seconds in today’s digital enterprises.
To meet all the needs of such fast-moving businesses, we have made significant enhancements and new streaming features in HDF 3.1.
https://hortonworks.com/webinar/series-hdf-3-1-technical-deep-dive-new-streaming-features/
Risk listening: monitoring for profitable growthDataWorks Summit
Historically, insurers used 50-, 100-, and 500-year flood models for risk evaluation and pricing. The extreme weather events we have experienced in 2017 alone prove how dated these methods really are.
To better understand their customers and potential current/future liability claims, forward-thinking insurers are monitoring, analyzing, and integrating external data sources in real time (weather feeds from USGS.gov, news and stock feeds, and satellite imagery, to name just a few). By integrating and injecting these new data sources into their risk models and underwriting, insurers are better able to identify their risk appetite and price effectively.
The session will include real-world case studies, including how a global P&C insurer is now quickly analyzing and monitoring 50,000 customers and targets, gaining new insights into the market. Another example is a global reinsurance and specialty company that now leverages digital news channels to monitor its risk portfolio for early-warning claims indicators to help drive down loss costs.
Speaker: Cindy Maike, VP Industry Solutions, GM of Insurance, Hortonworks, Inc.
Big Traffic, Big Trouble: Big Data Security AnalyticsDataWorks Summit
With the rise of IoT and the increasing complexity of applications, clouds, networks, and infrastructure, the battle to keep your data and your infrastructure safe from attackers is getting harder. As groups of bad actors collaborate, sharing information and offering illegal access and botnets as a service, terabit-scale attacks can be launched cheaply. Meanwhile, it’s hard to find enough security analysts to catch and prevent these attacks.
This is where community collaboration and open source efforts like Apache Metron come in. Metron presents a comprehensive framework for application and network security, built on the highly scalable data management and processing stacks of Apache Hadoop and open source streaming analytics tools (i.e., Apache NiFi and Apache Kafka). Advanced features like profiling, machine learning, and visualization work with real-time streaming detection to make your SOC analysts more efficient, while the intrinsic extensibility of open source helps your data scientists get security insights out of the lab and into production fast.
We will discuss and demonstrate how some real-world businesses and managed service providers are using Apache Metron to identify and solve security threats at scale, and some approaches and ideas for how the platform can fit into your security architecture.
Enterprise Data Science at Scale Meetup - IBM and Hortonworks - Oct 2017 Hortonworks
View the recording of the meet up, including the live demos, here: https://www.youtube.com/watch?v=uaJWB3K8lkg
Data science holds tremendous potential for organizations to uncover new insights and drivers of revenue and profitability. Big data has brought the promise of doing data science at scale to enterprises; however, this promise also comes with challenges for data scientists to continuously learn and collaborate. Data scientists have many tools at their disposal, such as notebooks like Jupyter and Apache Zeppelin, IDEs such as RStudio, languages like R, Python, and Scala, and frameworks like Apache Spark. Given all these choices, how do you best collaborate to build your model and then work through the development lifecycle to deploy it from test into production?
Why Data Science on Big Data?
In this meetup we will cover the attributes of a modern data science platform that empowers data scientists to build models using all the data in their data lake and fosters continuous learning and collaboration. We will show a demo of Apache Zeppelin, Apache Spark, Apache Livy, and Apache Hadoop with a focus on integration, security, and model deployment and management.
Data Science at Scale DEMO
The demo will cover the data science life cycle: develop the model in a team environment, train the model with all the data on a Hadoop cluster, and deploy the model into production. The model will be a Spark ML model.
Practical ML with Apache Spark
To deliver machine learning solutions, data scientists not only need to fit models but also perform familiar tasks such as data collection and wrangling, labelling, feature extraction and transformation, model tuning and evaluation, etc. Apache Spark provides a unified solution for all of this under the same framework.
For example, one can use Spark SQL to generate training data from different sources and then pass it directly to MLlib for feature engineering and model tuning, instead of using Hive/Pig for the first half and then downloading the data to a single machine to train models in R. The latter is actually very common in practice but painful to maintain. Spark MLlib makes life easier for data scientists and machine learning engineers so that they can focus on building better ML models and applications.
We will discuss the underlying principles required to develop practical machine learning and data science pipelines and show some hands-on experience using Apache Spark to solve typical machine learning and data science problem. We will also have a short discussion about how Spark MLlib faces challenges from other machine learning libraries such as TensorFlow and XGBoost.
Journey to Big Data: Main Issues, Solutions, BenefitsDataWorks Summit
One of the most fruitful aspects of being chosen as a partner bank is that you can have a backend that communicates directly with the client's systems. Through such partnerships, Banco Santander has been running a large number of third-party applications on its banking system for many years.
Banking is also among the most heavily regulated sectors, which makes day-to-day operations all the more interesting: adapting systems to regulation is not optional, it is mandatory. For today's banks, internal and external audits are an important routine. Furthermore, considering that SCIB is a global player, this pattern is repeated in each country where the group operates.
The result is a really interesting mix: various kinds of third-party systems installed in many countries, coexisting with a centralized system, exchanging information with each other, being adjusted manually, with data aggregated and integrated at the back office. Spaghetti comes to mind when considering that all this data comes and goes without delay. More and more, regulators and auditors expect to identify the origin of each piece of data precisely, which often means manual intervention just to fully locate it.
Javier Nieto, of Banco Santander's corporate and investment banking architecture and innovation department, talks about the integration challenges Santander experienced when building an on-demand data lake as part of its move to global big data.
Every business is looking for a game-changer in data science, machine learning, and AI. Most organizations are also looking for ways to tap into open-source and commercial data science tools such as Python, RStudio, Apache Spark, Jupyter, and Zeppelin notebooks, to accelerate predictive and machine learning model building and deployment while leveraging the scale, security and governance of the Hortonworks Data Platform and other commercial platforms.
Ana Maria Echeverri will demonstrate how to accelerate data science, machine learning, and deep learning workflows by using IBM Watson Studio, an integrated environment for data scientists, application developers, and subject matter experts. This suite of tools allows teams to collaboratively connect to data, wrangle that data, and use it to build, train, and deploy models at scale while using open source skills (e.g., Python) and expanding into cognitive capabilities through access to Watson APIs to build AI-powered applications. If you love Python and want to tap into the power of IBM Watson, this is the session for you.
Data Acquisition Automation for NiFi in a Hybrid Cloud environment – the Path...DataWorks Summit
Liberty Global is one of the world’s largest international TV and broadband companies, operating in multiple European countries with tens of millions of TV, broadband internet, telephony, and mobile subscribers.
The Data Solutions team's journey started last year with a strategic project that aimed to implement a state-of-the-art hybrid-cloud big data platform. In this talk, the team’s manager and platform architect present their data acquisition journey, which began with implementing NiFi flows using a simple Get-Put pattern and, in its final iteration, produced a solution capable of generating complex flows automatically, leading the way to a DataOps way of working.
Unlock Value from Big Data with Apache NiFi and Streaming CDCHortonworks
Apache NiFi is an easy to use, powerful, and reliable system to process and distribute data. It provides an end-to-end platform that can collect, curate, analyze, and act on data in real time, on-premises or in the cloud, with a drag-and-drop visual interface. It’s being used across industries on large amounts of data that had been stored in isolation, which made collaboration and analysis difficult.
Join industry experts from Hortonworks and Attunity as they explain how Apache NiFi and streaming CDC technology provides a distributed, resilient platform for unlocking the value of data in new ways.
Benefits of Transferring Real-Time Data to Hadoop at ScaleHortonworks
Today’s Big Data teams demand solutions designed for Big Data that are optimized, secure, and adaptable to changing workload requirements. Working together, Hortonworks, IBM, and Attunity have designed an integrated solution that transfers large volumes of data to a platform that can handle rapid ingest, processing and analysis of data of all types from all sources, at scale.
https://hortonworks.com/webinar/benefits-transferring-real-time-data-hadoop-scale-ibm-hortonworks-attunity/
With the explosive growth of IoT, the edge is predicted to grow to 25 billion connected devices by 2020. But enterprises are still struggling to manage the hundreds of devices they have already deployed, not so much from a device management standpoint as from a data management standpoint. Enterprises are unable to capture and process data directly from edge devices for immediate analysis and real-time actionable intelligence, and without that, IoT initiatives fail to become successful. How can an enterprise gather real-time data from edge devices? How can it change the behavior of such data collection processes? How can it ensure that data will be analyzed immediately? How can it understand the lineage of the data from edge to enterprise? How can it manage edge agents? What is an edge management hub? Attend this session to get a detailed understanding of key edge management challenges and how to address them with the right solutions.
Data and analytics are at the heart of the digital transformation. Implementing a modern data platform can be challenging; moreover, success requires a shift in culture. Andreas will discuss the ways Munich Re drives cultural and technological change within their company, focusing on three key elements: people, processes, and technology. What does it mean to be a data-driven organization? How can we provide self-service analytics to our internal and external customers in an agile way? How do we get the most value out of our big data lake? How does Munich Re balance technology and culture to meet the data demands of their business?
Speaker
Andreas Kohlmaier, Head of Data Engineering, Munich Re
IIoT + Predictive Analytics: Solving for Disruption in Oil & Gas and Energy &...DataWorks Summit
The electric grid has evolved from linear generation and delivery to a complex mix of renewables, prosumer-generated electricity, and electric vehicles (EVs). Smart meters are generating loads of data. As a result, traditional forecasting models and technologies can no longer adequately predict supply and demand. Extreme weather, an aging infrastructure, and the burgeoning worldwide population are also contributing to increased outage frequency.
In oil and gas, commodity pricing pressures, resulting workforce reductions, and the need to reduce failures, automate workflows, and increase operational efficiencies are driving operators to shift analytics initiatives to advanced data-driven applications to complement physics-based tools.
While instrumented equipment and legacy surveillance applications are generating massive amounts of data, just 2% of it is understood and being leveraged. Operationalizing it along with external datasets enables a shift from time-based to condition-based maintenance, better forecasting, and dramatic reductions in unplanned downtime.
The session includes plenty of real-world anecdotes. For example, how an electric power holding company reduced the time it took to investigate energy theft from six months to less than one hour, producing theft leads in minutes and an expected multi-million dollar ROI. How a global offshore contract drilling services provider implemented an open source IIoT solution across its fleet of assets in less than a year, enabling remote monitoring, predictive analytics and maintenance.
Key takeaways:
• How are new processes for data collection, storage and democratization making it accessible and usable at scale?
• Beyond time series data, what other data types are important to assess?
• What advantage are open source technologies providing to enterprises deploying IIoT?
• Why is collaboration important across industrial verticals to increase IIoT open source adoption?
Speaker
Kenneth Smith, General Manager, Energy, Hortonworks
Hortonworks DataFlow (HDF) 3.3 - Taking Stream Processing to the Next LevelHortonworks
The HDF 3.3 release delivers several exciting enhancements and new features, but the most noteworthy of them is the added support for Kafka 2.0 and Kafka Streams.
https://hortonworks.com/webinar/hortonworks-dataflow-hdf-3-3-taking-stream-processing-next-level/
Registry is a central metadata repository that allows users to collaboratively use schema definitions for stream processing.
Stream Analytics Manager provides a framework to build streaming applications faster and more easily.
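The core idea behind a schema registry can be sketched as a minimal in-memory version store (a conceptual sketch in plain Python; the names `register` and `latest` are illustrative, not the actual Schema Registry API):

```python
class MiniSchemaRegistry:
    """Toy version store: each subject (e.g. a Kafka topic) maps to a
    list of schema versions, so producers and consumers can agree on
    the shape of the records they exchange."""

    def __init__(self):
        self._subjects = {}

    def register(self, subject, schema):
        """Add a new schema version for a subject; return its version number."""
        versions = self._subjects.setdefault(subject, [])
        if versions and versions[-1] == schema:
            return len(versions)  # unchanged schema: no new version
        versions.append(schema)
        return len(versions)

    def latest(self, subject):
        """Return (version, schema) for the most recent registration."""
        versions = self._subjects[subject]
        return len(versions), versions[-1]


registry = MiniSchemaRegistry()
v1 = registry.register("truck-events", {"fields": ["id", "speed"]})
v2 = registry.register("truck-events", {"fields": ["id", "speed", "lat", "lon"]})
print(v1, v2, registry.latest("truck-events")[0])  # 1 2 2
```

The real Schema Registry adds compatibility checking between versions, which is what lets a producer evolve a schema without breaking downstream consumers.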
Running Apache NiFi with Apache Spark : Integration OptionsTimothy Spann
A walk-through of various options for integrating Apache Spark and Apache NiFi in one smooth dataflow. There are now several options for interfacing between Apache NiFi and Apache Spark, using Apache Kafka and Apache Livy.
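One of those interfacing options, submitting a Spark job through Apache Livy's REST API, can be sketched as follows. Livy's batch endpoint is `POST /batches`; the host, port, and file paths below are illustrative assumptions:

```python
import json
from urllib import request

LIVY_URL = "http://livy-host:8998"  # assumed host/port for illustration


def build_batch_payload(app_file, args=()):
    """Build the JSON body for Livy's POST /batches endpoint,
    which submits a Spark application (here a PySpark script)."""
    return {"file": app_file, "args": list(args)}


def submit_batch(payload):
    """Submit the job; requires a live Livy server to actually run."""
    req = request.Request(
        LIVY_URL + "/batches",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return request.urlopen(req)


# A NiFi flow could trigger this, handing Spark the data it just landed.
payload = build_batch_payload("hdfs:///jobs/enrich_flowfile.py",
                              ["--date", "2017-08-08"])
print(json.dumps(payload))
```

This is the "NiFi hands off to Spark via Livy" pattern; the Kafka option instead has NiFi publish records that a Spark streaming job consumes.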
The Atlas/Ranger integration represents a paradigm shift for big data governance and security. Enterprises can now implement dynamic classification-based security policies in addition to role-based security. Ranger’s centralized platform empowers data administrators to define security policy based on Atlas metadata tags or attributes and apply this policy in real time to the entire hierarchy of data assets, including databases, tables, and columns.
Make Streaming IoT Analytics Work for YouHortonworks
Download Hortonworks DataFlow (HDF™) here - http://hortonworks.com/downloads/#dataflow. Making Streaming IoT Analytics Work For You With Apache NiFi, Storm, Raspberry Pi and more.
Its Finally Here! Building Complex Streaming Analytics Apps in under 10 min w...DataWorks Summit
Imagine if you could build and deploy an end-to-end complex streaming analytics app on a streaming engine like Storm or Flink that did the following:
1. Joining Streams
2. Aggregations over Windows (Time or Count based)
3. Complex Event Processing
4. Pattern Matching
5. Model scoring.
Now imagine implementing and deploying this without writing a single line of code in under 10 mins.
Imagine no more; it is indeed here. In this talk, we will discuss an exciting open source project led by Hortonworks on building and deploying streaming applications using a drag and drop paradigm.
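To make item 2 above concrete, a time-based tumbling-window aggregation can be sketched in plain Python (a conceptual sketch of the semantics, not the Storm or Flink API):

```python
from collections import defaultdict


def tumbling_window_counts(events, window_secs):
    """Group (timestamp, key) events into fixed, non-overlapping time
    windows and count occurrences of each key per window."""
    counts = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        window_start = (ts // window_secs) * window_secs
        counts[window_start][key] += 1
    return {w: dict(kc) for w, kc in counts.items()}


# Hypothetical speeding events from two trucks, timestamps in seconds.
events = [(1, "truck-7"), (4, "truck-7"), (9, "truck-7"),
          (12, "truck-9"), (14, "truck-7")]
print(tumbling_window_counts(events, 10))
# {0: {'truck-7': 3}, 10: {'truck-9': 1, 'truck-7': 1}}
```

A real engine adds what this sketch omits: out-of-order event handling, watermarks, and distributed state, which is exactly why a drag-and-drop builder on top of Storm or Flink is attractive.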
Curing the Kafka blindness—Streams Messaging ManagerDataWorks Summit
Companies who use Kafka today struggle with monitoring and managing Kafka clusters. Kafka is a key backbone of IoT streaming analytics applications. The challenge is understanding what is going on overall in the Kafka cluster including performance, issues and message flows. No open source tool caters to the needs of different users that work with Kafka: DevOps/developers, platform team, and security/governance teams. See how the new Hortonworks Streams Messaging Manager enables users to visualize their entire Kafka environment end-to-end and simplifies Kafka operations.
In this session learn how SMM visualizes the intricate details of how Apache Kafka functions in real time while simultaneously surfacing every nuance of tuning, optimizing, and measuring input and output. SMM will assist users to quickly understand and operate Kafka while providing the much-needed transparency that sophisticated and experienced users need to avoid all the pitfalls of running a Kafka cluster.
Develop and deploy streaming analytics applications visually, with bindings for multiple streaming engines and sources/sinks, a rich set of streaming operators, and operational lifecycle management. Streaming Analytics Manager makes it easy to develop and monitor streaming applications, and it also provides analytics on the data that’s being processed by the streaming application.
Stream processing has become the de facto standard for building real-time ETL and stream analytics applications. We see batch workloads move into stream processing to act on the data and derive insights faster. With the explosion of data with "perishable insights", such as IoT and machine-generated data, stream processing plus predictive analytics is driving tremendous business value. This is evidenced by the explosion of stream processing frameworks, from the proven and evolving Apache Storm to newer frameworks such as Apache Flink, Apache Apex, and Spark Streaming.
Today, users have to choose among these frameworks and try to understand the benefits of each; beyond that, they have to learn new APIs and operationalize their applications. To create value faster, we are introducing a new open source tool, Streamline. It is a self-service framework that eases building streaming applications and deploying them across whichever frameworks/engines users prefer, in a snap. It simplifies integration with machine learning models for scoring and classification of data for predictive analytics, and it provides an elegant way to build analytics dashboards that derive business insights from streaming data and let business users consume them easily.
In this talk, we will outline the fundamentals of real-time stream processing and demonstrate Streamline capabilities to show how it simplifies building real-time streaming analytics applications.
Speaker:
Priyank Shah, Staff Software Engineer, Hortonworks
Future of Data New Jersey - HDF 3.0 Deep DiveAldrin Piri
Presentation on new features of HDF 3.0 presented on August 8, 2017 to the Future of Data: New Jersey Meetup group. This event was hosted by Honeywell in Morris Plains, NJ.
https://www.meetup.com/futureofdata-princeton/events/240972326/
Presentation from Future of Data Boston Meetup on Oct 24, 2017.
Streaming data is rich with insights, but those insights can be hard to find because streaming applications are difficult to develop and deploy. During this presentation we will show how to build and deploy a complex streaming application in a few minutes using open source tools. First, we will build an application using Streaming Analytics Manager and Schema Registry that ingests data into Apache Druid. Then we will use Apache Superset to build beautiful, informative dashboards.
Curing the Kafka Blindness – Streams Messaging ManagerDataWorks Summit
Companies who use Kafka today struggle with monitoring and managing Kafka clusters. Kafka is a key backbone of IoT streaming analytics applications. The challenge is understanding what is going on overall in the Kafka cluster including performance, issues and message flows. No open source tool caters to the needs of different users that work with Kafka: DevOps/developers, platform team, and security/governance teams. See how the new Hortonworks Streams Messaging Manager enables users to visualize their entire Kafka environment end-to-end and simplifies Kafka operations.
In this session learn how SMM visualizes the intricate details of how Apache Kafka functions in real time while simultaneously surfacing every nuance of tuning, optimizing, and measuring input and output. SMM will assist users to quickly understand and operate Kafka while providing the much-needed transparency that sophisticated and experienced users need to avoid all the pitfalls of running a Kafka cluster.
Speaker: Andrew Psaltis, Principal Solution Engineer, Hortonworks
Predicting Customer Experience through Hadoop and Customer Behavior GraphsHortonworks
Enhancing a customer experience has become essential for communication service providers to effectively manage customer churn and build a strong, long lasting relationship with their customers. This has become increasingly challenging as customer interactions occur across multiple channels. Understanding customer behavior and how it applies across channels is the key to ensuring the best level of experience is achieved by each customer.
In this webinar Hortonworks and Apigee discuss how service providers can capture and visualize customer behavior across customer interaction points like call center events (IVR and chat) and combine it with network data to predict customer calls and patterns of digital channel abandonment, using Hadoop along with predictive analysis and visualization tools.
We will identify ways to develop a 360-degree view of a customer’s household through an HDP data lake, visualize customer interaction patterns, and predict expected behavior using Apigee Insights to identify and initiate the next best action for a customer, ensuring a superior level of customer experience.
Similar to Curing Kafka Blindness with Hortonworks Streams Messaging Manager (20)
IoT Predictions for 2019 and Beyond: Data at the Heart of Your IoT StrategyHortonworks
Forrester forecasts* that direct spending on the Internet of Things (IoT) will exceed $400 billion by 2023. From manufacturing and utilities to oil & gas and transportation, IoT improves visibility, reduces downtime, and creates opportunities for entirely new business models.
But successful IoT implementations require far more than simply connecting sensors to a network. The data generated by these devices must be collected, aggregated, cleaned, processed, interpreted, understood, and used. Data-driven decisions and actions must be taken, without which an IoT implementation is bound to fail.
https://hortonworks.com/webinar/iot-predictions-2019-beyond-data-heart-iot-strategy/
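The collect-aggregate-clean-interpret chain described above can be sketched as a pipeline of small functions (the sensor readings, device names, and thresholds are hypothetical, purely for illustration):

```python
def collect(raw_readings):
    """Parse 'device_id,temperature' strings arriving from edge devices."""
    out = []
    for line in raw_readings:
        device, temp = line.split(",")
        out.append({"device": device, "temp": float(temp)})
    return out


def clean(readings):
    """Drop readings outside a plausible physical range (sensor glitches)."""
    return [r for r in readings if -40.0 <= r["temp"] <= 125.0]


def interpret(readings, threshold=80.0):
    """Flag devices whose temperature suggests action is needed."""
    return sorted({r["device"] for r in readings if r["temp"] > threshold})


raw = ["pump-1,72.5", "pump-2,999.0", "pump-3,85.2"]
print(interpret(clean(collect(raw))))  # ['pump-3']
```

Each stage in a production IoT pipeline (ingest, cleansing, analytics) plays one of these roles at scale; the point of the forecast above is that skipping any stage leaves the device data unused.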
Getting the Most Out of Your Data in the Cloud with CloudbreakHortonworks
Cloudbreak, part of Hortonworks Data Platform (HDP), simplifies provisioning and cluster management within any cloud environment to help your business on its path to a hybrid cloud architecture.
https://hortonworks.com/webinar/getting-data-cloud-cloudbreak-live-demo/
Interpretation Tool for Genomic Sequencing Data in Clinical EnvironmentsHortonworks
The healthcare industry—with its huge volumes of big data—is ripe for the application of analytics and machine learning. In this webinar, Hortonworks and Quanam present a tool that uses machine learning and natural language processing in the clinical classification of genomic variants to help identify mutations and determine clinical significance.
Watch the webinar: https://hortonworks.com/webinar/interpretation-tool-genomic-sequencing-data-clinical-environments/
In this exclusive Premier Inside Out, you will hear from Druid committer Slim Bouguerra, Staff Software Engineer and Product Manager Will Xu. These Hortonworkers will explain the vision of these components, review new features, share some best practices and answer your questions.
View the webinar here: https://hortonworks.com/webinar/hortonworks-premier-apache-druid/
Hortonworks DataFlow (HDF) 3.1 - Redefining Data-In-Motion with Modern Data A...Hortonworks
Join the Hortonworks product team as they introduce HDF 3.1 and the core components for a modern data architecture to support stream processing and analytics.
You will learn about the three main themes that HDF addresses:
Developer productivity
Operational efficiency
Platform interoperability
https://hortonworks.com/webinar/series-hdf-3-1-redefining-data-motion-modern-data-architectures/
4 Essential Steps for Managing Sensitive DataHortonworks
As data grows in data lakes, so do security and compliance risks, which stem from storing and processing sensitive data. In this webinar, we will go through a four-step process to proactively discover and manage sensitive data within big data environments.
https://hortonworks.com/webinar/4-essential-steps-managing-sensitive-data-data-lake/
5 Steps to Create a Company Culture that Embraces the Power of DataHortonworks
A business culture that relies on gut checks and feelings for business decisions is a hard hurdle to overcome. Company culture is often the biggest barrier to moving a company toward data-driven decisions. There's a way to get there when the change is driven by company leaders. Here's how you do that:
1. Get comfortable with softer data sets
2. Must come from top-down
3. A structure where goals are clear
4. Right role for technology
5. Clear stewardship around data
Exploring the Heated-and Completely Unnecessary- Data Lake DebateHortonworks
When it comes to data lakes and data warehouses, there’s no shortage of controversy: Is one better than the other? The real answer is, there’s no need for heated debate—a data lake actually complements the data warehouse.
Integrating a data lake with your EDW is really just an evolution of architecture that can provide you with a cross-environment that allows you to explore data creatively to yield great business insights. However, there’s a trick to making it work: EDW optimization.
https://hortonworks.com/webinar/exploring-heated-completely-unnecessary-data-lake-debate/
In this webinar, we will hear from Mark McKinney, Director – Enterprise Data Analytics at Sprint about the business drivers, key success factors, and challenges faced while undertaking Sprint’s data modernization journey. You will hear how Sprint set about establishing a Hadoop data lake, ingested data from multiple environments, and overcame key skill shortages. You will also hear from Diyotta and Hortonworks about best practices for modernizing your data architecture to support transformational business initiatives.
https://hortonworks.com/webinar/sprints-data-modernization-journey/
Modernize Your Existing EDW with IBM Big SQL & Hortonworks Data PlatformHortonworks
Find out how Hortonworks and IBM help you address these challenges and enable success in optimizing your existing EDW environment.
https://hortonworks.com/webinar/modernize-existing-edw-ibm-big-sql-hortonworks-data-platform/
Streamline Apache Hadoop Operations with Apache Ambari and SmartSenseHortonworks
Apache Ambari 2.5 helps customers simplify the experience for provisioning, managing, monitoring, securing and troubleshooting Hadoop deployments. Find out how the combination of Ambari and SmartSense delivers a path to success to help IT get Hadoop up and running effectively. The end result – you get the full business impact management and benefits of Big Data for your organization.
https://hortonworks.com/webinar/streamline-apache-hadoop-operations-apache-ambari-smartsense/
How to Architect an Omnichannel Retail Solution to Achieve Real-Time Custome...Hortonworks
Hortonworks and SAS team up to discuss:
*What technology is being used in this Global Retailer use case to collect and synchronize customer data across all channels
*How the Retailer is able to analyze the data in real-time
*How they predict and optimize customer interactions to improve sales
*The open source and cloud technologies that are on the horizon
Webinar Sept. 7, 2017
https://hortonworks.com/webinar/architect-omnichannel-retail-solution-achieve-real-time-customer-insights/
The Life of a Hadoop Administrator, with and without SmartSenseHortonworks
This cartoon shows how easy it is to troubleshoot and resolve support cases quickly, saving a Hadoop administrator hours of time. By providing up-front access to the diagnostic information needed to resolve issues, SmartSense helps reduce the back-and-forth nature of troubleshooting that consumes valuable time and resources.
Enterprise Data Warehouse Optimization: 7 Keys to SuccessHortonworks
You have a legacy system that no longer meets the demands of your current data needs, and replacing it isn’t an option. But don’t panic: modernizing your traditional enterprise data warehouse is easier than you may think.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions), and sensitivity analyses.
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
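As a toy illustration of what a power-flow computation produces, here is a textbook two-bus DC power flow in plain Python (this illustrates the underlying physics only; it is not the PowSyBl or pypowsybl API):

```python
def dc_flow(theta_from, theta_to, reactance):
    """Active power flow (per unit) on a line under the DC approximation:
    P = (theta_from - theta_to) / x, with bus voltage angles in radians."""
    return (theta_from - theta_to) / reactance


# Two-bus example: slack bus at angle 0 rad, generator bus at 0.05 rad,
# connected by a line with reactance 0.1 pu.
p = dc_flow(0.05, 0.0, 0.1)
print(p)  # 0.5 pu flowing from the generator bus toward the slack bus
```

PowSyBl's load-flow engines solve the full AC version of this problem over networks with thousands of buses, which is why a library rather than a hand calculation is needed in practice.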
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers, without pulling teeth or pulling your hair out. Practical tips and strategies for successful relationship building that leads to closing the deal.
Connector Corner: Automate dynamic content and events by pushing a button - DianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
- Create a campaign using Mailchimp with merge tags/fields
- Send an interactive Slack channel message (using buttons)
- Have the message received by managers and peers, along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
- Your campaign sent to target colleagues for approval
- If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
- If the “Reject” button is pushed instead, colleagues are alerted via a Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
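The human-in-the-loop branch described above can be simulated in a few lines: an "Approve" click files a ticket for the design team, a "Reject" click alerts colleagues. The function name and dict payloads are hypothetical; the real workflow uses UiPath Integration Service connectors for Slack, Jira/Zendesk, and Mailchimp.

```python
# Illustrative simulation of the approve/reject branching described
# above. Payload shapes and names are made up for the sketch; the
# actual workflow calls Integration Service connectors instead.

def handle_review(button: str, campaign: str) -> dict:
    """Route a reviewer's button click to the right follow-up action."""
    if button == "Approve":
        # In the real workflow this would call the Jira/Zendesk connector.
        return {"action": "create_ticket",
                "team": "marketing-design",
                "summary": f"Finalize approved campaign: {campaign}"}
    elif button == "Reject":
        # In the real workflow this would post a Slack channel message.
        return {"action": "slack_alert",
                "channel": "#campaign-reviews",
                "text": f"Campaign '{campaign}' was rejected."}
    raise ValueError(f"Unknown button: {button}")

print(handle_review("Approve", "Spring Newsletter"))
print(handle_review("Reject", "Spring Newsletter"))
```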
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
DevOps and Testing slides at DASA Connect - Kari Kakkonen
Slides by me and Rik Marselis from the DASA Connect conference on 30 May 2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We closed with a lovely workshop in which the participants explored different ways to think about quality and testing in the different parts of the DevOps infinity loop.
Kubernetes & AI - Beauty and the Beast!? @ KCD Istanbul 2024 - Tobias Schneck
As AI technology pushes into IT, I asked myself, as an “infrastructure container Kubernetes guy”, how this fancy AI technology gets managed from an infrastructure operations point of view. Is it possible to apply our beloved cloud-native principles as well? What benefits could the two technologies bring to each other?
Let me take these questions and guide you on a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to make AI work on our own infrastructure from an enterprise perspective. I will give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already gotten working for real.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality - Inflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
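The "self-healing tests" idea mentioned in the bullets above boils down to a simple pattern: when a primary element locator stops matching after a UI change, fall back to alternative locators instead of failing the test outright. The sketch below is a conceptual illustration of that pattern, not Inflectra's implementation; the page model and locator strings are made up.

```python
# Toy sketch of self-healing element lookup: try locators in order
# of preference and "heal" by falling back when the first no longer
# matches. A real tool would query a live DOM, not a dict.

def find_element(page: dict, locators: list[str]) -> tuple[str, str]:
    """Try each locator in order; return (locator_used, element)."""
    for locator in locators:
        if locator in page:  # stands in for querying the DOM
            return locator, page[locator]
    raise LookupError(f"No locator matched: {locators}")

# Simulated page where the original element id changed in a redesign,
# so the id-based locator fails and the CSS fallback "heals" the test.
page = {"css:.submit-btn": "<button>Submit</button>"}
locators = ["id:submit", "css:.submit-btn", "xpath://button[text()='Submit']"]

used, element = find_element(page, locators)
print(f"healed with {used}: {element}")
```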
UiPath Test Automation using UiPath Test Suite series, part 3 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
- UI automation introduction
- UI automation sample
- Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... - Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
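The general mechanism behind Slack notification automations like those described above is posting a JSON payload to a Slack incoming webhook. The sketch below shows that mechanism with a placeholder webhook URL and an invented payload builder; it is a generic illustration, not Sidekick Solutions' product code.

```python
# Generic Slack incoming-webhook notification sketch. The webhook URL
# is a placeholder and build_alert's fields are invented for the
# example; Slack's webhook API itself expects a JSON body with "text".
import json
import urllib.request

WEBHOOK_URL = "https://hooks.slack.com/services/YOUR/WEBHOOK/URL"  # placeholder

def build_alert(record_name: str, status: str) -> dict:
    """Build a Slack message payload for a case-record status change."""
    return {"text": f"Record '{record_name}' changed status to: {status}"}

def post_to_slack(payload: dict) -> None:
    """Send the payload to the incoming webhook (needs a real URL)."""
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # raises on non-2xx responses

print(build_alert("Intake #1042", "Approved"))
```

The same payload-building step would feed a Microsoft Teams webhook with only minor changes to the JSON shape.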
SMM UI powered by first-class REST services via the SMM REST Admin Server
Monitoring REST endpoints can be used to integrate with APM/alerting/ticketing solutions
Powered by Apache Knox
Installed via an Ambari Management Pack on a target HDP/HDF cluster where the Kafka service is running
Supports both HDP & HDF platforms
Open-source / AGPL-licensed project led by Hortonworks
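To make the APM/alerting integration point concrete, here is a sketch of how an external tool might consume a monitoring REST endpoint like the ones SMM exposes. The JSON schema and threshold logic below are hypothetical placeholders, not SMM's documented API; only the integrate-by-polling-REST pattern comes from the slide.

```python
# Sketch of an alerting tool consuming a Kafka-monitoring REST
# endpoint. The payload shape ("consumerGroups", "lag", "name") is
# invented for illustration and is not SMM's real response schema.
import json

def evaluate_consumer_lag(metrics_json: str, max_lag: int = 1000) -> list[str]:
    """Return alert messages for consumer groups whose lag exceeds max_lag."""
    metrics = json.loads(metrics_json)
    alerts = []
    for group in metrics.get("consumerGroups", []):
        if group["lag"] > max_lag:
            alerts.append(f"ALERT: group '{group['name']}' lag={group['lag']}")
    return alerts

# Example payload as it might be fetched over Knox-proxied HTTPS;
# the shape is illustrative only.
sample = json.dumps({"consumerGroups": [
    {"name": "billing", "lag": 12},
    {"name": "fraud-detect", "lag": 4200},
]})
print(evaluate_consumer_lag(sample))
```

An alerting or ticketing system would poll such an endpoint on a schedule and open incidents from the returned messages.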