Big data represents a significant paradigm shift in enterprise technology. Big data radically changes the nature of the data management profession as it introduces new concerns about the volume, velocity and variety of corporate data.
This session goes beyond the well-known examples of huge companies such as Facebook or Google with millions of users. Instead, it explains the "big" paradigm and technology shift for your company. See several use cases showing how big data enables small and medium-sized companies to gain insight into new business opportunities (and threats), and how big data stands to transform much of what the modern enterprise is today.
Learn how to solve the unique challenges of big data without your own research lab or a team of big data experts in your company. Learn how to implement the relevant use cases for your company at low cost and effort by using open source frameworks, which greatly simplify working with big data.
Big Data beyond Apache Hadoop - How to integrate ALL your Data – Kai Wähner
Big data represents a significant paradigm shift in enterprise technology. Big data radically changes the nature of the data management profession as it introduces new concerns about the volume, velocity and variety of corporate data.
Apache Hadoop is the open source de facto standard for implementing big data solutions on the Java platform. Hadoop consists of its kernel, MapReduce, and the Hadoop Distributed File System (HDFS). A challenging task is to send all data to Hadoop for processing and storage (and then get it back to your application later), because in practice data comes from many different applications (SAP, Salesforce, Siebel, etc.) and data stores (files, SQL, NoSQL), uses different technologies and concepts for communication (e.g. HTTP, FTP, RMI, JMS), and comes in different data formats such as CSV, XML, binary data, or other alternatives.
This session shows the powerful combination of Apache Hadoop and Apache Camel to solve this challenging task. Learn how to use any conceivable data with Hadoop – without lots of complex or redundant boilerplate code. Besides supporting the integration of all these technologies and data formats, Apache Camel offers an easy, standardized DSL to transform, split, or filter incoming data using the Enterprise Integration Patterns (EIP). Therefore, Apache Hadoop and Apache Camel are a perfect match for processing big data on the Java platform.
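To make the Camel-plus-Hadoop idea concrete, here is a minimal sketch of such a route in Camel's Java DSL. The FTP host, credentials, file layout, and HDFS address are all hypothetical placeholders, and it assumes the camel-ftp and camel-hdfs2 components are on the classpath; the talk's own demo may be structured differently.

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class FtpToHdfsRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Poll CSV files from a (hypothetical) FTP server, split them line by
        // line (Splitter EIP), drop the header row (Filter EIP), and write
        // each record to HDFS via the camel-hdfs2 component.
        from("ftp://user@ftp.example.com/orders?password=secret")
            .split(body().tokenize("\n"))                       // Splitter EIP
            .filter(body().isNotEqualTo("id;amount;currency"))  // Filter EIP
            .to("hdfs2://namenode:9000/data/orders");           // placeholder HDFS endpoint
    }

    public static void main(String[] args) throws Exception {
        DefaultCamelContext context = new DefaultCamelContext();
        context.addRoutes(new FtpToHdfsRoute());
        context.start();
        Thread.sleep(60_000); // let the route poll for a minute, then shut down
        context.stop();
    }
}
```

Swapping FTP for JMS, HTTP, or a database endpoint changes only the `from(...)` URI; the rest of the route stays the same, which is exactly the boilerplate reduction the session describes.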
How to create intelligent Business Processes thanks to Big Data (BPM, Apache ... – Kai Wähner
BPM is established, the tools are stable, and many companies use it successfully. However, today's business processes are based on data from relational databases or web services, and humans make decisions based on this information. Companies also use business intelligence and other tools to analyze their data. Yet business processes are executed without access to this important information, because technical challenges arise when trying to integrate large volumes of data from many different sources into the BPM engine. Additionally, bad data quality due to duplication, incompleteness, and inconsistency prevents humans from making good decisions. That is the status quo. Companies miss a huge opportunity here!
This session explains how to achieve intelligent business processes that use big data to improve performance and outcomes. A live demo shows how big data can be integrated into business processes easily - just with open source tooling. In the end, the audience will understand why BPM needs big data to achieve intelligent business processes.
Intelligent Business Process Management Suites (iBPMS) - The Next-Generation ... – Kai Wähner
I gave a talk at ECSA 2014 in Vienna: The Next-Generation BPM for a Big Data World: Intelligent Business Process Management Suites (iBPMS), sometimes also abbreviated iBPM. I want to share the slides with you. The slides include an example of how to implement iBPMS easily with the TIBCO middleware stack: TIBCO AMX BPM + BusinessWorks + StreamBase + Tibbr.
"Big Data beyond Apache Hadoop - How to Integrate ALL your Data" - JavaOne 2013Kai Wähner
Big data represents a significant paradigm shift in enterprise technology. Big data radically changes the nature of the data management profession as it introduces new concerns about the volume, velocity and variety of corporate data.
Apache Hadoop is the open source de facto standard for implementing big data solutions on the Java platform. Hadoop consists of its kernel, MapReduce, and the Hadoop Distributed File System (HDFS). A challenging task is to send all data to Hadoop for processing and storage (and then get it back to your application later), because in practice data comes from many different applications (SAP, Salesforce, Siebel, etc.) and data stores (files, SQL, NoSQL), uses different technologies and concepts for communication (e.g. HTTP, FTP, RMI, JMS), and comes in different data formats such as CSV, XML, binary data, or other alternatives.
This session shows different open source frameworks and products to solve this challenging task. Learn how to use any conceivable data with Hadoop – without lots of complex or redundant boilerplate code.
WJAX 2013 Slides online: Big Data beyond Apache Hadoop - How to integrate ALL... – Kai Wähner
Big data represents a significant paradigm shift in enterprise technology. Big data radically changes the nature of the data management profession as it introduces new concerns about the volume, velocity and variety of corporate data. Apache Hadoop is the open source de facto standard for implementing big data solutions on the Java platform. Hadoop consists of its kernel, MapReduce, and the Hadoop Distributed File System (HDFS). A challenging task is to send all data to Hadoop for processing and storage (and then get it back to your application later), because in practice data comes from many different applications (SAP, Salesforce, Siebel, etc.) and data stores (files, SQL, NoSQL), uses different technologies and concepts for communication (e.g. HTTP, FTP, RMI, JMS), and comes in different data formats such as CSV, XML, binary data, or other alternatives. This session shows different open source frameworks and products to solve this challenging task. Learn how to use any conceivable data with Hadoop – without lots of complex or redundant boilerplate code.
Data Warehousing in the Cloud: Practical Migration Strategies – SnapLogic
Dave Wells of Eckerson Group discusses why cloud data warehousing has become popular, the many benefits, and the corresponding challenges. Migrating an existing data warehouse to the cloud is a complex process of moving schema, data, and ETL. The complexity increases when architectural modernization, restructuring of database schema, or rebuilding of data pipelines is needed.
Slides: Proven Strategies for Hybrid Cloud Computing with Mainframes — From A... – DATAVERSITY
Mainframes continue to perform mission-critical transaction processing and contain massive amounts of core business data. But digital transformation initiatives and cloud computing have created both opportunities and challenges for unlocking and utilizing this data. Qlik and AWS will share some of the proven strategies from successful customer deployments across a range of different mainframe to cloud use cases, including legacy application modernization, data analytics, and data migrations.
In this presentation, you will learn how to:
• Replicate very large volumes of mainframe data in real-time to the cloud
• Automate the creation of analytics-ready data lakes and data warehouses
• Achieve a 30% reduction in cost of compute
Learn how to solve the top 3 challenges Snowflake customers face, and what you can do to ensure high-performance, intelligent analytics at any scale. Ideal for those currently using Snowflake and those considering it.
https://www.brighttalk.com/webcast/18317/422499
Enabling digital business with governed data lake – Karan Sachdeva
Digital business is enabled by artificial intelligence, machine learning, and data science. Artificial intelligence and machine learning depend on the right information architecture and data foundation. A governed data lake, combined with a data science platform, gives you the power to take the organization on its digital transformation and AI journey.
Best Practices for Building a Warehouse Quickly – WhereScape
Key factors that influence a successful data warehouse project:
+ Implementing the true development approach
+ Choosing a rapid development product
+ Ensuring data availability
+ Involving key users throughout the whole project
+ Relying on a pragmatic governance framework
+ Utilizing experienced team members
+ Selecting the right hardware and infrastructure technology
Analyzing Billions of Data Rows with Alteryx, Amazon Redshift, and Tableau – DATAVERSITY
Got lots of data? So does Amaysim, a leading Australian telecom provider, with its billions of rows of data. The organization successfully empowers its small team of data analysts with self-service data analytics platforms so they can easily access the data they need, perform advanced analytics, and visualize findings for all stakeholders. Register for this session and learn how Amaysim uses the Alteryx-Redshift-Tableau BI stack to easily and quickly:
- Extract data from their data warehouse and blend and enrich it with other sources
- Give data analytical context by running statistical, predictive, and deep geo-spatial analytics
- Create visualizations from analytics and then update Tableau Workbooks directly from Alteryx, or publish the results in Amazon Redshift, for easy direct access for their stakeholders from Tableau
Hear from Adrian Loong, Alteryx Analytics Certified Expert (ACE), and product marketers from AWS and Alteryx on how organizations can use Alteryx, Amazon Redshift and Tableau to enable data analysts to spin up new self-service analytics instances to enable fast investigation for critical business decisions.
The IBM governed data lake is a value-driven big data platform journey. The journey starts by ingesting a wide variety of data, governing it, and applying data science and machine learning to it to produce actionable insights.
Framework and Product Comparison for Big Data Log Analytics and ITOA – Kai Wähner
IT systems and applications generate more and more machine data due to millions of mobile devices, the Internet of Things, social network users, and other emerging technologies. However, organizations experience challenges when monitoring and managing their IT systems and technology infrastructure. They struggle with network and server monitoring/troubleshooting, security analysis, custom application monitoring and debugging, compliance standards, and more.
This session discusses how to solve the challenges of analyzing terabytes and more of diverse log data to leverage the "digital business" – a term defined by Gartner and others to express that IT is not just a tool to enable a business; IT is the business.
The main part of the session compares different solutions for operational intelligence and log analytics to create the "digital business", such as Splunk, TIBCO LogLogic, and the open source "ELK stack" (Elasticsearch, Logstash, Kibana).
A common use case will be demonstrated in a live demo: monitoring, analyzing, and correlating a complex e-commerce transaction running through different custom applications such as a Java EE web application, an integration middleware, and analytics processes.
The end of the session explains how the discussed solutions differ from Apache Hadoop, and how they can complement each other in a big data architecture.
An introduction to the data mesh and the motivations behind it: the failure modes of previous big data management paradigms. Zhamak Dehghani's proposal compares and contrasts the data mesh with existing approaches to big data management, presenting the technical components that underpin the software architecture.
Introduction to Segment, Analytics API and Customer Data Platform. (Demo: Segment, AWS Redshift, Redash, Segment and GTM Alternatives) (Frontend Fighters Edition)
Recommended links:
https://segment.com/ - Analytics API and Customer Data Platform
https://open.segment.com/ - Open Source Projects of Segment
https://segment.com/docs/ - Documentation of Segment
https://redash.io/ - Open Source Data Dashboard
https://aws.amazon.com/redshift/ - Data Warehouse Solution
https://quicksight.aws/ - Business Analytics Service
https://www.ghostery.com/ - Tracker Detector
Keywords: business agility, tag managers, data-driven
It covers the basics of analytics, the types of analytics, analytics tools and techniques, and a brief case study demonstrating predictive analytics with the decision tree algorithm from machine learning.
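The deck's own case study is not reproduced here; as a toy illustration of the decision-tree idea, the following sketch hand-rolls a decision stump (a one-level decision tree over a single numeric feature) in Java. The churn dataset, the "usage hours" feature, and the class labels are all invented for the example.

```java
import java.util.Arrays;

/** Toy decision stump: a one-level decision tree over one numeric feature. */
public class DecisionStump {
    double threshold;        // learned split point
    boolean aboveIsPositive; // class predicted for values above the threshold

    /** Try every midpoint between sorted feature values; keep the split with fewest errors. */
    void fit(double[] x, boolean[] y) {
        double[] sorted = x.clone();
        Arrays.sort(sorted);
        int best = Integer.MAX_VALUE;
        for (int i = 0; i < sorted.length - 1; i++) {
            double t = (sorted[i] + sorted[i + 1]) / 2;
            for (boolean polarity : new boolean[]{true, false}) {
                int errors = 0;
                for (int j = 0; j < x.length; j++) {
                    boolean predicted = (x[j] > t) == polarity;
                    if (predicted != y[j]) errors++;
                }
                if (errors < best) { best = errors; threshold = t; aboveIsPositive = polarity; }
            }
        }
    }

    boolean predict(double xi) { return (xi > threshold) == aboveIsPositive; }

    public static void main(String[] args) {
        // Hypothetical data: weekly usage hours vs. "did the customer churn?"
        double[] usage  = {0.5, 1.0, 1.5, 6.0, 7.5, 9.0};
        boolean[] churn = {true, true, true, false, false, false};
        DecisionStump stump = new DecisionStump();
        stump.fit(usage, churn);
        System.out.printf("split at %.2f hours/week, above-threshold class = %s%n",
                stump.threshold, stump.aboveIsPositive ? "churn" : "no churn");
        System.out.println("predict churn for 2h/week: " + stump.predict(2.0));
    }
}
```

A real decision tree recursively applies this split search over many features; libraries such as Weka or scikit-learn do this out of the box.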
The quest for the insight-driven enterprise has spurred a mass exodus to the cloud. But cloud data ecosystems can be very complex, with multiple data storage and processing options.
These slides, based on the webinar featuring leading IT analyst firm EMA, Amazon Web Services (AWS), and Trifacta, will help you: understand technology trends that simplify your analytics modernization journey; learn best practices to operationalize data management on AWS; establish operational excellence leveraging AWS data storage and processing; and accelerate time-to-value for analytics projects with data preparation on AWS.
Traditional BI vs. Business Data Lake – A Comparison – Capgemini
Traditional Business Intelligence (BI) systems provide various levels and kinds of analyses on structured data but they are not designed to handle unstructured data.
For these systems Big Data brings big problems because the data that flows in may be either structured or unstructured. That makes them hugely limited when it comes to delivering Big Data benefits.
The way forward is a complete rethink of the way we use BI - in terms of how the data is ingested, stored and analyzed.
More information: http://www.capgemini.com/big-data-analytics/pivotal
Caserta Concepts, Datameer, and Microsoft shared their combined knowledge and a use case on big data, the cloud, and deep analytics. Attendees learned how a global leader in the test, measurement, and control systems market reduced their big data implementations from 18 months to just a few.
Speakers shared how to provide a business user-friendly, self-service environment for data discovery and analytics, and focused on how to extend and optimize Hadoop-based analytics, highlighting the advantages and practical applications of deploying on the cloud for enhanced performance, scalability, and lower TCO.
Agenda included:
- Pizza and Networking
- Joe Caserta, President, Caserta Concepts - Why are we here?
- Nikhil Kumar, Sr. Solutions Engineer, Datameer - Solution use cases and technical demonstration
- Stefan Groschupf, CEO & Chairman, Datameer - The evolving Hadoop-based analytics trends and the role of cloud computing
- James Serra, Data Platform Solution Architect, Microsoft, Benefits of the Azure Cloud Service
- Q&A, Networking
For more information on Caserta Concepts, visit our website: http://casertaconcepts.com/
Webinar: Customer Experience in Banking - a CTO's Perspective – DataStax
Traditional banks are being disrupted by digital natives that can integrate data and provide seamless, real-time customer experiences across all touchpoints. Watch this webinar on-demand, featuring Brett Brunick, Chief Technology Officer of TCF Bank, and Shubhra Sinha, VP of Portfolio Marketing at DataStax. Brunick and Sinha discuss delivering powerful, meaningful, and innovative experiences to banking customers across branches and digital channels.
View recording: https://youtu.be/ptNwfGnQj1o
Explore all DataStax webinars: https://www.datastax.com/resources/webinars
How to Power Innovation with Geo-Distributed Data Management in Hybrid Cloud – DataStax
Most enterprises understand the value of hybrid cloud. In fact, your enterprise is already working in a multi-cloud or hybrid cloud environment, whether you know it or not. View this SlideShare to gain a greater understanding of the requirements of a geo-distributed cloud database in hybrid and multi-cloud environments.
View recording: https://youtu.be/tHukS-p6lUI
Explore all DataStax webinars: https://www.datastax.com/resources/webinars
Advanced Data Visualization and Storytelling Virtual Workshop – CCG
Join CCG and Microsoft for a virtual workshop, hosted by Senior BI Architect Martin Rivera, taking you through a journey of advanced data visualization and storytelling.
Semantic Data Management: a research area of the Chair for Technologies and Management of Digital Transformation at the University of Wuppertal, Germany.
For more information, see here: https://www.tmdt.uni-wuppertal.de/de
EMC World 2014 Breakout: Move to the Business Data Lake – Not as Hard as It S... – Capgemini
Rip and replace isn't a good approach to IT change. When looking at Hadoop, MPP, in-memory, and predictive analytics, the challenge is making them coexist with current solutions.
Learn how Capgemini’s Pivotal CoE utilizes Cloud Foundry and PivotalOne to help businesses adopt new technologies without losing the value of current investments.
Presented by Michael Wood of Pivotal and Steve Jones, Global Director, Strategy, Big Data and Analytics, Capgemini, at EMC World 2014.
Estimating the Total Costs of Your Cloud Analytics Platform – DATAVERSITY
Organizations today need a broad set of enterprise data cloud services with key data functionality to modernize applications and utilize machine learning. They need a platform designed to address multi-faceted needs by offering multi-function Data Management and analytics to solve the enterprise’s most pressing data and analytic challenges in a streamlined fashion. They need a worry-free experience with the architecture and its components.
A complete machine learning infrastructure cost for the first modern use case at a midsize to large enterprise will be anywhere from $2M to $14M. Get this data point as you take the next steps on your journey.
Becoming Data-Driven Through Cultural Change – Cloudera, Inc.
We've arrived at a crossroads. Big data is an initiative every business knows they should take on in order to evolve their business, but no one knows how to tackle the project.
This is the first in a series of webinars that describe how to break down the challenge into three major pieces: People, Process, and Technology. We'll discuss the industry trends around big data projects, the pitfalls with adopting a modern data strategy, and how to avoid them by building a culture of data-driven teams.
Every now and then something comes along that has the potential to change the face of business as we know it. Currently big data is being touted as that thing, and CIOs everywhere need to get a grip on what it is and how it can benefit their company.
Scandev / SDC2013 - Spoilt for Choice: Which Integration Framework to use – A... – Kai Wähner
Data exchange between companies is increasing significantly, and so is the number of applications that must be integrated. The interfaces use different technologies, protocols, and data formats. Nevertheless, the integration of these applications should be modeled in a standardized way, realized efficiently, and supported by automated tests.
Three integration frameworks are available in the JVM environment that fulfil these requirements: Apache Camel, Spring Integration, and Mule. They implement the well-known Enterprise Integration Patterns (EIP) and therefore offer a standardized, domain-specific language to integrate applications.
These integration frameworks can be used in almost every integration project within the JVM environment - no matter which technologies, transport protocols, or data formats are used. All integration projects can be realized in a consistent way without redundant boilerplate code.
This session shows and compares the three alternatives and discusses their pros and cons. In addition, a recommendation is given on when to use a more powerful Enterprise Service Bus (ESB) instead of one of these frameworks.
Emerging Trends in Data Architecture – What’s the Next Big Thing? – DATAVERSITY
With technological innovation and change occurring at an ever-increasing rate, it’s hard to keep track of what’s hype and what can provide practical value for your organization. Join this webinar to see the results of a recent DATAVERSITY survey on emerging trends in Data Architecture, along with practical commentary and advice from industry expert Donna Burbank.
Cloud Migration Strategies that Ensure Greater Value for the Business – Denodo
Watch full webinar here: https://bit.ly/3UPJRNY
The cloud is no longer the future; the cloud is now. Companies continue to shift their workloads and data to the cloud to leverage the speed, agility, and efficient scalability the cloud offers. As a result, there is increasing attention given to cloud-based data warehouses, data lakes, and lakehouses as a way for organizations to simplify managing their ever-increasing volumes of data. However, many find that these new technologies only complicate the data management process.
Join Kevin Bohan, Director of Product Marketing at Denodo, as he explores the challenges faced when adopting cloud data services. His presentation will be followed by an engaging fireside chat with Rex Washburn, Head of Modern Data Platforms at Sirius. Together they will discuss practical advice to help work around these challenges.
Watch On-Demand and Learn:
- The obstacles faced when moving data to the cloud
- The various options to consider to overcome these challenges.
- A recommended path forward.
Big Data initiatives should focus on outcomes first.
The value of Big Data is the potential change in outcomes. Companies should first evaluate which areas of their business and decision making are receptive to change.
Receptiveness to change dictated or directed by models and black-box algorithms needs to be accepted by managers and execution staff (i.e., call these prospects and discuss x because the model says so).
This is a cultural change, a mindset change, and a governance change. Advanced modeling must also bear the responsibility of scenario testing and of multiple-outcome hypothesis and simulation testing.
Business Intelligence & Data Analytics – An Architected Approach – DATAVERSITY
Business intelligence (BI) and data analytics are increasing in popularity as more organizations are looking to become more data-driven. Many tools have powerful visualization techniques that can create dynamic displays of critical information. To ensure that the data displayed on these visualizations is accurate and timely, a strong Data Architecture is needed. Join this webinar to understand how to create a robust Data Architecture for BI and data analytics that takes both business and technology needs into consideration.
From Paris Hilton to Walmart: welcome to the Big Data Revolution – William Visterin
Presentation at BNP's CPBB Convention. The presentation turned out to be one of the best-attended of the entire convention. Big data presented in a very accessible way.
Data Lake Architecture – Modern Strategies & Approaches – DATAVERSITY
Data Lake or Data Swamp? By now, we’ve likely all heard the comparison. Data Lake architectures offer the ability to integrate vast amounts of disparate data across the organization for strategic business analytic value. But without a proper architecture and metadata management strategy in place, a Data Lake can quickly devolve into a swamp of information that is difficult to understand. This webinar will offer practical strategies to architect and manage your Data Lake in a way that optimizes its success.
DAS Slides: Best Practices in Metadata Management – DATAVERSITY
Metadata is hotter than ever, according to a number of recent DATAVERSITY surveys. More and more organizations are realizing that in order to drive business value from data, robust metadata is needed to gain the necessary context and lineage around key data assets. At the same time, industry regulations are driving the need for better transparency and understanding of information.
While metadata has been managed for decades, new strategies and approaches have been developed to support the ever-evolving data landscape, and provide more innovative ways to drive business value from metadata. This webinar will provide an overview of metadata strategies and technologies available to today’s organization, and provide insights into building successful business strategies for metadata adoption and use.
Whilst big data may represent a step forward in business intelligence and analytics, we see added value in linking and utilizing big data for business benefit. Only once we bring together numerous data sources to provide a single reference point can we start to derive new value. Until then, we only risk creating new data silos.
Building a Data Strategy – Practical Steps for Aligning with Business Goals – DATAVERSITY
Developing a Data Strategy for your organization can seem like a daunting task – but it’s worth the effort. Getting your Data Strategy right can provide significant value, as data drives many of the key initiatives in today’s marketplace – from digital transformation, to marketing, to customer centricity, to population health, and more. This webinar will help demystify Data Strategy and its relationship to Data Architecture and will provide concrete, practical ways to get started.
Getting Started with Big Data for Business Managers – Datameer
Big Data has become critical to the enterprise because of the massive amount of untapped data sources, and the potential to gain new insights that were previously not possible. So, how to get started with Big Data and Hadoop becomes a question more pertinent than ever before.
Listen to leading analyst at Ovum, Tony Baer, as he discusses answers to the key questions around how to:
-- Approach Big Data and associated business challenges
-- Identify what types of new insights can be revealed by Big Data
-- Staff for this undertaking and implement the technology necessary to be successful
-- Take the first steps toward getting started with Big Data on Hadoop
Similar to: You are not Facebook or Google? Why you should still care about Big Data and Apache Hadoop - 33rd Degree 2013
Apache Kafka as Data Hub for Crypto, NFT, Metaverse (Beyond the Buzz!) – Kai Wähner
Decentralized finance with crypto and NFTs is a huge topic these days. It becomes a powerful combination with the coming metaverse platforms across industries. This session explores the relationship between crypto technologies and modern enterprise architecture.
I discuss how data streaming and Apache Kafka help build innovation and scalable real-time applications of a future metaverse. Let's skip the buzz (and NFT bubble) and instead review existing real-world deployments in the crypto and blockchain world powered by Kafka and its ecosystem.
Apache Kafka is the de facto standard for data streaming to process data in motion. With its significant adoption growth across all industries, I get a very valid question every week: When NOT to use Apache Kafka? What limitations does the event streaming platform have? When does Kafka simply not provide the needed capabilities? How to qualify Kafka out as it is not the right tool for the job?
This session explores the DOs and DON'Ts. Separate sections explain when to use Kafka, when NOT to use Kafka, and when to MAYBE use Kafka.
No matter if you think about open source Apache Kafka, a cloud service like Confluent Cloud, or another technology using the Kafka protocol like Redpanda or Pulsar, check out this slide deck.
A detailed article about this topic:
https://www.kai-waehner.de/blog/2022/01/04/when-not-to-use-apache-kafka/
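For readers new to Kafka, its role as a streaming platform is easiest to see in code. Below is a minimal producer sketch using the standard Kafka Java client; the broker address and the "payments" topic are placeholders, not anything from the talks above.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class PaymentsProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // Publish one event to the log. Any number of consumers can process it
        // in real time, and the broker retains it for replay by new applications -
        // the decoupling that makes Kafka a "data hub".
        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("payments", "wallet-42", "{\"amount\": 9.99}"));
        }
    }
}
```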
Kafka for Live Commerce to Transform the Retail and Shopping Metaverse – Kai Wähner
Live commerce combines instant purchasing of a featured product and audience participation.
This talk explores the need for real-time data streaming with Apache Kafka between applications to enable live commerce across online stores and brick & mortar stores across regions, countries, and continents in any retail business.
The discussion covers several building blocks of a live commerce enterprise architecture, including transactional data processing, omnichannel, natural language processing, augmented reality, edge computing, and more.
The Heart of the Data Mesh Beats in Real-Time with Apache Kafka – Kai Wähner
If there were a buzzword of the hour, it would certainly be "data mesh"! This new architectural paradigm unlocks analytic data at scale and enables rapid access to an ever-growing number of distributed domain datasets for various usage scenarios.
As such, the data mesh addresses the most common weaknesses of the traditional centralized data lake or data platform architecture. And the heart of a data mesh infrastructure must be real-time, decoupled, reliable, and scalable.
This presentation explores how Apache Kafka, as an open and scalable decentralized real-time platform, can be the basis of a data mesh infrastructure and - complemented by many other data platforms like a data warehouse, data lake, and lakehouse - solve real business problems.
There is no silver bullet or single technology/product/cloud service for implementing a data mesh. The key outcome of a data mesh architecture is the ability to build data products, with the right tool for the job.
A good data mesh combines data streaming technology like Apache Kafka or Confluent Cloud with cloud-native data warehouse and data lake architectures from Snowflake, Databricks, Google BigQuery, et al.
Apache Kafka vs. Cloud-native iPaaS Integration Platform Middleware – Kai Wähner
Enterprise integration is more challenging than ever before. The IT evolution requires the integration of more and more technologies. Applications are deployed across edge, hybrid, and multi-cloud architectures. Traditional middleware (MQ, ETL, ESB) does not scale well enough or only processes data in batch instead of in real time.
This presentation explores why Apache Kafka is the new black for integration projects, how Kafka fits into the discussion around cloud-native iPaaS (Integration Platform as a Service) solutions, and why event streaming is a new software category.
A concrete real-world example shows the difference between event streaming and traditional integration platforms or cloud-native iPaaS.
Video Recording of this presentation:
https://www.youtube.com/watch?v=I8yZwKg_IJc&t=2842s
Blog post about this topic:
https://www.kai-waehner.de/blog/2021/11/03/apache-kafka-cloud-native-ipaas-versus-mq-etl-esb-middleware/
Data Warehouse vs. Data Lake vs. Data Streaming – Friends, Enemies, Frenemies? – Kai Wähner
The concepts and architectures of a data warehouse, a data lake, and data streaming are complementary to solving business problems.
Unfortunately, the underlying technologies are often misunderstood, overused for monolithic and inflexible architectures, and pitched for wrong use cases by vendors. Let’s explore this dilemma in a presentation.
The slides cover technologies such as Apache Kafka, Apache Spark, Confluent, Databricks, Snowflake, Elasticsearch, AWS Redshift, GCP with Google BigQuery, and Azure Synapse.
Serverless Kafka and Spark in a Multi-Cloud Lakehouse Architecture – Kai Wähner
Apache Kafka in conjunction with Apache Spark became the de facto standard for processing and analyzing data. Both frameworks are open, flexible, and scalable.
Unfortunately, the latter makes operations a challenge for many teams. Ideally, teams can use serverless SaaS offerings to focus on business logic. However, hybrid and multi-cloud scenarios require a cloud-native platform that provides automated and elastic tooling to reduce the operations burden.
This session explores different architectures to build serverless Apache Kafka and Apache Spark multi-cloud architectures across regions and continents.
We start from the analytics perspective of a data lake and explore its relation to a fully integrated data streaming layer with Kafka to build a modern Data Lakehouse.
Real-world use cases show the joint value and explore the benefit of the "delta lake" integration.
Resilient Real-time Data Streaming across the Edge and Hybrid Cloud with Apac... – Kai Wähner
Hybrid cloud architectures are the new black for most companies. A cloud-first strategy is evident for many new enterprise architectures, but some use cases require resiliency across edge sites and multiple cloud regions. Data streaming with the Apache Kafka ecosystem is a perfect technology for building resilient and hybrid real-time applications at any scale. This talk explores different architectures and their trade-offs for transactional and analytical workloads. Real-world examples include financial services, retail, and the automotive industry.
Video recording:
https://qconlondon.com/london2022/presentation/resilient-real-time-data-streaming-across-the-edge-and-hybrid-cloud
Data Streaming with Apache Kafka in the Defence and Cybersecurity Industry – Kai Wähner
Agenda:
1) Defence, Modern Warfare, and Cybersecurity in 202X
2) Data in Motion with Apache Kafka as Defence Backbone
3) Situational Awareness
4) Threat Intelligence
5) Forensics and AI / Machine Learning
6) Air-Gapped and Zero Trust Environments
7) SIEM / SOAR Modernization
Technologies discussed in the presentation include Apache Kafka, Kafka Streams, ksqlDB, Kafka Connect, Elasticsearch, Splunk, IBM QRadar, Zeek, Netflow, PCAP, TensorFlow, AWS, Azure, GCP, Sigma, and Confluent Cloud.
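To illustrate the Kafka Streams angle on threat intelligence, here is a minimal sketch that continuously routes suspicious log lines from a raw feed to an alerts topic that a SIEM could consume. The topic names, broker address, and the string-matching "detection rule" are invented for the example; real deployments use far richer detection logic (e.g. Sigma rules or ML scoring).

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class ThreatFilter {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "threat-filter");       // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // placeholder
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Read every log line, keep only suspicious events, forward to alerts.
        KStream<String, String> logs = builder.stream("network-logs");
        logs.filter((host, line) -> line.contains("DENY") || line.contains("port-scan"))
            .to("security-alerts");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```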
Real-World Deployments of Data Streaming with Apache Kafka across the Healthcare Value Chain using open source and cloud-native technologies and serverless SaaS:
1) Legacy Modernization and Hybrid Cloud: Optum (UnitedHealth Group, Centene, Bayer)
2) Streaming ETL (Bayer, Babylon Health)
3) Real-time Analytics (Cerner, Celmatix, CDC/Centers for Disease Control and Prevention)
4) Machine Learning and Data Science (Recursion, Humana)
5) Open API and Omnichannel (Care.com, Invitae)
The Rise of Data in Motion in the Healthcare Industry - Use Cases, Architectures and Examples powered by Apache Kafka.
Use Cases for Data in Motion in the Healthcare Industry:
- Know Your Patient (= “Customer 360”)
- Operations (Healthcare 4.0 including Drug R&D, Patient Care, etc.)
- IT Perspective (Cybersecurity, Mainframe Offload, Hybrid Cloud, Streaming ETL, etc)
Real-world examples include Covid-19 Electronic Lab Reporting, Cerner, Optum, Centene, Humana, Invitae, Bayer, Celmatix, Care.com.
Apache Kafka for Real-time Supply Chain in the Food and Retail Industry – Kai Wähner
Use Cases, Architectures, and Real-World Examples for data in motion and real-time event streaming powered by Apache Kafka across the supply chain and logistics. Case studies and deployments include Baader, Walmart, Migros, Albertsons, Domino's Pizza, Instacart, Grab, Royal Caribbean, and more.
Kafka for Real-Time Replication between Edge and Hybrid Cloud – Kai Wähner
Not all workloads allow cloud computing. Low latency, cybersecurity, and cost-efficiency require a suitable combination of edge computing and cloud integration.
This session explores architectures and design patterns for software and hardware considerations to deploy hybrid data streaming with Apache Kafka anywhere. A live demo shows data synchronization from the edge to the public cloud across continents with Kafka on Hivecell and Confluent Cloud.
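The session's demo uses Hivecell and Confluent Cloud; as a generic sketch of the same edge-to-cloud replication pattern, the snippet below shows a MirrorMaker 2 configuration (shipped with Apache Kafka and started via connect-mirror-maker.sh). Cluster names, bootstrap addresses, and the topic pattern are placeholders, not the demo's actual setup.

```properties
# connect-mirror-maker.properties - replicate edge topics to a cloud cluster
clusters = edge, cloud

# placeholder bootstrap addresses
edge.bootstrap.servers = edge-gateway:9092
cloud.bootstrap.servers = broker.cloud.example.com:9092

# one-way replication from the edge site to the public cloud
edge->cloud.enabled = true
edge->cloud.topics = telemetry.*

# keep replication settings modest for constrained edge hardware
replication.factor = 1
edge->cloud.sync.topic.configs.enabled = true
```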
Apache Kafka for Predictive Maintenance in Industrial IoT / Industry 4.0 – Kai Wähner
The manufacturing industry is moving away from just selling machinery, devices, and other hardware. Software and services increase revenue and margins. Equipment-as-a-Service (EaaS) even outsources the maintenance to the vendor.
This paradigm shift is only possible with reliable and scalable real-time data processing leveraging an event streaming platform such as Apache Kafka. This talk explores how Kafka-native Condition Monitoring and Predictive Maintenance help with this innovation.
More details:
https://www.kai-waehner.de/blog/2021/10/25/apache-kafka-condition-monitoring-predictive-maintenance-industrial-iot-digital-twin/
Video recording:
https://youtu.be/tfOuN5KeI9w
Apache Kafka Landscape for Automotive and Manufacturing – Kai Wähner
Today, in 2022, Apache Kafka is the central nervous system of many applications in various areas related to the automotive and manufacturing industry for processing analytical and transactional data in motion across edge, hybrid, and multi-cloud deployments.
This presentation explores the automotive event streaming landscape, including connected vehicles, smart manufacturing, supply chain optimization, aftersales, mobility services, and innovative new business models.
Afterwards, many real-world examples are shown from companies such as Audi, BMW, Porsche, Tesla, Uber, Grab, and FREENOW.
More detail in the blog post:
https://www.kai-waehner.de/blog/2022/01/12/apache-kafka-landscape-for-automotive-and-manufacturing/
Kappa vs Lambda Architectures and Technology Comparison – Kai Wähner
Real-time data beats slow data. That’s true for almost every use case. Nevertheless, enterprise architects build new infrastructures with the Lambda architecture that includes separate batch and real-time layers.
This video explores why a single real-time pipeline, called Kappa architecture, is the better fit for many enterprise architectures. Real-world examples from companies such as Disney, Shopify, Uber, and Twitter explore the benefits of Kappa but also show how batch processing fits into this discussion positively without the need for a Lambda architecture.
The main focus of the discussion is on Apache Kafka (and its ecosystem) as the de facto standard for event streaming to process data in motion (the key concept of Kappa), but the video also compares various technologies and vendors such as Confluent, Cloudera, IBM Red Hat, Apache Flink, Apache Pulsar, AWS Kinesis, Amazon MSK, Azure Event Hubs, Google Pub/Sub, and more.
Video recording of this presentation:
https://youtu.be/j7D29eyysDw
Further reading:
https://www.kai-waehner.de/blog/2021/09/23/real-time-kappa-architecture-mainstream-replacing-batch-lambda/
https://www.kai-waehner.de/blog/2021/04/20/comparison-open-source-apache-kafka-vs-confluent-cloudera-red-hat-amazon-msk-cloud/
https://www.kai-waehner.de/blog/2021/05/09/kafka-api-de-facto-standard-event-streaming-like-amazon-s3-object-storage/
The Top 5 Apache Kafka Use Cases and Architectures in 2022 – Kai Wähner
I see the following topics coming up more regularly in conversations with customers, prospects, and the broader Kafka community across the globe:
Kappa Architecture: Kappa goes mainstream to replace Lambda and batch pipelines (which does not mean that batch processing disappears entirely). Examples: Kafka-powered Kappa architectures from Uber, Disney, Shopify, and Twitter.
Hyper-personalized Omnichannel: Retail and customer communication across online and offline channels becomes the new black, including context-specific upselling, recommendations, and location-based services. Examples: Omnichannel Retail and Customer 360 in Real-Time with Apache Kafka (a minimal enrichment sketch follows after this list).
Multi-Cloud Deployments: Business units and IT infrastructures span across regions, continents, and cloud providers. Linking clusters for bi-directional replication of data in real-time becomes crucial for many business models. Examples: Global Kafka deployments.
Edge Analytics: Low latency, cost efficiency, or security requirements demand that (some) event streaming use cases run at the far edge (i.e., outside a data center), for instance, predictive maintenance and quality assurance at the shop floor level in smart factories. Examples: Edge analytics with Kafka.
Real-time Cybersecurity: Situational awareness and threat intelligence need to process massive data in real-time to defend against cyberattacks successfully. The many successful ransomware attacks across the globe in 2021 were a warning for most CIOs. Examples: Cybersecurity for situational awareness and threat intelligence in real-time.
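As promised above, here is a minimal sketch of the hyper-personalization pattern: a Kafka Streams stream-table join that enriches click events with the latest customer profile in real time. Topic names and the JSON-ish message format are assumptions for the example, not details from the referenced use cases:

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

public class Customer360Enrichment {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "customer-360-enrichment");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Compacted topic holding the latest profile per customer id
        KTable<String, String> profiles = builder.table("customer-profiles");
        // Click events keyed by customer id
        KStream<String, String> clicks = builder.stream("clickstream");
        // Enrich every click with the customer's profile to drive context-specific offers
        clicks.join(profiles, (click, profile) -> "{\"click\":" + click + ",\"profile\":" + profile + "}")
              .to("personalized-recommendations");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
    }
}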
Event Streaming CTO Roundtable for Cloud-native Kafka ArchitecturesKai Wähner
A technical thought leadership presentation on how leading organizations move to real-time architectures to support business growth and enhance customer experience. It is a forum to discuss use cases with your peers and understand how other digital-native companies utilize data in motion to drive competitive advantage.
Agenda:
- Data in Motion with Event Streaming and Apache Kafka
- Streaming ETL Pipelines
- IT Modernisation and Hybrid Multi-Cloud
- Customer Experience and Customer 360
- IoT and Big Data Processing
- Machine Learning and Analytics
Apache Kafka in the Public Sector (Government, National Security, Citizen Ser...Kai Wähner
The Rise of Data in Motion in the Public Sector powered by event streaming with Apache Kafka.
Citizen Services:
- Health services, e.g. hospital modernization and track & trace for Covid distance control
- Public administration: reducing bureaucracy, data democratization across government departments
- eGovernment: efficient, digital citizen engagement, e.g. the personal ID application process
Smart City:
- Smart driving, parking, buildings, environment
- Waste management
- Open exchange, e.g. mobility services (1st and 3rd party)
Energy:
- Smart grid and utilities infrastructure (energy distribution, smart home, smart meters, smart water, etc.)
National Security:
- Law enforcement, surveillance, police/interior security data exchange
- Defense and military (border control, intelligent soldier)
- Cybersecurity for situational awareness and threat intelligence
Telco 4.0 - Payment and FinServ Integration for Data in Motion with 5G and Ap...Kai Wähner
The Era of Telco 4.0: Embracing Digital Transformation with Data in Motion. Learn about Payment and FinServ Integration for Data in Motion with 5G and Apache Kafka.
1) The rise of Telco 4.0 and the future forward
2) Data in Motion in the Telco industry
3) Real-world Fintech and Payment examples powered by Data in Motion
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
The talk encourages a more independent stance toward PHP frameworks, moving developers to a more flexible and future-proof way of building PHP applications.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, the features they look for in a new TV, and their TV buying preferences.
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATOs (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to part 5 of the UiPath Test Automation using UiPath Test Suite series. In this session, we will cover CI/CD with DevOps.
Topics covered:
CI/CD within UiPath
End-to-end overview of a CI/CD pipeline with Azure DevOps
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...James Anderson
Effective Application Security in the Software Delivery Lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface of their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He brings around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.