Modern Thinking: How Big Data and Cognitive Computing Are Changing Marketing Strategy
By: Ismael Yuste, Strategic Cloud Engineer, Google Cloud
Presentation: Introduction to Google's Big Data Solutions
Watch this recorded webcast and listen to Infochimps CSO and Co-Founder, Dhruv Bansal, and Think Big Analytics Principal Architect, Douglas Moore, share successful use cases and recommendations for building real-time predictive analytics in your enterprise.
Alois Reitbauer - Big Data has made its way into everyday analytics processes. Artificial Intelligence is also becoming more mainstream in automated data analysis and interpretation. The next challenge we have to solve is how to integrate all the data and findings into our natural work environments. We have been working on a system that enables humans to interact with data and an Artificial Intelligence analysis layer via chat applications like Slack. The talk covers the whole spectrum, from designing the system and key considerations in implementing it to lessons learned from using it in the wild.
Deep Learning Image Processing Applications in the Enterprise - Ganesan Narayanasamy
The presentation covers many use cases, including the following. Image classification: "The process of identifying and detecting an object or a feature in a digital image or video," the report states. In retail, deep learning models "quickly scan and analyze in-store imagery to intuitively determine inventory movement."
Voice recognition: "The ability to receive and interpret dictation or to understand and carry out spoken commands. Models are able to convert captured voice commands to text and then use natural language processing to understand what is being said and in what context." In transportation, deep learning "uses voice commands to enable drivers to make phone calls and adjust internal controls - all without taking their hands off the steering wheel."
Anomaly detection: "Deep learning technique strives to recognize abnormal patterns which don't match the behaviors expected for a particular system, out of millions of different transactions. These applications can lead to the discovery of an attack on financial networks, fraud detection in insurance filings or credit card purchases, even isolating sensor data in industrial facilities signifying a safety issue."
Recommendation engines: "Analyze user actions in order to provide recommendations based on user behavior."
Sentiment analysis: "Leverages deep learning-heavy techniques such as natural language processing, text analysis, and computational linguistics to gain clear insight into customer opinion, understanding of consumer sentiment, and measuring the impact of marketing strategies."
Video analysis: "Process and evaluate vast streams of video footage for a range of tasks including threat detection, which can be used in airport security, banks, and sporting events."
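The anomaly-detection use case above can be sketched minimally in code. This is an illustrative robust z-score filter (median and median absolute deviation), not a deep learning model and not taken from any of the talks; the transaction amounts are invented:

```python
# Minimal sketch of anomaly detection over transaction amounts: flag values
# that deviate strongly from the typical pattern. A robust z-score based on
# the median and the median absolute deviation (MAD) resists the outlier
# itself inflating the scale estimate.
from statistics import median

def find_anomalies(amounts, threshold=3.5):
    """Return indices whose robust z-score exceeds `threshold`."""
    med = median(amounts)
    mad = median(abs(x - med) for x in amounts)
    if mad == 0:
        return []
    return [i for i, x in enumerate(amounts)
            if 0.6745 * abs(x - med) / mad > threshold]

# Typical card purchases around $50, plus one wildly out-of-pattern charge.
transactions = [48.0, 52.5, 49.9, 51.2, 47.8, 50.3, 49.1, 5000.0, 50.7, 48.6]
print(find_anomalies(transactions))  # flags index 7, the $5000.00 charge
```

A production system would learn the expected behavior from millions of transactions rather than a fixed threshold, but the shape of the task (score each event against a model of "normal") is the same.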
New Trend - Big Data Analytics as a service
The combination of data analysis with big data, open source, and cloud computing opens up a new universe of opportunities at many levels and in many places.
H2O Machine Learning with KNIME Analytics Platform - Christian Dietz - H2O AI... - Sri Ambati
This talk was recorded in London on October 30, 2018.
KNIME Analytics Platform is an easy-to-use and comprehensive open source data integration, analysis, and exploration platform, enabling data scientists to visually compose end-to-end data analysis workflows. The over 2,000 available modules ("nodes") cover each step of the analysis workflow, including blending heterogeneous data types, data transformation, wrangling and cleansing, advanced data visualization, and model training and deployment.
Many of these nodes are provided through open source integrations (why reinvent the wheel?). This provides seamless access to large open source projects such as Keras and TensorFlow for deep learning, Apache Spark for big data processing, Python and R for scripting, and more. These integrations can be used in combination with other KNIME nodes, meaning that data scientists can freely select from a vast variety of options when tackling an analysis problem.
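The node-based composition described above (each step a reusable unit, chained into an end-to-end pipeline) can be sketched as follows. This is an invented illustration, not KNIME's API; the node names and data are made up:

```python
# Invented sketch of a node-based workflow: each "node" is a function over a
# table (list of dicts), and a workflow is simply a chain of nodes.
def filter_rows(predicate):
    """Node factory: keep only rows matching `predicate` (a cleansing step)."""
    return lambda table: [r for r in table if predicate(r)]

def add_column(name, fn):
    """Node factory: derive a new column from each row (a transform step)."""
    def node(table):
        return [{**r, name: fn(r)} for r in table]
    return node

def run_workflow(table, nodes):
    """Execute nodes in order, feeding each node's output to the next."""
    for node in nodes:
        table = node(table)
    return table

data = [{"price": 10}, {"price": 25}, {"price": 40}]
workflow = [
    filter_rows(lambda r: r["price"] > 15),
    add_column("with_tax", lambda r: round(r["price"] * 1.2, 2)),
]
print(run_workflow(data, workflow))
```

The appeal of this shape is that any node can slot into any workflow, which is what makes mixing native nodes with integrations (Spark, Keras, H2O) practical.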
The integration of H2O in KNIME offers an extensive set of nodes encapsulating the functionality of the H2O open source machine learning libraries, making it easy to use H2O algorithms from a KNIME workflow without touching any code. Each of the H2O nodes looks and feels just like a normal KNIME node, and the data scientist benefits from the high-performance libraries and proven quality of H2O during execution. For prototyping, these algorithms are executed locally; however, training and deployment can easily be scaled up using a Sparkling Water cluster.
In our talk we give a short introduction to KNIME Analytics Platform and then demonstrate how data scientists benefit from using KNIME Analytics Platform and H2O Machine Learning in combination, using a real-world analysis example.
Bio: Christian received a Master's degree in Computer Science from the University of Konstanz. Having gained experience as a research software engineer at the University of Konstanz, where he developed frameworks and libraries in the fields of bioimage analysis and machine learning, Christian moved on to become a software engineer at KNIME. He now focuses on developing new functionalities and extensions for KNIME Analytics Platform. Some of his recent projects include deep learning integrations built upon Keras and TensorFlow, extensions for image analysis and active learning, and the integration of H2O Machine Learning and H2O Sparkling Water in KNIME Analytics Platform.
Big data expert and Infochimps CEO, Jim Kaskade, presents the Infinite Monkey Theorem at CloudCon Expo. He provides an energetic, inspiring, and practical perspective on why Big Data is disruptive. It's more than historic data analyzed on Hadoop. It's also more than real-time streaming data stored and queried using NoSQL. Learn more at www.Infochimps.com
Big Data Paris - A Modern Enterprise Architecture - MongoDB
Since the 1980s, the volume of data produced, and the risk associated with that data, have exploded. 90% of the data that exists today was created in the last two years, and 80% of it is unstructured. With more users and the need for constant availability, the risks are much higher.
What database parameters should a decision-maker take into account to deploy innovative applications?
Site | https://www.infoq.com/qconai2018/
Youtube | https://www.youtube.com/watch?v=2h0biIli2F4&t=19s
At PayPal, data engineers, analysts and data scientists work with a variety of data sources (Messaging, NoSQL, RDBMS, Documents, TSDB), compute engines (Spark, Flink, Beam, Hive), languages (Scala, Python, SQL) and execution models (stream, batch, interactive).
Due to this complex matrix of technologies and thousands of datasets, engineers spend considerable time learning about different data sources, formats, programming models, APIs, optimizations, etc., which impacts time-to-market (TTM). To solve this problem and to make product development more effective, PayPal Data Platform developed "Gimel", a unified analytics data platform that provides access to any storage through a single unified data API and SQL, powered by a centralized data catalog.
In this session, we will introduce you to the various components of Gimel - Compute Platform, Data API, PCatalog, GSQL and Notebooks. We will provide a demo depicting how Gimel reduces TTM by helping our engineers write a single line of code to access any storage without knowing the complexity behind the scenes.
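The unified data API idea can be illustrated with a small Python sketch. This is a hypothetical facade, not Gimel's actual API; the `CsvStore`/`KvStore` classes and catalog entries are invented: the caller names a dataset, and a central catalog resolves the name to a storage-specific connector behind one interface.

```python
# Hypothetical sketch of a unified data API: one read() call, with a
# catalog mapping dataset names to storage-specific connectors.
class KvStore:
    """Toy key-value backend, standing in for e.g. a NoSQL store."""
    def __init__(self, rows):
        self.rows = rows
    def read(self):
        return list(self.rows)

class CsvStore:
    """Toy backend parsing in-memory CSV text, standing in for file storage."""
    def __init__(self, text):
        self.text = text
    def read(self):
        header, *lines = self.text.strip().splitlines()
        cols = header.split(",")
        return [dict(zip(cols, line.split(","))) for line in lines]

class DataApi:
    """Single entry point: callers name a dataset, the catalog picks the store."""
    def __init__(self, catalog):
        self.catalog = catalog
    def read(self, dataset):
        return self.catalog[dataset].read()

catalog = {
    "payments.events": CsvStore("id,amount\n1,9.99\n2,4.50"),
    "users.profiles": KvStore([{"id": "1", "name": "Ada"}]),
}
api = DataApi(catalog)
print(api.read("payments.events"))
print(api.read("users.profiles"))
```

The point of the pattern is that application code depends only on `DataApi.read(name)`; swapping a backend means changing a catalog entry, not the application.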
WJAX 2013 Slides online: Big Data beyond Apache Hadoop - How to integrate ALL... - Kai Wähner
Big data represents a significant paradigm shift in enterprise technology. Big data radically changes the nature of the data management profession as it introduces new concerns about the volume, velocity and variety of corporate data. Apache Hadoop is the open source de facto standard for implementing big data solutions on the Java platform. Hadoop consists of its kernel, MapReduce, and the Hadoop Distributed Filesystem (HDFS). A challenging task is to send all data to Hadoop for processing and storage (and then get it back to your application later), because in practice data comes from many different applications (SAP, Salesforce, Siebel, etc.) and databases (File, SQL, NoSQL), uses different technologies and concepts for communication (e.g. HTTP, FTP, RMI, JMS), and consists of different data formats such as CSV, XML, binary data, or other alternatives. This session shows different open source frameworks and products to solve this challenging task. Learn how to use virtually any data source with Hadoop, without a lot of complex or redundant boilerplate code.
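The integration problem described here boils down to normalizing each heterogeneous source into one common record shape before handing it to the processing layer. A minimal illustrative Python sketch (the sample payloads and field names are invented, and real pipelines would use integration frameworks rather than hand-rolled parsers):

```python
# Illustrative sketch: normalize two different feed formats (CSV and XML)
# into the same list-of-dicts record shape for downstream processing.
import csv
import io
import xml.etree.ElementTree as ET

def records_from_csv(text):
    """Parse CSV text into a list of {column: value} records."""
    return [dict(row) for row in csv.DictReader(io.StringIO(text))]

def records_from_xml(text):
    """Parse <orders><order>...</order></orders> XML into the same shape."""
    root = ET.fromstring(text)
    return [{child.tag: child.text for child in order}
            for order in root.findall("order")]

csv_feed = "order_id,total\nA1,10.00"
xml_feed = "<orders><order><order_id>B2</order_id><total>7.50</total></order></orders>"

# Both feeds end up as the same record shape, ready for one pipeline.
records = records_from_csv(csv_feed) + records_from_xml(xml_feed)
print(records)
```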
IoT Analytics at Google Scale with James Chittenden: Using Pub/Sub, Dataflow, and BigQuery to Capture Millions of Connected Devices
There is the potential for 50 billion connected devices by 2020. Google Cloud Platform gives you the tools to scale connections, gather and make sense of data, and provide the reliable customer experiences that hardware devices require. Google’s Cloud Platform provides the infrastructure to handle streams of data fed from millions of intelligent devices.
In this meetup, we'll explore one of the world's largest appliance manufacturer's IoT architecture along with Google's partner Archipelago, and will drill into how they are leveraging Google's massive infrastructure in their solution. We'll explore what Google provides for IoT, including Pub/Sub for messaging, Dataflow for data processing, BigQuery for large scale analytics as well as best practices for real time stream processing accounting for ingest, processing, storage and analysis of hundreds of millions of events per hour.
Building Identity Graph at Scale for Programmatic Media Buying Using Apache S... - Databricks
The proliferation of digital channels has made it mandatory for marketers to understand an individual across multiple touchpoints. To be effective, marketers need a good sense of each consumer's identity so that they can reach that consumer via mobile device, desktop, or the big TV screen in the living room. Examples of such identity tokens include cookies, app IDs, etc. A consumer can use multiple devices at the same time, so the same consumer should not be treated as several different people in the advertising space. Identity resolution takes on this mission, with the goal of an omnichannel view of the consumer.
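Identity resolution as described here is essentially a connected-components problem over identity tokens: any two tokens observed together (say, a cookie and an app ID seen in the same login) belong to the same person. A minimal union-find sketch, with invented token names (at real scale this would run distributed, e.g. on Spark):

```python
# Minimal union-find sketch of identity resolution: tokens observed
# together are merged into one identity cluster.
def resolve_identities(pairs):
    """Group identity tokens into clusters given co-observed (a, b) pairs."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving keeps trees shallow
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for a, b in pairs:
        union(a, b)

    clusters = {}
    for token in parent:
        clusters.setdefault(find(token), set()).add(token)
    return list(clusters.values())

# cookie_1 and app_id_9 were seen together; app_id_9 matched tv_id_3 later;
# cookie_7 and cookie_8 belong to someone else.
observed = [("cookie_1", "app_id_9"), ("app_id_9", "tv_id_3"), ("cookie_7", "cookie_8")]
print(resolve_identities(observed))
```

Note the transitivity: cookie_1 and tv_id_3 are never observed together, yet they land in the same cluster through app_id_9, which is exactly the omnichannel view the abstract describes.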
In this slide deck, Infochimps Director of Product, Tim Gasper, discusses how Infochimps tackles business problems for customers by deploying a comprehensive Big Data infrastructure in days, sometimes in just hours. Tim explains how Infochimps is now taking that same aggressive approach to deliver faster time to value by helping customers develop analytic applications with remarkable speed.
Achieving Business Value by Fusing Hadoop and Corporate Data - Inside Analysis
The Briefing Room with Richard Hackathorn and Teradata
Live Webcast March 25, 2015
Watch the Archive: https://bloorgroup.webex.com/bloorgroup/onstage/g.php?MTID=e7254708146d056339a0974f097f569b2
Hadoop data lakes are emerging as peers to corporate data warehouses. However, successful analytic solutions require a fusion of all relevant data, big and small, which has proven challenging for many companies. By allowing business analysts to quickly access data wherever it rests, success factors shift to focus on three key aspects: 1) business objectives, 2) organizational workflow, and 3) data placement.
Register for this Special Edition of The Briefing Room to hear veteran Analyst Richard Hackathorn as he provides details from his recent research report focused on success stories using Teradata QueryGrid. Examples of use cases described will include:
- Joining sensor data in Hadoop with data warehouse labor schedules in seconds
- How bridging corporate cultures and systems creates new business opportunities
- The 360 view of customer journeys using weblogs in Hadoop via BI tools
- How you can put the data where you want and query it however you want
- Virtualizing Hadoop data with Teradata QueryGrid
Visit InsideAnalysis.com for more information.
Single View of Well, Production and Assets - John Archer
- Deliver a complete view of G&G, well header, volumes, and transactional data
- Reduce data movement
- Reduce load on data sources with intelligent caching
- Provide an aggregated single view of complex and legacy data sources
Think of big data as all data, no matter what the volume, velocity, or variety. The simple truth is that a traditional on-prem data warehouse will not handle big data. So what is Microsoft's strategy for building a big data solution? And why is it best to have this solution in the cloud? That is what this presentation will cover. Be prepared to discover the various Microsoft technologies and products for collecting data, transforming it, storing it, and visualizing it. My goal is to help you not only understand each product but understand how they all fit together, so you can be the hero who builds your company's big data solution.
Learn about IBM's Hadoop offering called BigInsights. We will look at the new features in version 4 (including a discussion on the Open Data Platform), review a couple of customer examples, talk about the overall offering and differentiators, and then provide a brief demonstration on how to get started quickly by creating a new cloud instance, uploading data, and generating a visualization using the built-in spreadsheet tooling called BigSheets.
Applications need data, but the legacy approach of n-tiered application architecture doesn’t solve for today’s challenges. Developers aren’t empowered to build and iterate their code quickly without lengthy review processes from other teams. New data sources cannot be quickly adopted into application development cycles, and developers are not able to control their own requirements when it comes to data platforms.
Part of the challenge here is the existing relationship between two groups: developers and DBAs. Developers are trying to go faster, automating build/test/release cycles with CI/CD, and thrive on the autonomy provided by microservices architectures. DBAs are stewards of data protection, governance, and security. Both of these groups are critically important to running data platforms, but many organizations deal with high friction between these teams. As a result, applications get to market more slowly, and it takes longer for customers to see value.
What if we changed the orientation between developers and DBAs? What if developers consumed data products from data teams? In this session, Pivotal’s Dormain Drewitz and Solstice’s Mike Koleno will speak about:
- Product mindset and how balanced teams can reduce internal friction
- Creating data as a product to align with cloud-native application architectures, like microservices and serverless
- Getting started bringing lean principles into your data organization
- Balancing data usability with data protection, governance, and security
Presenter : Dormain Drewitz, Pivotal & Mike Koleno, Solstice
Entrepreneurship Tips With HTML5 & App Engine, Startup Weekend (June 2012) - Ido Green
My talk at Startup Weekend 2012 during Google I/O. It covers startup life tips, modern web apps, and how to leverage Google Cloud (specifically App Engine).
Connecta Event: Big Query and Data Analysis with Google Cloud Platform - ConnectaDigital
Advanced data analysis and "big data" have climbed the trend lists in recent years and are now among the most prioritized areas in the development of new services and products for leading companies in the digital landscape.
The information that accumulates in systems as customer interactions are digitized has proven to be worth its weight in gold. It contains everything we need to know to make our business more effective.
Since the summer of 2013, Connecta has had an established partnership with Google to help our customers with the transition to cloud services, including for advanced data analysis. To prepare ourselves to help our customers, we have spent several years building both knowledge of and experience with Google's various cloud products, such as Big Query.
Big Query is a cloud-based analytics tool and part of Google Cloud Platform. Big Query makes it possible to run fast queries against enormous datasets in just seconds. Big Query and Google Cloud Platform offer ready-made solutions for setting up and maintaining an infrastructure that makes all of this possible with simple means.
At Connecta Digital Consulting's third event of the spring, we introduced our customers and partners to the concepts of data analysis and Big Query.
The event covered the following points:
- Big Data and Business Intelligence (BI)
- "The Google Big Data tools": success factors and how to get started
- Google Cloud Platform and how to carry out a successful cloud initiative
We presented cases and shared important lessons learned from working with Google and our customers.
Google Cloud Data Platform - Why Google for Data Analysis? - Andreas Raible
Introduction to our Data Platform from capture, processing, analysis and exploration.
The Google Cloud Platform products are based on our internal systems, which power Google AdWords, Search, YouTube, and our leading research in the field of real-time data analysis.
You can get access to our free trial ($300 of credit for 60 days) through google.com/cloud
Google Cloud Platform (GCP) is a suite of cloud computing services provided by Google. It offers a wide range of services including computing power, storage, networking, machine learning, and data analytics. GCP allows users to build, deploy, and scale applications and websites on Google's infrastructure. It provides flexibility, scalability, and reliability for businesses of all sizes, from startups to enterprises. Some of the key services offered by GCP include Compute Engine, App Engine, Kubernetes Engine, Cloud Storage, BigQuery, and TensorFlow for machine learning tasks. GCP also provides tools for managing and monitoring applications, as well as security features to protect data and applications in the cloud. Overall, Google Cloud Platform is a comprehensive solution for businesses looking to leverage cloud computing technology for their applications and services.
Top Trends in Building Data Lakes for Machine Learning and AI - Holden Ackerman
A presentation by Ashish Thusoo, Co-Founder & CEO at Qubole, exploring the big data industry's move from data warehouses to cloud-based data lakes. This presentation covers how companies today are seeing a significant rise in the success of their big data projects by moving to the cloud to iteratively build more cost-effective data pipelines and new products with ML and AI.
It also uncovers how services like AWS, Google, Oracle, and Microsoft Azure provide the storage and compute infrastructure to build self-service data platforms that let all teams and new products scale iteratively.
ICP for Data - Enterprise platform for AI, ML and Data Science - Karan Sachdeva
IBM Cloud Private for Data is a platform for AI, ML, and data science workloads: an integrated analytics platform built on containers and microservices. It works with Kubernetes and Docker, and even with Red Hat OpenShift, and delivers a variety of business use cases across industries: financial services, telco, retail, manufacturing, and more.
The business analytics marketplace faces a challenge as classic BI tools meet evolving big data technologies, in particular Hadoop. We explore how IBM meets this challenge, with a big-picture perspective on its big data offerings around Hadoop, its open data platform, and BigInsights.
Democratizing AI/ML with GCP - Abishay Rao (Google) at GoDataFest 2019 - GoDataDriven
Every company today is talking about AI/ML, but when most companies talk about AI/ML in their transformation journey, you hear terms like Proof of Concept, Feasibility Study, Pilot, A/B Test. We are at the peak of AI's hype, yet only 12% of enterprises have deployed AI in production. Google aims to make big data processing available to everyone, and the possibilities of BigQuery ML are endless: marketing, retail, industrial and IoT, media, gaming, and so forth.
SendGrid Improves Email Delivery with Hybrid Data Warehousing - Amazon Web Services
When you received your Uber ‘Tuesday Evening Ride Receipt’ or Spotify’s ‘This Week’s New Music’ email, did you think about how they got there?
SendGrid’s reliable email platform delivers over 20 billion transactional and marketing emails each month on behalf of many of your favorite brands, including Uber, Airbnb, Spotify, Foursquare, and Nextdoor.
SendGrid was looking to evolve its data warehouse architecture in order to improve decision making and optimize customer experience. They needed a scalable and reliable architecture that would allow them to move nimbly and efficiently with a relatively small IT organization, while supporting the needs of both business and technical users at SendGrid.
SendGrid’s Director of Enterprise Data Operations will be joining architects from Amazon Web Services (AWS) and Informatica to discuss SendGrid’s journey to a hybrid cloud architecture and how a hybrid data warehousing solution is optimized to support SendGrid’s analytics initiative. Speakers will also review common technologies and use cases being deployed in hybrid cloud today, common data management challenges in hybrid cloud and best practices for addressing these challenges.
Join us to learn:
• How to evolve to a hybrid data warehouse with Amazon Redshift for scalability, agility and cost efficiency with minimal IT resources
• Hybrid cloud data management use cases
• Best practices for addressing hybrid cloud data management challenges
Applying BigQuery ML on e-commerce data analytics - Márton Kodok
With BigQuery ML, you can build machine learning models and train them on massive datasets without leaving the database environment. We will demonstrate common marketing machine learning use cases we handle at REEA.net, showing how to build, train, evaluate, and predict with your own scalable machine learning models using SQL in Google BigQuery, addressing the following use cases:
Customer Segmentation
Customer Lifetime Value (LTV) prediction
Conversion/Purchase prediction
The audience will get first-hand experience of writing the CREATE MODEL SQL syntax to build machine learning models such as:
Multiclass logistic regression for classification
K-means clustering
Import TensorFlow models for prediction in BigQuery
Models are trained and accessed in BigQuery using SQL, a language data analysts know. This enables business decision making through predictive analytics across the organization without leaving the query editor.
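As a minimal sketch of what that SQL looks like, the snippet below builds a CREATE MODEL statement and a matching ML.PREDICT query as plain strings. The dataset, table, and column names (`shop.transactions`, `will_purchase`, and so on) are assumptions for illustration only; the statement shape follows BigQuery ML's documented syntax.

```python
# Sketch: BigQuery ML statements assembled as plain SQL strings.
# Table and column names are hypothetical; in practice you would submit
# these through the BigQuery console, the bq CLI, or a client library.

def create_model_sql(model: str, model_type: str, label: str, source: str) -> str:
    """Return a BigQuery ML CREATE MODEL statement."""
    return (
        f"CREATE OR REPLACE MODEL `{model}`\n"
        f"OPTIONS(model_type='{model_type}', input_label_cols=['{label}']) AS\n"
        f"SELECT * FROM `{source}`"
    )

def predict_sql(model: str, source: str) -> str:
    """Return an ML.PREDICT query for a trained model."""
    return f"SELECT * FROM ML.PREDICT(MODEL `{model}`, (SELECT * FROM `{source}`))"

# Example: a purchase-prediction model on hypothetical e-commerce data.
train = create_model_sql(
    model="shop.purchase_model",
    model_type="logistic_reg",
    label="will_purchase",
    source="shop.transactions",
)
print(train)
print(predict_sql("shop.purchase_model", "shop.new_sessions"))
```

The same pattern covers the other use cases listed above; for instance, swapping `model_type='kmeans'` (and dropping the label column) yields a customer-segmentation model.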
Start Getting Your Feet Wet in Open Source Machine and Deep Learning - Ian Gomez
At H2O.ai we see a world where all software will incorporate AI, and we’re focused on bringing AI to business through software. H2O.ai is the maker behind H2O, the leading open source machine and deep learning platform for smarter applications and data products. H2O operationalizes data science by developing and deploying algorithms and models for R, Python and the Sparkling Water API for Spark.
In this webinar, you will learn about the scalable H2O core platform and the distributed algorithms it supports. H2O integrates seamlessly with the R and Python environments. We will show you how to leverage the power of H2O algorithms in R, Python, and the H2O Flow interface. Come with an open mind and some high-level knowledge of machine learning, and you will take away a stream of knowledge for your next ML/DL project.
Amy Wang is a math hacker at H2O, as well as the Sales Engineering Lead. She graduated from Hunter College in NYC with a Masters in Applied Mathematics and Statistics with a heavy concentration on numerical analysis and financial mathematics.
Her interest in applicable math eventually led her to big data and to finding the appropriate mediums for data analysis.
Desmond is a Senior Director of Marketing at H2O.ai. In his 15+ years of career in Enterprise Software, Desmond worked in Distributed Systems, Storage, Virtualization, MPP databases, Streaming Analytics Platform, and most recently Machine Learning. He obtained his Master’s degree in Computer Science from Stanford University and MBA degree from UC Berkeley, Haas School of Business.
Building Intelligent Apps with MongoDB and Google Cloud - Jane Fine - MongoDB
Intelligent apps are emerging as the next frontier in analytics and application development. Learn how to build intelligent apps on MongoDB powered by Google Cloud with TensorFlow for machine learning and DialogFlow for artificial intelligence. Get your developers and data scientists to finally work together to build applications that understand your customer, automate their tasks, and provide knowledge and decision support.
4. Google Data Centers
Google's data centers are the foundation of the entire Google Cloud platform. They provide compute power, storage, memory, and GPUs for our applications, and they also host the core of applications such as Gmail, YouTube, and Search.
● Speed
● Low latency
● Operational efficiency
● Energy efficiency
● Use of renewable energy
● Proximity to the user
● Information security
6. Big Data
End-to-end integrated Big Data solutions that let you capture, process, and store data on a single integrated platform, combining cloud-native services and managed open-source tools, for both real-time and batch workloads.
Big Data products: BigQuery, Cloud Dataflow, Cloud Dataproc, Cloud Datalab, Cloud Pub/Sub, Genomics
7. Big Data - BigQuery
Your fast, low-cost, fully managed enterprise data warehouse for analyzing large datasets.
● Flexible data ingestion.
● Global availability.
● Built-in security and permissions.
● Cost control.
● Highly available.
● Fully integrated.
● Connects with other Google products.
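To make the slide concrete (this example is not from the deck): an ad-hoc BigQuery query is just standard SQL. The sketch below builds one against the well-known public sample table `bigquery-public-data.samples.shakespeare`; the resulting SQL could be pasted into the BigQuery console or run with `bq query --use_legacy_sql=false`.

```python
# Sketch: a standard-SQL aggregate query for BigQuery, assembled as a
# string. bigquery-public-data.samples.shakespeare is a public sample
# table whose rows carry `word` and `word_count` columns.

TABLE = "bigquery-public-data.samples.shakespeare"

def top_words_sql(table: str = TABLE, limit: int = 10) -> str:
    """Return a query for the most frequent words across the corpus."""
    return (
        "SELECT word, SUM(word_count) AS total\n"
        f"FROM `{table}`\n"
        "GROUP BY word\n"
        "ORDER BY total DESC\n"
        f"LIMIT {limit}"
    )

print(top_words_sql())
```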
8. Big Data - Cloud Dataflow
A fully managed service and programming model for Big Data processing.
● Built-in resource management.
● On demand.
● Intelligent job execution.
● Autoscaling.
● Unified programming model.
● Open source.
● Monitoring.
● Integration.
● Reliable, consistent processing.
9. Big Data - Cloud Dataproc
A managed Spark and Hadoop service.
● Built-in cluster management.
● Resizable clusters.
● Integration.
● Versioning.
● Management tools.
● Initialization actions.
● Manual or automatic management.
● Flexible virtual machines.
10. Big Data
Datalab: a tool for exploring, analyzing, and visualizing Big Data.
Pub/Sub: a global real-time service for message handling and data streaming.
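To illustrate the publish/subscribe model that Pub/Sub provides as a managed global service, here is an in-process imitation built on the Python standard library. This sketches the messaging pattern only; it is not the Cloud Pub/Sub client API, and the topic and message contents are invented.

```python
import queue

# Minimal in-process imitation of publish/subscribe: a topic fans each
# published message out to every registered subscriber queue, so
# producers and consumers never talk to each other directly.

class Broker:
    def __init__(self):
        self.topics = {}  # topic name -> list of subscriber queues

    def subscribe(self, topic: str) -> "queue.Queue":
        q = queue.Queue()
        self.topics.setdefault(topic, []).append(q)
        return q

    def publish(self, topic: str, message: str) -> None:
        for q in self.topics.get(topic, []):
            q.put(message)

broker = Broker()
clicks = broker.subscribe("clickstream")  # e.g. a streaming pipeline
audit = broker.subscribe("clickstream")   # e.g. an archival consumer
broker.publish("clickstream", '{"user": 1, "page": "/home"}')
print(clicks.get(), audit.get())  # both subscribers receive the message
```

The decoupling shown here is the point of the pattern: new consumers can be attached to a topic without changing any publisher.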
11. Big Data
Dataprep: an intelligent data service for exploring, cleaning, and preparing structured or unstructured data for later analysis.
Data Studio: turn your data into reports and dashboards that are easy to create, easy to share, and fully customizable, from data sources such as BigQuery, Analytics, or YouTube.
12. Data Lifecycle Steps
Ingest: The first stage is to pull in the raw data, such as streaming data from devices, on-premises batch data, application logs, or mobile-app user events and analytics.
Store: After the data has been retrieved, it needs to be stored in a format that is durable and can be easily accessed.
Process & Analyze: In this stage, the data is transformed from raw form into actionable information.
Explore & Visualize: The final stage is to convert the results of the analysis into a format that is easy to draw insights from and to share with colleagues and peers.
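The four stages above can be sketched end-to-end in plain Python. The event data and stage functions below are invented for illustration; comments note where the GCP products from these slides would typically sit.

```python
# Toy end-to-end data lifecycle: ingest -> store -> process & analyze
# -> explore & visualize. In GCP terms: Pub/Sub for ingest, Cloud
# Storage/BigQuery for storage, Dataflow/Dataproc for processing,
# Datalab/Data Studio for exploration.
from collections import Counter

def ingest():
    # Raw events, e.g. mobile-app analytics pulled from a stream.
    return [{"user": u, "event": e}
            for u, e in [(1, "view"), (1, "buy"), (2, "view"), (3, "view")]]

def store(events):
    # Durable, easily accessed format; here simply a list of rows.
    return list(events)

def process(rows):
    # Transform raw rows into actionable information: counts per event.
    return Counter(row["event"] for row in rows)

def visualize(summary):
    # Render results in a form that is easy to draw insights from.
    return [f"{event}: {'#' * n}" for event, n in summary.most_common()]

report = visualize(process(store(ingest())))
print("\n".join(report))  # prints "view: ###" then "buy: #"
```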
14. Typical Big Data Jobs
● Programming
● Resource provisioning
● Performance tuning
● Monitoring
● Reliability
● Deployment & configuration
● Handling growing scale
● Utilization improvements
15. Big Data with Google
Focus on insights, not infrastructure. From batch to real-time, and from programming to understanding.
16. Data & Analytics
Cloud Dataproc: fully managed Hadoop and Spark with industry-leading performance.
BigQuery: fully managed data warehouse for large-scale analytics.
Cloud Dataflow: real-time data pipelines, with an open-source SDK via Apache Beam.
17. Separation of Storage and Compute
● Access any storage system from any processing tool.
● Keep as much data as you want, economically.
● Share data in place, no more FTP and copying.
Storage: BigQuery Storage (tables), Cloud Bigtable (NoSQL), Cloud Storage (files). Processing: BigQuery Analytics, Cloud Dataproc, Cloud Dataflow.
18. 10+ years of Big Data innovation - Open Source
[Timeline slide, 2002-2015] Google research papers: GFS, MapReduce, BigTable, Dremel, FlumeJava, Millwheel, PubSub, TensorFlow (open source). Corresponding Google Cloud products: BigQuery, Pub/Sub, Dataflow, Bigtable; the Dataflow model is open source as Apache Beam (incubating).
20. Machine Learning
The Google Cloud ML Platform provides modern machine learning services, with pre-trained models and a service for building your own models.
Machine Learning products: Cloud Machine Learning, Vision API, Speech API, Natural Language API, Translation API, Jobs API
21. Machine Learning - Cloud ML
Machine learning on any type and volume of data.
● Prediction at scale.
● Simple model building.
● Deep learning capabilities.
● Integration.
● HyperTune.
● Managed, scalable service.
● Portable models.
22. Machine Learning - APIs
Vision API: analyze images with the power of Google.
Speech API: convert speech to text with the power of the cloud.
23. Machine Learning - APIs
Natural Language API: draw insights from unstructured text with Cloud ML.
Translation API: translate on the fly between thousands of language pairs.
24. Machine Learning - APIs
Jobs API: manage your job board with Cloud ML.
Cloud Video Intelligence API: analyze and extract information from your videos.
25. References to stay up to date
Google Cloud Platform Blog
Google Cloud Platform website
GCP Twitter
Google+ GCP Community
GCP Podcast
Google Cloud Platform YouTube channel
26. Usage examples
When art meets big data: Analyzing 200,000 items from The Met collection in BigQuery
Today we’re adding a new public dataset to Google BigQuery: over 200,000 items from The Metropolitan Museum of Art (aka “The Met”), representing all its public domain art out of a total of 1.5 million art objects. The Met Museum Public Domain dataset includes metadata about each piece of art, along with one or more images of the artifact. Google and The Met Museum have been close collaborators for years through Google Arts & Culture, and we’re incredibly excited to bring the museum's public dataset to BigQuery.
27. Usage examples
Traveloka’s journey to stream analytics on Google Cloud Platform
Traveloka is a travel technology company based in Jakarta, Indonesia, currently operating in six countries. Founded in 2012 by former Silicon Valley engineers, its goal is to revolutionize human mobility.
One of the most strategic parts of our business is a streaming data processing pipeline that powers a number of use cases, including fraud detection, personalization, ads optimization, cross-selling, A/B testing, and promotion eligibility. That pipeline is also used by our business analysts for monitoring and understanding business metrics, both for historical analysis and in real time.
28. Usage examples
Getting Your Feet Wet in the Data Lake: Analytics 360 in BigQuery
Benefits for Data Engineers, Analysts and Marketers
As a Big Data platform, BigQuery offers benefits for multiple stages and roles in the Big Data process:
For marketers and analysts, you can run ad-hoc queries and get the results within minutes or seconds. The elusive quest for understanding online and offline attribution, user funnels, and long-term customer value comes within reach.
For data engineers, BigQuery offers a tremendous operational benefit, as outlined in the next section.
29. Usage examples
How WePay uses stream analytics for real-time fraud detection using GCP and Apache Kafka
When payments platform WePay was founded in 2008, MySQL was our only backend storage. It served its purpose well when data volume and traffic throughput were relatively low, but by 2016 our business was growing rapidly, and data volume and throughput were growing along with it. Consequently, we started to see performance degradation to the point where we could no longer run concurrent queries without a negative impact on latency.
Clearly, we needed a new stream analytics pipeline for fraud detection that would give us answers to queries in near-real time without affecting our main transactional business system. In this post, I’ll explain how we built and deployed such a pipeline to production using Apache Kafka and Google Cloud Platform (GCP) services like Google Cloud Dataflow and Cloud Bigtable.
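A core building block of such a pipeline, counting events per key inside a sliding time window and flagging keys that exceed a threshold, can be sketched in plain Python. This is a conceptual illustration only: the window size, threshold, and event shape are invented, and WePay's actual system runs on Kafka, Dataflow, and Bigtable as described above.

```python
from collections import defaultdict, deque

# Conceptual sketch of windowed counting for fraud detection: flag any
# account with more than `threshold` payment attempts inside a sliding
# window of `window` seconds. All values here are illustrative.

class WindowedCounter:
    def __init__(self, window: float = 60.0, threshold: int = 3):
        self.window = window
        self.threshold = threshold
        self.events = defaultdict(deque)  # account -> recent timestamps

    def observe(self, account: str, ts: float) -> bool:
        """Record an attempt; return True if the account looks suspicious."""
        q = self.events[account]
        q.append(ts)
        while q and q[0] < ts - self.window:  # evict events outside window
            q.popleft()
        return len(q) > self.threshold

detector = WindowedCounter(window=60.0, threshold=3)
for t in [0.0, 5.0, 10.0, 15.0]:
    flagged = detector.observe("acct-42", t)
print(flagged)  # fourth attempt within 60s exceeds the threshold -> True
```

In a real streaming system the same logic runs continuously over a partitioned event stream, which is what frameworks like Dataflow's windowing primitives provide at scale.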