This document provides an overview of deep learning and convolutional neural networks from David Solomon, an IBM executive architect. It begins with Solomon's background and credentials. It then defines deep learning, describes how neural networks learn feature hierarchies, and lists common deep learning techniques like convolutional neural networks for image recognition and recurrent neural networks for sequential data. The document explains how deep learning can learn complex patterns from large datasets using GPUs for fast training. It concludes with an example using the MNIST dataset of handwritten digits to demonstrate a simple convolutional neural network model in TensorFlow.
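To make the CNN example concrete without assuming the reader has TensorFlow installed, here is a minimal, plain-Python sketch of the 2D "valid" convolution that a CNN layer applies to an image (strictly speaking a cross-correlation, which is what deep learning frameworks actually compute); TensorFlow's `Conv2D` performs this per filter, then adds a bias and activation. The image and kernel values below are invented for illustration.

```python
# Illustrative only: the sliding-window operation at the heart of a
# convolutional layer, with no padding ("valid" mode) and stride 1.
def conv2d_valid(image, kernel):
    """Slide `kernel` over `image` (both lists of lists of numbers)
    and return the resulting feature map."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    feature_map = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        feature_map.append(row)
    return feature_map

# A tiny image with a dark-to-bright vertical step, and a 2x2
# vertical-edge-detecting kernel: the output lights up at the edge.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
kernel = [
    [1, -1],
    [1, -1],
]
edges = conv2d_valid(image, kernel)
```

Stacking many such learned filters, interleaved with pooling and nonlinearities, is what lets a CNN build the feature hierarchies the document describes, from edges up to digit shapes on MNIST.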
Introduction to Cloud Computing and Big Data - Hadoop, by Nagarjuna D.N
Cloud Computing Evolution
Why is Cloud Computing needed?
Cloud Computing Models
Cloud Solutions
Cloud Jobs opportunities
Criteria for Big Data
Big Data challenges
Technologies to process Big Data - Hadoop
Hadoop History and Architecture
Hadoop Eco-System
Hadoop Real-time Use cases
Hadoop Job opportunities
Hadoop and SAP HANA integration
Summary
This tutorial creates two machine learning models with IBM Watson Studio. The first is created in the UI and the second in a Jupyter Notebook using scikit-learn. We will save and deploy both models on IBM Cloud.
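As an illustrative sketch only (not the tutorial's actual notebook, and with invented toy data), the scikit-learn half of the workflow typically boils down to fitting an estimator and serializing it for deployment:

```python
# Hedged sketch: train a small scikit-learn model and serialize it, as one
# might before uploading a model for deployment. Data and names are invented.
import pickle
from sklearn.linear_model import LogisticRegression

# Toy, linearly separable data standing in for the tutorial's dataset.
X = [[0.0], [0.5], [1.0], [5.0], [5.5], [6.0]]
y = [0, 0, 0, 1, 1, 1]

model = LogisticRegression()
model.fit(X, y)

# Serialize the fitted model; deployment services generally accept a
# pickled or joblib-saved estimator.
blob = pickle.dumps(model)
restored = pickle.loads(blob)
predictions = list(restored.predict([[0.2], [5.8]]))
```

The UI-built model in the tutorial follows the same train-then-deploy shape, just without the code.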
During this presentation, we will introduce basic data science concepts and discuss a project carried out for one of our clients.
We will see how data science projects can easily be carried out using the R statistical programming language, and how R integrates into the new Microsoft SQL Server 2016 suite.
Cloud computing involves delivering computing services over the internet. It has three main components: client computers, distributed servers located in different geographic locations, and data centers housing servers and applications. There are three main service models: Software as a Service (SaaS) which provides required software; Platform as a Service (PaaS) which provides operating systems and networks; and Infrastructure as a Service (IaaS) which provides basic network access. Deployment models include public, private, hybrid, and community clouds based on access restrictions. Big data refers to very large amounts of digital data that cannot be analyzed with traditional techniques, and requires distributed processing across cloud infrastructure to gain insights.
Power BI offers customers rapid time-to-value by providing intuitive visual analytics tools that reduce the time needed to gain insights from data. It allows users to connect to various data sources, transform the data, create interactive visualizations and dashboards, and share insights collaboratively. Power BI provides a full stack of business intelligence capabilities including querying, modeling, visualizing, analyzing, and sharing on desktop, online, and mobile platforms.
Cloudera Breakfast: Advanced Analytics Part II: Do More With Your Data, by Cloudera, Inc.
This document discusses how Cloudera Enterprise Data Hub (EDH) can be used for advanced analytics. EDH allows users to perform diverse concurrent analytics on large datasets without moving the data. It includes tools for machine learning, graph analytics, search, and statistical analysis. EDH protects data through security features and system change tracking. The document argues that EDH is the only platform that can support all these analytics capabilities in a single, integrated system. It provides several examples of how advanced analytics on EDH have helped organizations, including government agencies, address important problems.
"Industrializing Machine Learning – How to Integrate ML in Existing Businesse...", by Dataconomy Media
"Industrializing Machine Learning – How to Integrate ML in Existing Businesses", Erik Schmiegelow, CEO at Hivemind Technologies AG
Watch more from Data Natives Berlin 2016 here: http://bit.ly/2fE1sEo
Visit the conference website to learn more: www.datanatives.io
Follow Data Natives:
https://www.facebook.com/DataNatives
https://twitter.com/DataNativesConf
Stay Connected to Data Natives by Email: Subscribe to our newsletter to get the news first about Data Natives 2017: http://bit.ly/1WMJAqS
About the Author:
Since 1996, Erik Schmiegelow has worked as a software architect and consultant, building large data processing platforms for companies such as NTT DoCoMo, Royal Mail, Siemens, E-Plus, Allianz and T-Mobile; and until 2001 he was CTO at the Cologne-based digital agency denkwerk.
In 2007 he founded the telecommunications consulting agency Itellity, followed by Hivemind Technologies in 2014. Hivemind Technologies is a solutions and services company focused on big data analytics and stream processing technologies for web, social data and industrial applications. Erik studied computer science in Hamburg.
AI for an intelligent cloud and intelligent edge: Discover, deploy, and manag..., by James Serra
Discover, manage, deploy, monitor – rinse and repeat. In this session we show how Azure Machine Learning can be used to create the right AI model for your challenge and then easily customize it using your development tools while relying on Azure ML to optimize them to run in hardware accelerated environments for the cloud and the edge using FPGAs and Neural Network accelerators. We then show you how to deploy the model to highly scalable web services and nimble edge applications that Azure can manage and monitor for you. Finally, we illustrate how you can leverage the model telemetry to retrain and improve your content.
IBM Watson Jeopardy! white paper which explains Watson’s workload optimised system design based on IBM DeepQA architecture and POWER7® processor-based servers
Democratization - New Wave of Data Science (홍운표 상무, DataRobot) :: AWS Techfor..., by Amazon Web Services Korea
This document discusses the democratization of data science and machine learning using automated machine learning tools. It provides examples of how DataRobot has helped customers in various industries build predictive models faster and with less coding than traditional approaches. Specifically, it summarizes how DataRobot has helped customers in banking, insurance, retail, and other industries with use cases like predictive maintenance, sales forecasting, fraud detection, customer churn prediction, and insurance underwriting.
Analytics in a Day Ft. Synapse Virtual Workshop, by CCG
Say goodbye to data silos! Analytics in a Day will simplify and accelerate your journey towards the modern data warehouse. Join CCG and Microsoft for a half-day virtual workshop, hosted by James McAuliffe.
This document provides an overview of Azure subscriptions and resources. It discusses the different types of Azure subscriptions including free, pay-as-you-go, CSP, and enterprise subscriptions. It also describes public and private cloud computing models. The document outlines how to identify regions and resource groups in Azure, and how to organize, remove, and monitor Azure resources and resource groups.
This document discusses the evolution of cluster computing and resource management. It describes how:
1) Early clusters were single-purpose and used technologies like MapReduce. General purpose cluster OSes like YARN emerged to allow multiple applications on a cluster.
2) YARN improved on Hadoop by decoupling the programming model from resource management, allowing more flexibility and better performance/availability.
3) REEF aims to further improve frameworks by factoring out common functionalities around communication, configuration, and fault tolerance.
Introduction to data governance, by Philippe Bourgeois, Senior Consultant at Trivadis. Talk given at the Swiss Data Forum, 24 November 2015, in Lausanne.
The document provides recommendations for a big data platform and architecture. It discusses why big data is important, how big data works involving collecting, storing, processing and analyzing data, and then consuming and visualizing insights. It considers whether to use cloud or on-premise solutions. Among cloud providers, it analyzes Google Cloud Platform, AWS, Microsoft Azure and Cloudera Cloud. It ultimately recommends Google Cloud Platform based on tools and support, platform expertise, performance, integration and flexibility. The document then outlines example big data architectures on GCP including a basic data lake workflow, real-time self-reporting dashboard, machine learning integration and data warehouse transition. It also discusses running GCP with existing on-premise systems and understanding
One Size Doesn't Fit All: The New Database Revolution, by Mark Madsen
Slides from a webcast for the database revolution research report (report will be available at http://www.databaserevolution.com)
Choosing the right database has never been more challenging, or potentially rewarding. The options available now span a wide spectrum of architectures, each of which caters to a particular workload. The range of pricing is also vast, with a variety of free and low-cost solutions now challenging the long-standing titans of the industry. How can you determine the optimal solution for your particular workload and budget? Register for this Webcast to find out!
Robin Bloor, Ph.D. Chief Analyst of the Bloor Group, and Mark Madsen of Third Nature, Inc. will present the findings of their three-month research project focused on the evolution of database technology. They will offer practical advice for the best way to approach the evaluation, procurement and use of today’s database management systems. Bloor and Madsen will clarify market terminology and provide a buyer-focused, usage-oriented model of available technologies.
Webcast video and audio will be available on the report download site as well.
Watson Analytics is a cloud-based analytics tool from IBM that leverages Watson technology to accelerate data discovery for business users. It provides semantic recognition of data concepts, identifies analysis starting points, and allows natural language interaction. The tool automates tasks like data preparation, generates insights and visualizations, and enables predictive analytics. It aims to make analytics more self-service, collaborative, and accessible to non-experts.
Joseph keynote @ Microsoft Data Amp, April 2017, by SeokJin Han
Joseph Sirosh, Corporate Vice President for the Data Group at Microsoft, gave his keynote at Microsoft Data Amp 2017. Watch the video at: https://www.microsoft.com/en-us/sql-server/data-amp#
Abstract. Enterprise adoption of AI/ML services has significantly accelerated in the last few years. However, the majority of ML models are still developed with the goal of solving a single task, e.g., prediction or classification. In this talk, Debmalya Biswas will present the emerging paradigm of Compositional AI, also known as Compositional Learning. Compositional AI envisions seamless composition of existing AI/ML services to provide a new (composite) AI/ML service, capable of addressing complex multi-domain use cases. In an enterprise context, this enables reuse, agility, and efficiency in development and maintenance efforts.
This document discusses developing analytics applications using machine learning on Azure Databricks and Apache Spark. It begins with an introduction to Richard Garris and the agenda. It then covers the data science lifecycle including data ingestion, understanding, modeling, and integrating models into applications. Finally, it demonstrates end-to-end examples of predicting power output, scoring leads, and predicting ratings from reviews.
A high level overview of common Cassandra use cases, adoption reasons, BigData trends, DataStax Enterprise and the future of BigData given at the 7th Advanced Computing Conference in Seoul, South Korea
Introduction to Big data with Hadoop & Spark | Big Data Hadoop Spark Tutorial..., by CloudxLab
Big Data with Hadoop & Spark Training: http://bit.ly/2k2wiL9
This CloudxLab Big Data with Hadoop and Spark tutorial helps you to understand Big Data in detail. Below are the topics covered in this tutorial:
1) Data Variety
2) What is Big Data?
3) Characteristics of Big Data - Volume, Velocity, and Variety
4) Why Big Data and why it is important now?
5) Example Big Data Customers
6) Big Data Solutions
7) What is Hadoop?
8) Hadoop Components
9) Apache Spark Introduction & Architecture
Do you have a true Big Data Analytics platform? What is a true Big Data Analytics platform? How can it help you capitalize on big data? What is needed to build one? This short introductory presentation can help you understand what a true Big Data Analytics platform is and how it really helps in building Big Data Analytics applications.
While many enterprises consider cloud computing the savior of their data strategy, there is a process they should follow when looking to leverage database-as-a-service. This includes understanding their own data requirements, selecting the right cloud computing candidate, and then planning for the migration and operations. A huge number of issues and obstacles will inevitably arise, but fortunately best practices are emerging. This presentation will take you through the process of moving data to cloud computing providers.
Simplifying Building Automation: Leveraging Semantic Tagging with a New Breed..., by Memoori
Memoori's 10th Webinar in the 2019 Smart Buildings Series. We spoke with Chris Irwin, VP Sales EMEA & Asia at J2 Innovations about the FIN 5 software framework and “Simplifying Building Automation by Leveraging Semantic Tagging with a New Breed of Software”.
Challenges of Operationalising Data Science in Production, by Iguazio
The presentation topic for this meet-up was covered in two sections without any breaks in between.
Section 1: Business Aspects (20 mins)
Speaker: Rasmi Mohapatra, Product Owner, Experian
https://www.linkedin.com/in/rasmi-m-428b3a46/
Once your data science application is in production, there are many operational challenges typically experienced today across business domains; we will cover a few of these challenges with example scenarios.
Section 2: Tech Aspects (40 mins, slides & demo, Q&A )
Speaker: Santanu Dey, Solution Architect, Iguazio
https://www.linkedin.com/in/santanu/
In this part of the talk, we will cover how these operational challenges can be overcome, e.g. automating data collection and preparation, making ML models portable and deploying them in production, monitoring and scaling, etc., with relevant demos.
It is a fascinating, explosive time for enterprise analytics.
It is from the position of analytics leadership that the mission will be executed and company leadership will emerge. The data professional sits squarely on the performance of the company in this information economy and has an obligation to demonstrate the possibilities and originate the architecture, data, and projects that will deliver analytics. After all, no matter what business you're in, you're in the business of analytics.
The coming years will be full of big changes in enterprise analytics and Data Architecture. William will kick off the fourth year of the Advanced Analytics series with a discussion of the trends winning organizations should build into their plans, expectations, vision, and awareness now.
ADV Slides: What the Aspiring or New Data Scientist Needs to Know About the E..., by DATAVERSITY
Some data scientists are well grounded in delivering results in the enterprise, but many come from outside – from academia, from PhD programs and research. They have the necessary technical skills, but those skills don't count until their product gets to production and into use. The speaker recently helped a struggling data scientist understand his organization and how to create success in it. That experience turned into this presentation, because many new data scientists struggle with the complexities of an enterprise.
How to Get Cloud Architecture and Design Right the First Time, by David Linthicum
The document discusses best practices for designing cloud architecture and getting cloud implementation right the first time. It covers proper ways to leverage, design, and build cloud-based systems and infrastructure, going beyond hype to advice from those with real-world experience making cloud computing work. The document provides guidance on common mistakes to avoid and emerging architectural patterns to follow.
Building machine learning muscle in your team, and transitioning them to doing machine learning at scale. We also discuss Spark and other relevant technologies.
Bridging the Gap: Analyzing Data in and Below the CloudInside Analysis
The Briefing Room with Dean Abbott and Tableau Software
Live Webcast July 23, 2013
http://www.insideanalysis.com
Today’s desire for analytics extends well beyond the traditional domain of Business Intelligence. That’s partly because business users are realizing the value of mixing and matching all kinds of data, from all kinds of sources. One emerging market driver is Cloud-based data, and the desire companies have to analyze this data cohesively with their on-premise data sets.
Register for this episode of The Briefing Room to learn from Analyst Dean Abbott, who will explain how the ability to access data in the cloud can play a critical role for generating business value from analytics. He’ll be briefed by Ellie Fields of Tableau Software who will tout Tableau’s latest release, which includes native connectors to cloud-based applications like Salesforce.com, Amazon Redshift, Google Analytics and BigQuery. She’ll also demonstrate how Tableau can combine cloud data with other data sources, including spreadsheets, databases, cubes and even Big Data.
The document discusses leveraging the cloud to architect digital solutions. It covers state-of-the-art IoT technology, machine learning clustering and classification prototypes, Cortana analytics, and patterns and anti-patterns for building solutions. The document demonstrates table storage and machine learning clustering of data. It presents an Azure IoT reference architecture and discusses visualizing machine learning results and deriving business value from big data.
Engineering Machine Learning Data Pipelines Series: Streaming New Data as It ..., by Precisely
This document discusses engineering machine learning data pipelines and addresses five big challenges: 1) scattered and difficult to access data, 2) data cleansing at scale, 3) entity resolution, 4) tracking data lineage, and 5) ongoing real-time changed data capture and streaming. It presents DMX Change Data Capture as a solution to capture changes from various data sources and replicate them in real-time to targets like Kafka, HDFS, databases and data lakes to feed machine learning models. Case studies demonstrate how DMX-h has helped customers like a global hotel chain and insurance and healthcare companies build scalable data pipelines.
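To make the fifth challenge concrete, here is a deliberately simplified, hypothetical sketch of what changed data capture computes (not Precisely's DMX API, and not how production CDC works internally, which reads the database transaction log rather than diffing snapshots): detect inserts, updates, and deletes between two states of a table, so that only the changes need to be streamed downstream.

```python
# Illustrative only: diff two {key: row} snapshots of a table into a list
# of change events, the kind of stream a CDC tool feeds into Kafka, HDFS,
# or a data lake for downstream ML pipelines.
def capture_changes(before, after):
    """Return (kind, key, row) events describing how `before` became `after`."""
    events = []
    for key, row in after.items():
        if key not in before:
            events.append(("insert", key, row))
        elif before[key] != row:
            events.append(("update", key, row))
    for key in before:
        if key not in after:
            events.append(("delete", key, before[key]))
    return events

before = {1: {"name": "Alice"}, 2: {"name": "Bob"}}
after = {1: {"name": "Alice"}, 2: {"name": "Robert"}, 3: {"name": "Cara"}}
events = capture_changes(before, after)
```

Real CDC tools capture the same three event kinds continuously from the source's log, which is what makes real-time replication feasible without repeatedly rescanning the table.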
The document discusses Microsoft's approach to implementing a data mesh architecture using their Azure Data Fabric. It describes how the Fabric can provide a unified foundation for data governance, security, and compliance while also enabling business units to independently manage their own domain-specific data products and analytics using automated data services. The Fabric aims to overcome issues with centralized data architectures by empowering lines of business and reducing dependencies on central teams. It also discusses how domains, workspaces, and "shortcuts" can help virtualize and share data across business units and data platforms while maintaining appropriate access controls and governance.
Modern apps and services are leveraging data to change the way we engage with users in a more personalized way. Skyla Loomis talks big data, analytics, NoSQL, SQL and how IBM Cloud is open for data.
Learn more by visiting our Bluemix Hybrid page: http://ibm.co/1PKN23h
As part of this session, I will be giving an introduction to Data Engineering and Big Data. It covers up to date trends.
* Introduction to Data Engineering
* Role of Big Data in Data Engineering
* Key Skills related to Data Engineering
* Role of Big Data in Data Engineering
* Overview of Data Engineering Certifications
* Free Content and ITVersity Paid Resources
Don't worry if you miss the video - you can click on the below link to go through the video after the schedule.
https://youtu.be/dj565kgP1Ss
* Upcoming Live Session - Overview of Big Data Certifications (Spark Based) - https://www.meetup.com/itversityin/events/271739702/
Relevant Playlists:
* Apache Spark using Python for Certifications - https://www.youtube.com/playlist?list=PLf0swTFhTI8rMmW7GZv1-z4iu_-TAv3bi
* Free Data Engineering Bootcamp - https://www.youtube.com/playlist?list=PLf0swTFhTI8pBe2Vr2neQV7shh9Rus8rl
* Join our Meetup group - https://www.meetup.com/itversityin/
* Enroll for our labs - https://labs.itversity.com/plans
* Subscribe to our YouTube Channel for Videos - http://youtube.com/itversityin/?sub_confirmation=1
* Access Content via our GitHub - https://github.com/dgadiraju/itversity-books
* Lab and Content Support using Slack
Smarter businesses apply AI to learn and continuously evolve the way they work. To extract full value from AI, companies need data strategy that gives them access to all their data – no matter where it lives – in an environment that easily scales and applies the latest discovery technology including advanced analytics, visualization and AI. Learn how IBM Watson and Data provides all the tools companies need to embed AI, machine learning and deep learning in their business, while enabling professionals to gain the most from their data to drive smarter business and lead industry-changing transformations.
This document provides an overview of how to build your own personalized search and discovery tool like Microsoft Delve by combining machine learning, big data, and SharePoint. It discusses the Office Graph and how signals across Office 365 are used to populate insights. It also covers big data concepts like Hadoop and machine learning algorithms. Finally, it proposes a high-level architectural concept for building a Delve-like tool using Azure SQL Database, Azure Storage, Azure Machine Learning, and presenting insights.
How to build your own Delve: combining machine learning, big data and SharePointJoris Poelmans
You are experiencing the benefits of machine learning everyday through product recommendations on Amazon & Bol.com, credit card fraud prevention, etc… So how can we leverage machine learning together with SharePoint and Yammer. We will first look into the fundamentals of machine learning and big data solutions and next we will explore how we can combine tools such as Windows Azure HDInsight, R, Azure Machine Learning to extend and support collaboration and content management scenarios within your organization.
Denodo DataFest 2016: Big Data Virtualization in the CloudDenodo
Watch the full session: Denodo DataFest 2016 sessions: https://goo.gl/kahTgf
Many firms are adopting “cloud first” strategy and are migrating their on-premises technologies to the cloud. Logitech is one of them. They have adopted the AWS platform and big data on the cloud for all of their analytical needs, including Amazon Redshift and S3.
In this presentation, the Principal of Big Data and Analytics team at Logitech, Avinash Deshpande will present:
• The business rationale for migrating to the cloud
• How data virtualization enables the migration
• Running data virtualization itself in the cloud
This session also includes a panel discussion with:
• Avinash Deshpande, Principal – Big Data and Analytics at Logitech
• Kurt Jackson, Platform Lead at Autodesk
• Dan Young, Chief Data Architect at Indiana University
• Paul Moxon, Head of Product Management at Denodo (as moderator)
This session is part of the Denodo DataFest 2016 event. You can also watch more Denodo DataFest sessions on demand here: https://goo.gl/VXb6M6
Want to know more about Common Data Model and Service? You need to understant what's the difference between CDS for Apps and Analytics? Feel free to use these slides and send me your feed backs.
This document discusses security considerations for Software as a Service (SaaS) applications. It notes that SaaS providers implement some security controls, but they may not meet all organizational requirements. It recommends using Cloud Access Security Brokers (CASBs) to enforce enterprise security policies for cloud applications and gain visibility into user activity. The document outlines CASB architecture options and benefits, such as detecting shadow IT, controlling SaaS access, and protecting company data in SaaS applications. It emphasizes starting with a small implementation and adding functionality over time.
IBM Cloud Pak for Data is a unified platform that simplifies data collection, organization, and analysis through an integrated cloud-native architecture. It allows enterprises to turn data into insights by unifying various data sources and providing a catalog of microservices for additional functionality. The platform addresses challenges organizations face in leveraging data due to legacy systems, regulatory constraints, and time spent preparing data. It provides a single interface for data teams to collaborate and access over 45 integrated services to more efficiently gain insights from data.
What is Data as a Service, by a T-Mobile Principal Technical PM
14. How much data do we generate every day?
• 2.5 quintillion bytes of data are created each day at our current pace, and that pace is only accelerating with the growth of the Internet of Things (IoT).
• Over the last two years alone, 90 percent of the data in the world was generated.
• Internet:
• More than 3.7 billion humans use the internet (a growth rate of 7.5 percent over 2016).
• On average, Google now processes more than 40,000 searches EVERY second (3.5 billion searches per day).
• Social media (every minute):
• Snapchat users share 527,760 photos
• More than 120 professionals join LinkedIn
• Users watch 4,146,600 YouTube videos
• 456,000 tweets are sent on Twitter
• Instagram users post 46,740 photos
• Communication (every minute): 16 million text messages, 990,000 Tinder swipes, and 156 million emails are sent, plus 103,447,520 spam emails; worldwide, 2.9 billion email users are expected by 2019.
• Services (every minute): the Weather Channel receives 18,055,556 forecast requests, and Uber riders take 45,788 trips!
• IoT:
• https://web-assets.domo.com/blog/wp-content/uploads/2017/07/17_domo_data-never-sleeps-5-01.png
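As a quick sanity check, the slide's per-second search rate follows directly from its daily total (a small arithmetic sketch, not a figure from the source):

```python
# The slide's Google figures are self-consistent:
# 3.5 billion searches per day works out to over 40,000 per second.
searches_per_day = 3_500_000_000
seconds_per_day = 24 * 60 * 60  # 86,400

searches_per_second = searches_per_day / seconds_per_day
print(f"{searches_per_second:,.0f} searches per second")  # ≈ 40,509
```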
18. What is the Practice of Machine Learning?
• ML algorithms are great at finding patterns that look like what you tell them to look for.
• They will not help you figure out what type of patterns makes sense for your problem.
• They will always find patterns, but will not quantify how strong these patterns really are in the data.
This is ML interpreted narrowly: understand how the models work.
This is the practice of ML: understand additionally when each model is appropriate, how to evaluate success, and what tools are available for tailoring each model to your specific problem.
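The point that algorithms "will always find patterns" is easy to demonstrate. In this hedged sketch (standard library only, not from the slides), both the target and all twenty candidate features are pure noise, yet scanning for the "best" feature still turns up an apparently meaningful correlation:

```python
import random

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(42)
n_samples, n_features = 50, 20

# A target and 20 candidate "features" -- all of them pure noise.
target = [random.gauss(0, 1) for _ in range(n_samples)]
features = [[random.gauss(0, 1) for _ in range(n_samples)]
            for _ in range(n_features)]

# Scan for the strongest correlate: a spurious "pattern" always shows up.
best = max(abs(pearson(f, target)) for f in features)
print(f"strongest spurious correlation: {best:.2f}")
```

Quantifying how strong (or how spurious) such a pattern really is takes the statistical judgment the slide describes, not the algorithm itself.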
24. Probabilities: The Monty Hall Problem
https://en.wikipedia.org/wiki/Monty_Hall_problem
http://www.montyhallproblem.com
Scenario 1: You decide to stay with your original choice of door.
Probability of winning the prize in scenario 1: 1/3 + 0 = 1/3 (by the product rule and the sum of disjoint events).
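The 1/3 figure for staying is easy to verify empirically. A minimal Monte Carlo sketch (standard library only) simulates both the "stay" and "switch" strategies:

```python
import random

def play(switch, trials=100_000):
    """Simulate Monty Hall; return the empirical win rate for one strategy."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)   # door hiding the prize
        choice = random.randrange(3)  # contestant's first pick
        # Host opens a losing door that the contestant did not pick.
        opened = next(d for d in range(3) if d != choice and d != prize)
        if switch:
            choice = next(d for d in range(3) if d != choice and d != opened)
        wins += (choice == prize)
    return wins / trials

random.seed(0)
stay_rate, switch_rate = play(switch=False), play(switch=True)
print(f"stay: {stay_rate:.3f}   switch: {switch_rate:.3f}")  # ≈ 0.333 vs 0.667
```

Staying wins about a third of the time, and switching wins the remaining two thirds, exactly as the analytic argument predicts.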
28. ML Themes & Best Practices
• Themes
• Bias/variance tradeoff
• Optimality/efficiency tradeoff
• There is never a "right" answer
• Different tools and models are appropriate in different situations
• Best practices
• Use cross-validation for parameter tuning
• Maintain a separate validation set
• Measure quality with precision, recall, and F1
• Use packages (don't write your own algorithms, unless it's for learning)
• Get to know your data
• Try lots of different things, but explore in a principled way
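Two of these practices, cross-validation and precision/recall/F1, can be sketched in a few lines. In real work you would use a package such as scikit-learn, per the slide's own advice; this standard-library toy (hypothetical threshold "model" and data) just makes the mechanics concrete:

```python
def precision_recall_f1(y_true, y_pred):
    """Compute the three quality metrics from the slide for binary labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

def cross_validate(xs, ys, fit, k=5):
    """k-fold CV with interleaved folds: train on k-1 folds, score the held-out one."""
    f1_scores = []
    for i in range(k):
        train = [(x, y) for j, (x, y) in enumerate(zip(xs, ys)) if j % k != i]
        test = [(x, y) for j, (x, y) in enumerate(zip(xs, ys)) if j % k == i]
        model = fit(train)
        _, _, f1 = precision_recall_f1([y for _, y in test],
                                       [model(x) for x, _ in test])
        f1_scores.append(f1)
    return f1_scores

# Toy data: label is 1 when x > 0.5; the "model" learns a threshold (training mean).
xs = [i / 20 for i in range(20)]
ys = [int(x > 0.5) for x in xs]
fit = lambda train: (lambda x, t=sum(x for x, _ in train) / len(train): int(x > t))

scores = cross_validate(xs, ys, fit, k=5)
print(scores)  # [1.0, 1.0, 1.0, 1.0, 1.0]
```

The per-fold scores are what you would compare across parameter settings when tuning, keeping a separate validation set untouched for the final check.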
29. Digital Transformation
• Digital transformation is the integration of digital technology into all areas of business, fundamentally changing how you operate and deliver value to customers.
• Customer experience
• Operational agility
• Culture and leadership
• Workforce enablement
• Digital technology integration
• NoSQL databases:
a. Column: each storage block contains data from only one column. Ex: Accumulo, Cassandra, Scylla, Apache Druid, HBase, Vertica.
b. Document: stores documents made up of tagged elements. Ex: Apache CouchDB, ArangoDB, BaseX, Clusterpoint, Couchbase, Cosmos DB, IBM Domino, MarkLogic, MongoDB, OrientDB, Qizx, RethinkDB.
c. Key-value: a big hash table of keys and values. Ex: Aerospike, Apache Ignite, ArangoDB, Berkeley DB, Couchbase, Dynamo, FoundationDB, InfinityDB, MemcacheDB, MUMPS, Oracle NoSQL Database, OrientDB, Redis, Riak, SciDB, SDBM/Flat File dbm, ZooKeeper.
d. Graph: a network database that uses edges and nodes to represent and store data. Ex: AllegroGraph, ArangoDB, InfiniteGraph, Apache Giraph, MarkLogic, Neo4j, OrientDB, Virtuoso.
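The "big hash table" description of key-value stores can be made concrete with a toy in-memory sketch (hypothetical class, not any of the products listed; real stores such as Redis add persistence, expiry, and networking on top of the same get/set model):

```python
class ToyKeyValueStore:
    """A minimal key-value store: one big hash table with get/set/delete."""

    def __init__(self):
        self._table = {}

    def set(self, key, value):
        self._table[key] = value

    def get(self, key, default=None):
        return self._table.get(key, default)

    def delete(self, key):
        """Return True if the key existed and was removed."""
        return self._table.pop(key, None) is not None

store = ToyKeyValueStore()
store.set("user:42:name", "Ada")            # keys are often namespaced strings
store.set("user:42:cart", ["book", "pen"])  # values can be arbitrary blobs
print(store.get("user:42:name"))            # Ada
print(store.delete("user:42:cart"))         # True
```

Lookups go straight through the hash table, which is why this family of stores trades rich querying for very fast reads and writes by key.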
30. Data as a Service
DaaS is an approach to making data available whenever it is needed, and it fits into the larger Service-Oriented Architecture (SOA) design pattern. DaaS is an approach, within SOA, that values, shares, and focuses on data.
Why Data as a Service?
First of all, creating a bunch of services that move data around does not constitute DaaS unless they are designed to yield certain benefits. Here are key benefits that motivate and define DaaS:
1. Valuable, re-usable, and uniform
Data services in a DaaS environment should have value across multiple projects, and the data formats and data services should be designed to both outlast and exceed the value of the particular systems that first use them.
2. Secure
Security, in particular, must be uniform and ubiquitous. It is a barrier to adoption if some underlying systems use different security models. Different groups will not share data without built-in security, and too much data without controls becomes a privacy and compliance risk.
3. Virtual data and abstraction
Data services should abstract away from underlying data stores and locations, including "silo busting" combinations of data from multiple sources in multiple formats, presented seamlessly as one service.
4. Do not focus on the plumbing (enabling technologies)
Think about what would happen if you hired a plumber as the architect for your house. You'd likely end up with pipes, valves, and other exposed internals running through your living room, complicating and cluttering rather than making your house livable. Plumbing should be hidden and transparent, and should invisibly enable your structure to function.
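Benefit 3, virtual data and abstraction, is the heart of the pattern. A hedged sketch (hypothetical class, field, and source names; standard library only) shows one service presenting two differently-shaped "silos" as a single uniform result:

```python
# Hypothetical "silo-busting" data service: two underlying sources with
# different shapes are presented to callers as one uniform customer view.

CRM_ROWS = [{"cust_id": 1, "full_name": "Ada Lovelace"}]    # silo A: table rows
BILLING_DOCS = {1: {"plan": "pro", "balance_usd": 12.50}}   # silo B: documents

class CustomerDataService:
    """Callers see one format; where the data lives is an internal detail."""

    def get_customer(self, customer_id):
        crm = next(r for r in CRM_ROWS if r["cust_id"] == customer_id)
        billing = BILLING_DOCS[customer_id]
        # The service's own format, independent of either silo's schema:
        return {
            "id": customer_id,
            "name": crm["full_name"],
            "plan": billing["plan"],
            "balance": billing["balance_usd"],
        }

service = CustomerDataService()
print(service.get_customer(1))
```

Either silo could later be replaced (say, the CRM table by a search engine) without changing what callers receive, which is exactly the abstraction the slide is asking for.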
31. Data as a Service
DaaS is an architectural pattern. Most developers know how to add SOAP services with WSDL definitions or REST calls, passing XML (or JSON or RDF) around. This technique may be necessary, but doing so gratuitously does not help create a DaaS architecture. Focus on data formats and service definitions, not on the protocols and technologies used to expose and wire them up.
5. Don't confuse DaaS with cloud
Just as plumbing is an enabling technology, cloud computing is an infrastructure approach. DaaS is about the architecture, so it must focus on how data is formatted and transmitted, and on the interfaces between subsystems. What servers a system runs on is very important, but should not be confused with the DaaS pattern. Yes, you can put a server hosting DaaS services in the cloud.
6. Understand why DaaS is not an enterprise data warehouse
EDW efforts often fail because of modeling complexity. DaaS is more agile in that you can roll out individual services without modeling your entire enterprise first. A "big design up front" modeling exercise that involves underlying databases and E-R diagrams will have the same failure modes as a large enterprise data warehouse. Which is to say: many.
7. Forget about relational modeling (at least at first)
In DaaS, the service formats are king: the "wire" formats used to integrate components in your enterprise. The point is to abstract away from which underlying system or systems participate in serving the request, including abstracting away from your relational database and its physical model. The underlying systems could be microservices backed by relational databases, search engines, or NoSQL databases.
8. Don't let data service development be anyone's second job
A mentor of mine once pointed out that "every organization is destined to build an enterprise architecture that mirrors their org chart," and I have found that to be absolutely true. If you let your business service modelers, developers, or DBAs define your data services, you will end up with services that are only good for the immediate task at hand and do not provide the lasting value and abstractions you need for a good DaaS architecture.
Instead, empower a team to own the data services and take a stand for clean data services that yield lasting value. That debate and negotiation will improve your entire enterprise.
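Point 7's "wire formats are king" can be illustrated with a short sketch (hypothetical record and field names; standard library only): the service contract is defined first, as a JSON document shape, and any backend that can populate it is interchangeable:

```python
import json
from dataclasses import dataclass, asdict

# The wire format is defined first and owned by the data-services team;
# it does not mirror any one database's physical model.
@dataclass
class ProductRecord:
    sku: str
    name: str
    price_usd: float
    in_stock: bool

def to_wire(record: ProductRecord) -> str:
    """Serialize to the agreed JSON wire format."""
    return json.dumps(asdict(record), sort_keys=True)

# Any backend (relational, search engine, NoSQL) just has to fill the contract.
record = ProductRecord(sku="A-100", name="Widget", price_usd=9.99, in_stock=True)
wire = to_wire(record)
print(wire)
# {"in_stock": true, "name": "Widget", "price_usd": 9.99, "sku": "A-100"}
```

Starting from the wire format, rather than an enterprise-wide E-R diagram, is what lets individual services roll out independently, per point 6.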
33. www.productschool.com
Part-time Product Management, Coding, Data Analytics, Digital Marketing, UX Design, and Product Leadership courses in San Francisco, Silicon Valley, New York, Santa Monica, Los Angeles, Austin, Boston, Boulder, Chicago, Denver, Orange County, Seattle, Bellevue, Washington DC, Toronto, London, and online.