The relationships between data sets matter. Discovering, analyzing, and learning those relationships is central to expanding our understanding, and is a critical step toward being able to predict and act upon the data. Unfortunately, these are not always simple or quick tasks.
To help the analyst we introduce RAPIDS, a collection of open-source libraries, incubated by NVIDIA and focused on accelerating the complete end-to-end data science ecosystem. Graph analytics is a critical piece of the data science ecosystem for processing linked data, and RAPIDS is pleased to offer cuGraph as our accelerated graph library.
Simply accelerating algorithms addresses only a portion of the problem. To address the full problem space, RAPIDS cuGraph strives to be feature-rich, easy to use, and intuitive. Rather than limiting the solution to a single graph technology, cuGraph supports property graphs, knowledge graphs, hypergraphs, bipartite graphs, and basic directed and undirected graphs.
A Python API allows the data to be manipulated as a DataFrame, similar to and compatible with Pandas, with inputs and outputs shared across the full RAPIDS suite, for example with the RAPIDS machine learning package, cuML.
This talk will present an overview of RAPIDS and cuGraph, discuss and show examples of how to manipulate and analyze bipartite and property graphs, and show how data can be shared with machine learning algorithms. The talk will include some performance and scalability metrics, then conclude with a preview of upcoming features, like graph query language support, and the general RAPIDS roadmap.
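As a sketch of the workflow the abstract describes (DataFrame in, graph analytic, DataFrame out), here is a CPU-only illustration using pandas and networkx as stand-ins for cuDF and cuGraph; the actual RAPIDS calls (e.g. `cugraph.Graph.from_cudf_edgelist`, `cugraph.pagerank`) mirror this shape but execute on the GPU. The edge data below is invented for illustration.

```python
# CPU-only sketch of the cuGraph workflow: pandas stands in for cuDF
# and networkx for cuGraph. The RAPIDS equivalents follow the same
# DataFrame-in / DataFrame-out pattern on the GPU.
import pandas as pd
import networkx as nx

# Edge list held as a DataFrame -- the shared currency across the suite.
edges = pd.DataFrame({
    "src": ["a", "a", "b", "c"],
    "dst": ["b", "c", "c", "a"],
})

# Build a directed graph from the DataFrame...
G = nx.from_pandas_edgelist(edges, source="src", target="dst",
                            create_using=nx.DiGraph)

# ...run an analytic, and return the result as a DataFrame that a
# downstream ML library (cuML in the RAPIDS case) could consume directly.
pr = nx.pagerank(G)
result = pd.DataFrame(list(pr.items()), columns=["vertex", "pagerank"])
print(result.sort_values("pagerank", ascending=False))
```

The key design point is that every stage speaks DataFrames, so graph results can flow straight into feature tables for machine learning without format conversion.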
Multi-faceted Classification of Big Data Use Cases and Proposed Architecture ... - Geoffrey Fox
Keynote at the Sixth International Workshop on Cloud Data Management (CloudDB 2014), Chicago, March 31, 2014.
Abstract: We introduce the NIST collection of 51 use cases and describe their scope over industry, government and research areas. We look at their structure from several points of view or facets covering problem architecture, analytics kernels, micro-system usage such as flops/bytes, application class (GIS, expectation maximization) and very importantly data source.
We then propose that in many cases it is wise to combine the well-known commodity best-practice (often Apache) Big Data Stack (with ~120 software subsystems) with high-performance computing technologies.
We describe this and give early results based on clustering running with different paradigms.
We identify key layers where HPC-Apache integration is particularly important: file systems, cluster resource management, file and object data management, inter-process and thread communication, analytics libraries, and workflow and monitoring.
See
[1] A Tale of Two Data-Intensive Paradigms: Applications, Abstractions, and Architectures, Shantenu Jha, Judy Qiu, Andre Luckow, Pradeep Mantha and Geoffrey Fox, accepted in IEEE BigData 2014, available at: http://arxiv.org/abs/1403.1528
[2] High Performance High Functionality Big Data Software Stack, G Fox, J Qiu and S Jha, in Big Data and Extreme-scale Computing (BDEC), 2014. Fukuoka, Japan. http://grids.ucs.indiana.edu/ptliupages/publications/HPCandApacheBigDataFinal.pdf
Before moving to Hadoop, one must understand why we need it, given that all sorts of RDBMSs are already available in the market. This presentation gives a good understanding of big data and enables you to properly analyze use cases for big data problems.
Big Data Analytics on Hadoop RainStor Infographic - RainStor
A look at how RainStor's compression helps solve the cost, complexity, and compliance-risk challenges of managing big data on Hadoop. RainStor runs natively on Hadoop, integrates with YARN and Hue, and can be accessed through Hive, Pig, or MapReduce.
International Journal of Engineering Research and Development (IJERD) - IJERD Editor
Data Science as a Service: Intersection of Cloud Computing and Data Science - Pouria Amirian
Dr. Pouria Amirian explains data science and the steps in a data science workflow, and shows some experiments in AzureML. He also discusses big data issues in a data science project and solutions to them.
Covers the basics of artificial neural networks and the motivation for deep learning, and explains certain deep learning networks, including deep belief networks and autoencoders. It also details the challenges of implementing a deep learning network at scale and explains how we have implemented a distributed deep learning network over Spark.
Course in Big Data Analytics in association with IBM
Every day a huge amount of data is created. This data comes from everywhere: sensors used to gather climate information, posts to social media sites, digital pictures and videos, purchase transaction records, and cell phone GPS signals, to name a few. This data is Big Data.
Big data is a blanket term for any collection of data sets so large and complex that they become difficult to process using on-hand data management tools or traditional data processing applications. The challenges include capture, storage, search, sharing, transfer, analysis, and visualization. Anyone with knowledge of Java, basic UNIX, and basic SQL can opt for a Big Data training course.
Presented by Michelle Hirsch, Head of MATLAB Product Management, MathWorks, on 28th April in Bangalore at the joint languages meetup @Walmart.
Companies are scrambling to get insight from the massive quantities of data they collect but are struggling to find employees who combine the deep expertise in computer science, statistics and machine learning, and the domain expertise to truly understand the data. In this talk, Dr. Hirsch discusses how MATLAB enables engineers and scientists to apply their domain expertise to big data analytics.
Highlights:
* Accessing data in large text files, databases, or from the Hadoop Distributed File System (HDFS)
* Using virtual “tall” arrays to process out-of-core data with natural mathematical syntax
* Developing machine learning models
* Integrating MATLAB analytics into production systems
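The "tall" array idea in the highlights above can be sketched in plain Python as a CPU-side analogue (not MATLAB's actual implementation): chunked reads keep memory bounded while a running reduction yields the same answer as an in-memory mean. The column name and chunk size below are illustrative.

```python
# Rough analogue of MATLAB "tall" arrays: process a file too big for
# memory by streaming it in chunks and reducing incrementally.
import pandas as pd

def tall_mean(csv_path, column, chunksize=100_000):
    """Accumulate a running sum and count over chunks, never loading
    the whole file, then reduce to the mean at the end -- the same
    deferred-evaluation idea tall arrays automate for you."""
    total, count = 0.0, 0
    for chunk in pd.read_csv(csv_path, usecols=[column],
                             chunksize=chunksize):
        total += chunk[column].sum()
        count += len(chunk)
    return total / count
```

Tall arrays go further by letting you write ordinary mathematical syntax and deferring the chunked execution until a result is requested; this sketch only shows the underlying streaming reduction.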
About the speaker: Michelle Hirsch, Ph.D. is responsible for driving strategy and direction for MATLAB, the leading programming platform for engineers and scientists. Based outside of Boston, Massachusetts, Michelle is joining our meetup during a trip to meet with MATLAB users across India.
Supporting data: https://www.slideshare.net/CodeOps/flight-test-analysis-final
Big Data, Big Computing, AI, and Environmental Science - Ian Foster
I presented to the Environmental Data Science group at UChicago, with the goal of getting them excited about the opportunities inherent in big data, big computing, and AI--and to think about how to collaborate with Argonne in those areas. We had a great and long conversation about Takuya Kurihana's work on unsupervised learning for cloud classification. I also mentioned our work making NASA and CMIP data accessible on AI supercomputers.
Learn how recent innovation at Google allows you to produce intelligence from IoT data. We will look at some use cases and you will get an overview of the building blocks we use to build truly intelligent IoT solutions in the cloud and on the edge.
An overview of multi-modal image registration methods and their application with the Insight Toolkit (ITK)
PyData Triangle
November 1st, 2017
Matthew McCormick, PhD, Kitware, Inc
How to expand the Galaxy from genes to Earth in six simple steps (and live sm... - Raffaele Montella
FACE-IT is an effort to develop a new IT infrastructure to accelerate existing disciplinary research and enable information transfer among traditionally separate fields. At present, finding data and processing it into usable form can dominate research efforts. By providing ready access not only to data but also to the software tools used to process it for specific uses (e.g., climate impact and economic model inputs), FACE-IT allows researchers to concentrate their efforts on analysis. Lowering barriers to data access allows researchers to stretch in new directions and to learn and respond to the needs of other fields. FACE-IT builds on the Globus Galaxies platform, which has been developed over the past several years at the University of Chicago. FACE-IT also benefits from substantial software development undertaken by the communities who have developed most of the domain-specific tools required to populate FACE-IT with useful capabilities. The FACE-IT Galaxy manages earth system datatypes (such as NetCDF), new tool parameters (dates, map, opendap), aggregated datatypes (RAFT), service providers, and cool map visualizers.
Guest speaker at the 2nd national-level webinar titled "Big Data Driven Solutions to Combat Covid 19" on 4th July 2020, Ethiraj College for Women (Autonomous), Chennai.
Platform for Big Data Analytics and Visual Analytics: CSIRO use cases. Februa... - Tomasz Bednarz
Presented at the ACEMS workshop at QUT in February 2015.
Credits: whole project team (names listed in the first slide).
Approved by CSIRO to be shared externally.
DevOps for Data Engineers - Automate Your Data Science Pipeline with Ansible,... - Mihai Criveti
Automate your Data Science pipeline with Ansible, Python and Kubernetes - ODSC Talk
What is Data Science and the Data Science Landscape
Process and Flow
Understanding Data
The Data Science Toolkit
The Big Data Challenge
Cloud Computing Solutions
The rise of DevOps in Data Science
Automate your data pipeline with Ansible
Augmented Analytics and Automation in the Age of the Data Scientist - WhereScape
At DAMA Day NYC, WhereScape's CTO Neil Barton spoke about the automation of data infrastructure as a necessary component to effectively enable the citizen data scientist and augmented analytics.
Neil also discussed how AI/ML can be used to recommend data ingestion pipelines and models in either supervised or unsupervised paradigms.
Advanced Analytics and Machine Learning with Data Virtualization - Denodo
Watch: https://bit.ly/2DYsUhD
Advanced data science techniques, like machine learning, have proven extremely useful for deriving valuable insights from existing data. Platforms like Spark, and complex libraries for R, Python, and Scala, put advanced techniques at the fingertips of data scientists. However, these data scientists spend most of their time looking for the right data and massaging it into a usable format. Data virtualization offers a new alternative to address these issues in a more efficient and agile way.
Attend this webinar and learn:
- How data virtualization can accelerate data acquisition and massaging, providing the data scientist with a powerful tool to complement their practice
- How popular tools from the data science ecosystem: Spark, Python, Zeppelin, Jupyter, etc. integrate with Denodo
- How you can use the Denodo Platform with large data volumes in an efficient way
- How Prologis accelerated their use of Machine Learning with data virtualization
AWS Summit Berlin 2013 - Big Data Analytics - AWS Germany
Learn more about the tools, techniques and technologies for working productively with data at any scale. This session will introduce the family of data analytics tools on AWS which you can use to collect, compute and collaborate around data, from gigabytes to petabytes. We'll discuss Amazon Elastic MapReduce, Hadoop, structured and unstructured data, and the EC2 instance types which enable high performance analytics.
How Data Virtualization Puts Enterprise Machine Learning Programs into Produc... - Denodo
Watch full webinar here: https://bit.ly/3offv7G
Presented at AI Live APAC
Advanced data science techniques, like machine learning, have proven an extremely useful tool to derive valuable insights from existing data. Platforms like Spark, and complex libraries for R, Python and Scala put advanced techniques at the fingertips of the data scientists. However, these data scientists spend most of their time looking for the right data and massaging it into a usable format. Data virtualization offers a new alternative to address these issues in a more efficient and agile way.
Watch this on-demand session to learn how companies can use data virtualization to:
- Create a logical architecture to make all enterprise data available for advanced analytics exercise
- Accelerate data acquisition and massaging, providing the data scientist with a powerful tool to complement their practice
- Integrate popular tools from the data science ecosystem: Spark, Python, Zeppelin, Jupyter, etc.
Simplified Machine Learning, Text, and Graph Analytics with Pivotal Greenplum - VMware Tanzu
Data is at the center of digital transformation; using data to drive action is how transformation happens. But data is messy, and it’s everywhere. It’s in the cloud and on-premises. It’s in different types and formats. By the time all this data is moved, consolidated, and cleansed, it can take weeks to build a predictive model.
Even with data lakes, efficiently integrating multi-structured data from different data sources and streams is a major challenge. Enterprises struggle with a stew of data integration tools, application integration middleware, and various data quality and master data management software. How can we simplify this complexity to accelerate and de-risk analytic projects?
The data warehouse—once seen as only for traditional business intelligence applications — has learned new tricks. Join James Curtis from 451 Research and Pivotal’s Bob Glithero for an interactive discussion about the modern analytic data warehouse. In this webinar, we’ll share insights such as:
- Why after much experimentation with other architectures such as data lakes, the data warehouse has reemerged as the platform for integrated operational analytics
- How consolidating structured and unstructured data in one environment—including text, graph, and geospatial data—makes in-database, highly parallel, analytics practical
- How bringing open-source machine learning, graph, and statistical methods to data accelerates analytical projects
- How open-source contributions from a vibrant community of Postgres developers reduce adoption risk and accelerate innovation
We thank you in advance for joining us.
Presenter : Bob Glithero, PMM, Pivotal and James Curtis Senior Analyst, 451 Research
A changing market landscape and open source innovations are having a dramatic impact on the consumability and ease of use of data science tools. Join this session to learn about the impact these trends and changes will have on the future of data science. If you are a data scientist, or if your organization relies on cutting edge analytics, you won't want to miss this!
Real-world Cloud HPC at Scale, for Production Workloads (BDT212) | AWS re:Inv... - Amazon Web Services
"Running high-performance scientific and engineering applications is challenging no matter where you do it. Join IT executives from Hitachi Global Storage Technology, The Aerospace Corporation, Novartis, and Cycle Computing and learn how they have used the AWS cloud to deploy mission-critical HPC workloads.
Cycle Computing leads the session on how organizations of any scale can run HPC workloads on AWS. Hitachi Global Storage Technology discusses experiences using the cloud to create next-generation hard drives. The Aerospace Corporation provides perspectives on running MPI and other simulations, and offers insights into considerations like security while running rocket science on the cloud. Novartis Institutes for Biomedical Research talks about a scientific computing environment for performance benchmark workloads and large HPC clusters, including a 30,000-core environment for research in the fight against cancer, using the Cancer Genome Atlas (TCGA)."
Data Virtualization: Challenges, Uses & Benefits - Denodo
Watch full webinar here: https://bit.ly/3oah4ng
Gartner recently described data virtualization as a centerpiece of data integration architectures.
Discover:
- The benefits of a data virtualization platform
- The multiplication of use cases: Lakehouse, Data Science, Big Data, Data Services & IoT
- The creation of a unified view of your data assets without compromising on performance
- The construction of an agile data integration architecture: on-premise, in the cloud, or hybrid
Analytical Innovation: How to Build the Next Generation Data Platform - VMware Tanzu
There was a time when the Enterprise Data Warehouse (EDW) was the only way to provide a 360-degree analytical view of the business. In recent years many organizations have deployed disparate analytics alternatives to the EDW, including cloud data warehouses, machine learning frameworks, graph databases, geospatial tools, and other technologies. Often these new deployments have resulted in the creation of analytical silos that are too complex to integrate, seriously limiting global insights and innovation.
Join guest speaker, 451 Research’s Jim Curtis and Pivotal’s Jacque Istok for an interactive discussion about some of the overarching trends affecting the data warehousing market, as well as how to build a next generation data platform to accelerate business innovation. During this webinar you will learn:
- The significance of multi-cloud, infrastructure-agnostic analytics
- What is working and what isn’t, when it comes to analytics integration
- The importance of seamlessly integrating all your analytics in one platform
- How to innovate faster, taking advantage of open source and agile software
Speakers: James Curtis, Senior Analyst, Data Platforms & Analytics, 451 Research & Jacque Istok, Head of Data, Pivotal
Empowering the Data Analytics Ecosystem: A Laser Focus on Value
The data analytics ecosystem thrives when every component functions at its peak, unlocking the true potential of data. Here's a laser focus on key areas for an empowered ecosystem:
1. Democratize Access, Not Data:
- Granular Access Controls: Provide users with self-service tools tailored to their specific needs, preventing data overload and misuse.
- Data Catalogs: Implement robust data catalogs for easy discovery and understanding of available data sources.
2. Foster Collaboration with Clear Roles:
- Data Mesh Architecture: Break down data silos by creating a distributed data ownership model with clear ownership and responsibilities.
- Collaborative Workspaces: Utilize interactive platforms where data scientists, analysts, and domain experts can work seamlessly together.
3. Leverage Advanced Analytics Strategically:
- AI-powered Automation: Automate repetitive tasks like data cleaning and feature engineering, freeing up data talent for higher-level analysis.
- Right-Tool Selection: Strategically choose the most effective advanced analytics techniques (e.g., AI, ML) based on specific business problems.
4. Prioritize Data Quality with Automation:
- Automated Data Validation: Implement automated data quality checks to identify and rectify errors at the source, minimizing downstream issues.
- Data Lineage Tracking: Track the flow of data throughout the ecosystem, ensuring transparency and facilitating root cause analysis for errors.
5. Cultivate a Data-Driven Mindset:
- Metrics-Driven Performance Management: Align KPIs and performance metrics with data-driven insights to ensure actionable decision making.
- Data Storytelling Workshops: Equip stakeholders with the skills to translate complex data findings into compelling narratives that drive action.
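The automated data validation idea in point 4 can be sketched as a small rule set applied at ingestion time. The rules and column names below are hypothetical illustrations, not drawn from any specific validation tool.

```python
# Minimal sketch of automated data-quality checks at the source:
# flag rows that violate any rule so bad records can be rejected
# or repaired before they flow downstream.
import pandas as pd

# Hypothetical per-column validation rules (each returns a boolean Series).
RULES = {
    "age":     lambda s: s.between(0, 120),
    "email":   lambda s: s.str.contains("@", na=False),
    "revenue": lambda s: s >= 0,
}

def validate(df):
    """Return the rows of df that fail at least one rule."""
    bad = pd.Series(False, index=df.index)
    for col, rule in RULES.items():
        if col in df:
            bad |= ~rule(df[col])
    return df[bad]
```

Running such checks automatically on every ingestion, and logging which rule fired, is what makes root-cause analysis (point 4's lineage tracking) tractable.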
Benefits of a Precise Ecosystem:
- Sharpened Focus: Precise access and clear roles ensure everyone works with the most relevant data, maximizing efficiency.
- Actionable Insights: Strategic analytics and automated quality checks lead to more reliable and actionable data insights.
- Continuous Improvement: Data-driven performance management fosters a culture of learning and continuous improvement.
- Sustainable Growth: Empowered by data, organizations can make informed decisions to drive sustainable growth and innovation.
By focusing on these precise actions, organizations can create an empowered data analytics ecosystem that delivers real value by driving data-driven decisions and maximizing the return on their data investment.
Explore our comprehensive data analysis project presentation on predicting product ad campaign performance. Learn how data-driven insights can optimize your marketing strategies and enhance campaign effectiveness. Perfect for professionals and students looking to understand the power of data analysis in advertising. For more details, visit: https://bostoninstituteofanalytics.org/data-science-and-artificial-intelligence/
Opendatabay - Open Data Marketplace.pptx - Opendatabay
Opendatabay.com unlocks the power of data for everyone. Open Data Marketplace fosters a collaborative hub for data enthusiasts to explore, share, and contribute to a vast collection of datasets.
The first-ever open hub for data enthusiasts to collaborate and innovate. A platform to explore, share, and contribute to a vast collection of datasets. Through robust quality control and innovative technologies like blockchain verification, Opendatabay ensures the authenticity and reliability of datasets, empowering users to make data-driven decisions with confidence. Leverage cutting-edge AI technologies to enhance the data exploration, analysis, and discovery experience.
From intelligent search and recommendations to automated data productisation and quotation, Opendatabay's AI-driven features streamline the data workflow. Finding the data you need shouldn't be complex. Opendatabay simplifies the data acquisition process with an intuitive interface and robust search tools. Effortlessly explore, discover, and access the data you need, allowing you to focus on extracting valuable insights. Opendatabay breaks new ground with dedicated, AI-generated synthetic datasets.
Leverage these privacy-preserving datasets for training and testing AI models without compromising sensitive information. Opendatabay prioritizes transparency by providing detailed metadata, provenance information, and usage guidelines for each dataset, ensuring users have a comprehensive understanding of the data they're working with. By leveraging a powerful combination of distributed ledger technology and rigorous third-party audits Opendatabay ensures the authenticity and reliability of every dataset. Security is at the core of Opendatabay. Marketplace implements stringent security measures, including encryption, access controls, and regular vulnerability assessments, to safeguard your data and protect your privacy.
BUILDING BETTER PREDICTIVE MODELS WITH COGNITIVE ASSISTANCE IN A DATA SCIENCE ECOSYSTEM
1. BUILDING BETTER PREDICTIVE MODELS WITH
COGNITIVE ASSISTANCE IN A DATA SCIENCE
ECOSYSTEM
Dr. Alex Liu
Chief Data Scientist
Analytics Services @ IBM
aliu@us.ibm.com
Sep 12, 2018
NASA JPL SVCP
2. ALEX LIU INTRODUCTION
Chief Data Scientist – Analytics Services at IBM
A data science thought leader
Chief Data Scientist for several corporations before joining IBM
Taught advanced data analytics at the University of Southern California and the University of California, Irvine
Consulted for the United Nations, Ingram Micro …
M.S. and Ph.D. from Stanford University
3. DATA SCIENCE: TURNING DATA INTO VALUE WITH MODELS
Data Science produces insights and value via
a complicated process
a large set of tools
[Diagram: IBM cloud data services – BigInsights (HDFS), Cloudant (DBaaS), dashDB (Analytics), Swift (Object Storage), SQDB (Managed DB2)]
4. DATA SCIENCE PROJECTS RETURN VERY VALUABLE RESULTS,
BUT MANY FAIL
Netflix, for example, integrates data science into every part of its business; it estimates a billion dollars in incremental value from personalization and recommendation alone.
Knight Capital Group, by contrast, lost $440 million in 45 minutes after a mistake in updating a model.
Gartner estimated that about 60% of big data projects failed, in both 2016 and 2017.
A reproducibility crisis and the demand for fast insights add further pressure.
5. DATA SCIENCE – COMPLICATED
VERY COMPLICATED FLOWS JUST FOR THE MODEL-BUILDING STAGE
• More than 50 different algorithms: SVM, Neural Net, Decision Trees/Forests, Naïve Bayes,
Regression, SMO, k-nearest Neighbor, Clustering, Rules, …
• Combinatorially explosive number of parameter choices per algorithm: kernel type, pruning
strategy, number of trees in a forest, learning rate, …
• Wide variation in performance across different algorithm implementations (e.g., SPSS vs
Python vs WEKA vs SPARK …)
• User-Defined algorithms
• Substantial cost in user and compute time
• User spends time on trying new combinations and parameters
• Computational cost for training a single SVM can exceed 24h
• Selection commonly based on data scientist bias
• Each additional pipeline stage increases complexity dramatically!
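The combinatorial explosion described above can be made concrete with a small counting sketch. The algorithms and parameter grids below are toy placeholders (not the deck's actual search space), and the counting helper `count_configs` is a hypothetical illustration:

```python
# Sketch: counting (algorithm, hyperparameter) combinations to show
# how quickly the model-selection search space explodes.

# Toy search space: a few algorithms, each with a small parameter grid.
search_space = {
    "SVM": {"kernel": ["linear", "rbf", "poly"], "C": [0.1, 1, 10, 100]},
    "RandomForest": {"n_estimators": [50, 100, 500], "max_depth": [3, 10, None]},
    "kNN": {"n_neighbors": [3, 5, 11], "weights": ["uniform", "distance"]},
}

def count_configs(space):
    """Count every distinct (algorithm, parameter setting) combination."""
    total = 0
    for algo, grid in space.items():
        n = 1
        for values in grid.values():
            n *= len(values)  # grids multiply within an algorithm
        total += n            # algorithms add across the space
    return total

print(count_configs(search_space))  # 3*4 + 3*3 + 3*2 = 27 configurations
```

Even this toy space yields 27 configurations from a handful of choices; each added pipeline stage multiplies the count again, which is why exhaustive search quickly becomes infeasible.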
6. IMPORTANCE OF AUTOMATION & COMMUNITIES
AUTOMATION ~ compare data scientists with and without computer-based augmentation
Shows that computer-augmented data science can reduce time-to-result by an order of magnitude and improve the quality of results
COMMUNITY ~ self-learn and validate using open competitions and evaluations (e.g., Kaggle, OpenML) and IBM customer engagements
7. DS ASSISTED BY AI WITHIN A DS COMMUNITY
1) Bring automation into key areas of large-scale data analysis tasks
Overcome "analytic decision overload" for data scientists
Enable data scientists to:
view and interact with the decision-making process in an online fashion
obtain rapid insights from data to answer key questions
2) Integrated system of tools, working with DS communities
An integrated system for scientists to easily handle data, analytics, and application needs
Upload and prepare data from various sources
Cross-platform modeling and machine learning implementation
Cross-platform analytic deployments on Big Data platforms
IBM Research
Augmentation vs. Automation
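A minimal sketch of what "bringing automation into model selection" can look like in practice. This is illustrative only, using a toy one-dimensional dataset and two hand-rolled candidate models; it stands in for, and is not, the IBM system the deck describes:

```python
# Sketch: a tiny automated model-selection loop, the core idea behind
# computer-augmented data science (standard library only).
from collections import Counter
import random

random.seed(0)

# Toy 1-D dataset: class 0 clusters near 0.0, class 1 near 1.0.
data = [(random.gauss(0.0, 0.3), 0) for _ in range(50)] + \
       [(random.gauss(1.0, 0.3), 1) for _ in range(50)]
random.shuffle(data)
train, test = data[:70], data[70:]

def majority_classifier(train):
    """Baseline: always predict the most common training label."""
    label = Counter(y for _, y in train).most_common(1)[0][0]
    return lambda x: label

def nearest_neighbor(train):
    """1-NN: predict the label of the closest training point."""
    return lambda x: min(train, key=lambda p: abs(p[0] - x))[1]

def accuracy(model, test):
    return sum(model(x) == y for x, y in test) / len(test)

# Score every candidate automatically and pick the best,
# instead of relying on a data scientist's prior bias.
candidates = {"majority": majority_classifier, "1-NN": nearest_neighbor}
scores = {name: accuracy(build(train), test) for name, build in candidates.items()}
best = max(scores, key=scores.get)
print(best, scores[best])
```

Real systems extend this loop with hyperparameter search, cross-validation, and cost budgets, but the principle is the same: let the machine enumerate and score candidates so the analyst can interact with the decisions rather than make each one by hand.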
8. [Architecture diagram: Watson Studio / Watson Data Platform – one platform for the IBM Analytics team. Data science tools (RStudio, Jupyter Notebooks) and Watson Studio Local run on the Watson Data Platform in the IBM Cloud. Persistence and cloud services: Db2 Warehouse on Cloud, IBM Cloudant, IBM Cloud Object Storage, IBM Compose. Analytics: IBM Analytics Engine (Spark ML, Hadoop), Data Refinery, Cognos, Watson Analytics, Dashboards. Personas: data steward, data scientist, data engineer, developer. Data sources: the IBM Cloud, third-party clouds, and on-premises data, reached through 32 different connections and plugins.]
12. TAKING A DATA SCIENCE ECOSYSTEM APPROACH
A DATA SCIENCE ECOSYSTEM HAS THREE BASIC ELEMENTS
1) DATA PORTAL
2) DATA SCIENCE COMMUNITY
3) DATA SCIENCE PLATFORM
13. RMDS COMMUNITIES AT IBM GLENDALE
Pasadena/Glendale Meetup community
A local face-to-face community with more than 1,100 members
https://www.meetup.com/RMDS_LA/
The LinkedIn group (https://www.linkedin.com/groups/1895501) has 29K participants
Aims to create an environment for utilizing big data analytics to create smart cities and smart commerce
14. EX1: citizen data science ecosystem with open data
105,000+ collections, 349 citizen apps, 500,000 data resources, 175 agencies, 450 APIs
Source: City of LA Mayor's Tech Advisor presentation at an RMDS Meetup.
15. EXAMPLE – 1KM VISIBLE (GOES-R WILL BE EVEN BETTER)
http://www.ibm.com/weather
16. EX2: A data science ecosystem with weather data
[Diagram: weather data, transaction data, and IoT data feed a platform (~ IBM DSX) that connects all the data scientists from a DS community, producing analytical insights for transformation, applications, and solutions for optimizing operations.]
17. A MAJORITY OF RETAIL AND CP EXECUTIVES INDICATE WEATHER HAS A SIGNIFICANT IMPACT ON BUSINESS DECISION-MAKING
Weather either influences all human decisions or triggers automated actions in the following areas:
Worker allocation and staff scheduling: 51%
Work safely: 50%
Inventory pricing: 50%
Customer interactions: 45%
Marketing / messaging: 41%
Inventory placements: 40%
Routes and transportation: 39%
Supply chain / sourcing: 35%
Product development: 33%