In this lecture we analyse design challenges and approaches for envisioning systems that support decision-making in the era of big data and advanced interaction.
DCAF Transformation & KG Adoption 2022 - Alan Morrison
A keynote presentation on knowledge graph adoption trends and how to do digital transformation differently.
Delivered at the Enterprise Data Transformation & Knowledge Graph Adoption
A Semantic Arts DCAF Event
February 28, 2022
Data-centric design and the knowledge graph - Alan Morrison
The #knowledgegraph--smart data that can describe your business and its domains--is now eating software. We won't be able to scale AI or other emerging tech without knowledge graphs, because those techs all require a transformed data foundation, large-scale integration, and shared data infrastructure.
Key to knowledge graphs are #semantics, #graphdatabase technology and a Tinker Toy-style approach to adding the missing verbs (which provide connections and context) back into your data. A knowledge graph foundation provides a means of contextualizing business domains, your content and other data, for #AI at scale.
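The "missing verbs" idea can be made concrete with a small sketch. The following is illustrative only and is not taken from the talk; the entities and predicates (AcmeCorp, manufactures, and so on) are hypothetical, but the subject-predicate-object triple is the basic shape a knowledge graph uses to put connections and context back into data:

```python
# Illustrative sketch (not from the talk): the "missing verbs" become
# predicates in subject-predicate-object triples, the basic unit of a
# knowledge graph. Entity and predicate names here are hypothetical.

triples = [
    ("AcmeCorp", "manufactures", "Widget-A"),
    ("Widget-A", "is_supplied_to", "RetailCo"),
    ("RetailCo", "operates_in", "EMEA"),
]

def related(entity, triples):
    """Return every (predicate, object) pair that connects an entity onward."""
    return [(p, o) for s, p, o in triples if s == entity]

print(related("AcmeCorp", triples))  # [('manufactures', 'Widget-A')]
```

Because the verbs are data rather than application logic, new relationships can be added without changing any schema or code, which is the flexibility the talk attributes to the Tinker Toy-style approach.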
This is from a talk I gave at the Data Centric Design for SMART DATA & CONTENT Enthusiasts meetup on July 31, 2019 at PwC Chicago. Thanks to Mary Yurkovic and Matt Turner for a very fun event!
The boom in XaaS and the knowledge graph - Alan Morrison
The document discusses the growing importance of digital twins, knowledge graphs, and data-centric approaches to managing large, diverse datasets. It notes that current methods often struggle to integrate and contextualize data at scale. Effective digital twins and AI require integrated, disambiguated data flowing to where it's needed. Knowledge graphs are presented as a way to achieve this by providing a unified semantic model that treats relationships as first-class citizens. The document outlines the large and growing markets for knowledge graph technologies and discusses how a data-centric approach can help enterprises better leverage emerging technologies.
This document discusses the importance of data fluency skills in the 21st century. It defines key terms like data science, machine learning, data literacy, and statistical literacy. While these fields require extensive training, the document argues that domain expertise combined with basic data analysis skills can solve many problems. These basic skills include understanding data structures, using programming to interact with data, and exploratory data analysis through visualization. The data analysis process involves defining problems, collecting and preparing data, visualization and modeling, and communicating results. RStudio is presented as a tool that can support the entire data analysis process within a single integrated development environment.
The document discusses big data analytics. It begins by defining big data as large datasets that are difficult to capture, store, manage and analyze using traditional database management tools. It notes that big data is characterized by the three V's - volume, variety and velocity. The document then covers topics such as unstructured data, trends in data storage, and examples of big data in industries like digital marketing, finance and healthcare.
The document discusses opportunities for using big data in statistics. It describes how large amounts of digital data are being generated daily and how traditional tools cannot handle this volume of data. Significant knowledge is hidden in big data that can help address important issues. The document outlines how statistics play a key role in economic and political decisions and proposes using big data, such as telecom data, as a new source for statistics to enrich decision making. This would provide a low-cost, endless source of data. The document advocates designing systems to support various analysis techniques and tailoring approaches to specific domains using open standards.
This document summarizes a presentation about operationalizing linked data to transform industries using a multi-model approach. It discusses the importance of data and challenges with traditional data approaches. It promotes using linked data and semantic techniques to create flexible, contextual data layers that can be used across business units. Examples are provided of companies using these approaches for regulatory compliance, integrated digital delivery for auto repair, and open data sharing without data silos.
Neo4j GraphTalk Copenhagen - Next Generation Solutions using Neo4j - Neo4j
This document discusses how Neo4j can be used to build next generation solutions. It begins by discussing how Neo4j enables graph-based solutions that provide agility, intuitiveness, and high performance for connected data scenarios. It then provides examples of using Neo4j for fraud detection and recommendation engines. For fraud detection, it explains how Neo4j allows for connected analysis across channels to detect complex fraud patterns that traditional discrete analysis cannot. It also discusses how Neo4j fits into environments and provides an example fraud solution architecture. Finally, it summarizes the benefits Neo4j provides for building powerful recommendation engines.
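The connected-analysis idea behind graph-based fraud detection can be sketched without a graph database at all. The example below is a minimal illustration, not Neo4j's actual implementation: the account names and identifiers are invented, but the technique, linking accounts that share identifiers and traversing those links to surface a candidate fraud ring, is the pattern the summary describes:

```python
from collections import defaultdict

# Hypothetical data: accounts with identifiers (phone, SSN). Fraud rings
# often reveal themselves through shared identifiers across accounts.
accounts = {
    "acct1": {"phone": "555-0100", "ssn": "111"},
    "acct2": {"phone": "555-0100", "ssn": "222"},
    "acct3": {"phone": "555-0199", "ssn": "222"},
    "acct4": {"phone": "555-0300", "ssn": "333"},
}

# Build an undirected graph: two accounts are linked when they share
# any identifier value. Discrete per-account analysis misses these links.
links = defaultdict(set)
for a, ids_a in accounts.items():
    for b, ids_b in accounts.items():
        if a < b and set(ids_a.values()) & set(ids_b.values()):
            links[a].add(b)
            links[b].add(a)

def ring(start):
    """Traverse linked accounts to find the connected candidate fraud ring."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(links[node] - seen)
    return seen

print(sorted(ring("acct1")))  # ['acct1', 'acct2', 'acct3']
```

A native graph database makes the same traversal fast at scale because relationships are stored directly rather than recomputed with joins, which is the performance argument the document makes for Neo4j.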
Data Science Salon: Digital Transformation: The Data Science Catalyst - Formulatedby
This document discusses how data science can catalyze digital transformation. It begins by defining data science and digital transformation, noting that digital transformation provides the foundation for data science enablement. Several challenges of digital transformation are outlined, including increasing competition, changing consumer behavior, and legacy technical and cultural issues. The presentation argues that data science can help address these challenges by leveraging customer data to personalize engagement across channels. Specific data science techniques are described, such as propensity modeling, content personalization, social media interaction analysis, text analysis, and image/video analysis. Current limitations of these approaches are acknowledged, and the document concludes by emphasizing the relationship between data science and digital transformation.
This document discusses best practices for big data analytics projects. It begins by defining big data and explaining that while gaining insights from large and diverse data sets is desirable, operationalizing big data analytics can be complex. It emphasizes understanding an organization's unique needs and challenges before selecting technologies. The document also explores how in-memory processing can help speed up analysis by reducing data transfer times, but only if the insights are integrated into decision-making processes.
Big Data: From Hindsight to Insight to Foresight - Sunil Ranka
When it comes to analytics and reporting, there is a fine line between hindsight, insight, and foresight. With the evolution of big data technology, there is a need to derive value from larger datasets that were not available in the past. Even before adopting the new shiny technologies, it is necessary to understand what is categorized as reporting, business intelligence, or big data and analytics. Based on my experience, people struggle to distinguish between reporting, analytics, and business intelligence.
Why Everything You Know About Big Data Is a Lie - Sunil Ranka
As a big data technologist, you can bet that you have heard it all: every crazy claim, myth, and outright lie about what big data is and what it isn't that you can imagine, and probably a few that you can't. If your company has a big data initiative or is considering one, you should be aware of these false statements and the reasons why they are wrong.
This document provides an overview of blockchain and initial coin offerings (ICOs). It discusses common myths about blockchain, such as the ideas that blockchain is just for Bitcoin, proof of work is the only consensus method, and that all data needs to be stored on a blockchain. The document also covers blockchain use cases, validation of blockchains, and the benefits and challenges of ICOs as a method of fundraising. It aims to demystify blockchain and provide essential information about this emerging technology.
1. The document discusses various applications and uses of big data across different domains like government, healthcare, transportation, and more. It also covers big data techniques, platforms, analytics capabilities, and challenges.
2. Key topics covered include how cities can use mobile apps to get real-time road bump data, how insurance companies can price policies more granularly, and how companies can better hire and retain employees through recruiting analytics.
3. The summary also mentions concepts like data science, decision making, data types, use cases, capabilities, challenges, and roles like data scientists that are important in the big data field.
THIRUVANANTHAPURAM, JULY 19:
Marlabs, a Bangalore-based provider of IT services, is sponsoring a ‘Business Intelligence Technology’ conference at the Thiruvananthapuram Technopark on Friday.
The event will focus on emerging trends in Business Intelligence (BI) Technology, a Marlabs spokesman said.
It will feature eminent speakers from leading information technology companies including Marlabs, Infosys, UST Global, NeST and Kreara.
The conference will discuss the latest developments in emerging BI areas such as predictive analytics, Big Data, mobile BI, social BI and advanced visualisations. It will also highlight the growing job opportunities for newly graduated software professionals in Tier II and Tier III cities.
The document discusses how open data and big data can be used to create value through new business models and transformation. It provides examples of how Socrata helped organizations unlock value from their data through open data strategies like interactive data experiences, APIs, custom apps, and data visualization. The use of open data APIs and a cloud-based infrastructure are presented as best practices for enabling developers and businesses to access and reuse organizational data.
BlueBrain Nexus Technical Introduction - Bogdan Roman
BlueBrain Nexus is a data management platform that enables modeling of data from different domains according to FAIR principles. It uses semantic web technologies like JSON-LD and SHACL to describe, constrain, relate, and evolve data models over time. Nexus treats provenance as a first class citizen and provides semantic search, publishing, and integration capabilities for domain agnostic and interoperable data management.
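The JSON-LD mechanism mentioned above is worth a concrete look. The document below is a minimal, hypothetical example (the IRIs and property names are illustrative, not Nexus's actual schema): the `@context` maps short property names onto globally unique IRIs, which is how semantic platforms keep data from different domains unambiguous and interoperable, and the PROV vocabulary term shows how provenance can be expressed as ordinary data:

```python
import json

# A minimal JSON-LD document (hypothetical vocabulary, not Nexus's actual
# schema). "@context" maps local names to IRIs; "@id" and "@type" identify
# the resource; "wasDerivedFrom" records provenance as plain data.
doc = {
    "@context": {
        "name": "http://schema.org/name",
        "wasDerivedFrom": "http://www.w3.org/ns/prov#wasDerivedFrom",
    },
    "@id": "http://example.org/dataset/42",
    "@type": "http://www.w3.org/ns/prov#Entity",
    "name": "Reconstruction batch 42",
    "wasDerivedFrom": "http://example.org/dataset/41",
}
print(json.dumps(doc, indent=2))
```

Because the document is still plain JSON, existing tooling can consume it, while the context gives it the unambiguous semantics that SHACL shapes can then validate and constrain.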
Beyond the Classroom consists of events, workshops and presentations meant to introduce Computer Science students to learning opportunities in addition to their regular classroom experiences. Beyond the Classroom events are free and open to all NHCC CSci students.
This presentation is about Big Data, how it changes the traditional data landscape, how different companies are using it, and which skills are in demand.
A Connections-first Approach to Supply Chain Optimization - Neo4j
Neo4j is a graph database platform for connected data. The document introduces Neo4j and discusses how connected data and relationships between data are increasingly important for business value. It provides examples of how Neo4j is used by organizations for applications like fraud detection, personalization, and network analysis. The document also summarizes Neo4j's capabilities like real-time transaction processing, analytics, and visualization and highlights its native graph architecture and performance advantages over traditional databases. Finally, it briefly describes Neo4j's key architecture components and how it can be used for common data architecture patterns.
According to a survey, big data is delivering significant value to organizations that complete projects. The vast majority of users (92%) are satisfied with business outcomes and feel their implementation meets their needs. Larger companies see big data as more important and are more likely to benefit from initial implementations. While the talent shortage poses challenges, successful users leverage external resources. Users see big data as disruptive and potentially transformational, with 89% believing it will revolutionize business as the internet did.
Kick Off – Graphs: The Fuel Behind Innovation and Transformation in Every Field - Neo4j
The document introduces Neo4j, a graph database company. It discusses how graphs can help harness relationships in data to drive business value and decisions. Neo4j uses graphs to power applications in domains like customer experience, fraud prevention, operations optimization, and more. The document provides examples of how Neo4j customers like the US Army have used the Neo4j graph platform to solve complex problems and accelerate innovation.
This document discusses data science and data scientists. It defines data science as using scientific methods and processes to extract knowledge and insights from structured and unstructured data. Data scientists are analytical experts who use technical skills and curiosity to solve complex problems by straddling both business and IT. They have skills in mathematics, technology, and business strategy. As data has become more valuable, data scientist roles have evolved from statisticians and analysts to help organizations gain insights from large data sources. Managers should learn to identify data science talent to make their organizations more productive by adding data-driven insights.
Minne Analytics presentation 2018-12-03 final compressed - Bonnie Holub
A large transportation company needed help optimizing their transportation model to reduce costs. Teradata developed a transportation optimization model and user interface tool that takes forecasted volumes and determines the most optimal transportation modes and routes to deliver products to customers while considering capacities, constraints, and business rules. The tool selects the lowest cost solutions for each material/customer pair and allows users to conduct "what-if" analysis of scenarios to further reduce total costs.
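The core selection step in such a tool is easy to sketch. The example below is a simplified illustration under invented data (the materials, customers, modes, and costs are hypothetical, and it ignores capacities and business rules the real model would enforce): for each material/customer pair it picks the lowest-cost transportation mode, and a blocked-mode parameter supports the kind of "what-if" analysis the summary describes:

```python
# Hypothetical lane options: (material, customer) -> candidate (mode, cost) pairs.
options = {
    ("steel", "plantA"): [("rail", 1200), ("truck", 1500), ("barge", 900)],
    ("steel", "plantB"): [("truck", 800), ("rail", 950)],
}

def cheapest(options, blocked=()):
    """Pick the lowest-cost mode per material/customer pair.

    'blocked' excludes modes, supporting what-if runs such as
    "what does the plan cost if barge capacity is unavailable?".
    """
    plan = {}
    for pair, modes in options.items():
        feasible = [(m, c) for m, c in modes if m not in blocked]
        plan[pair] = min(feasible, key=lambda mc: mc[1])
    return plan

print(cheapest(options))                     # barge for plantA, truck for plantB
print(cheapest(options, blocked=("barge",)))  # what-if: rail replaces barge
```

The production tool layers constraints and business rules on top of this selection, but comparing the baseline plan against a what-if plan is exactly how such scenario analysis quantifies further cost reductions.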
Enabling data scientists within an enterprise requires a well-thought out approach from an organization, technology, and business results perspective. In this talk, Tim and Hussain will share common pitfalls to data science enablement in the enterprise and provide their recommendations to avoid them. Taking an example, actionable use case from the financial services industry, they will focus on how Anaconda plays a pivotal role in setting up big data infrastructure, integrating data science experimentation and production environments, and deploying insights to production. Along the way, they will highlight opportunities for leveraging open source and unleashing data science teams while meeting regulatory and compliance challenges.
This document provides an overview of big data and big data analytics. It defines big data as large, complex datasets that grow quickly in volume and variety. Big data analytics involves examining these large datasets to find patterns and useful information. The challenges of big data include increased storage needs and handling diverse data formats. Hadoop is a framework that allows distributed processing of big data across clusters of computers. Common big data analytics tools include MapReduce, Spark, HBase and Hive. The benefits of big data analytics include improved decision making, customer service and efficiency.
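The MapReduce model mentioned above can be illustrated in a few lines. This is a toy single-process sketch of the programming model, not Hadoop itself (which distributes the map and reduce phases across a cluster); the input lines are invented:

```python
from collections import Counter
from itertools import chain

# Toy MapReduce word count: the map phase emits (word, 1) pairs per line;
# the reduce phase sums the counts per key. Hadoop runs these two phases
# in parallel across a cluster; here everything runs in one process.
lines = ["big data big analytics", "data data velocity"]

def map_phase(line):
    return [(word, 1) for word in line.split()]

def reduce_phase(pairs):
    counts = Counter()
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

mapped = chain.from_iterable(map_phase(line) for line in lines)
print(reduce_phase(mapped))  # {'big': 2, 'data': 3, 'analytics': 1, 'velocity': 1}
```

Because each map call depends only on its own line and each reduce depends only on one key's pairs, the framework can shard both phases freely, which is what lets the same program scale from this toy input to cluster-sized datasets.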
Data Architecture Strategies Webinar: Emerging Trends in Data Architecture – ... - DATAVERSITY
A robust data architecture is at the core of what's driving today's innovative, data-driven organizations. From AI to machine learning to Big Data, a strong data architecture is needed in order to be successful, and core fundamentals such as data quality, metadata management, and efficient data storage are more critical than ever.
With the vast array of new technologies available to support these trends, how do you make sense of it all? Our panel of experts will offer their perspectives on how the latest trends in data architecture can support your organization’s data-driven goals.
This document provides an overview and agenda for building an analytics capability. It discusses key topics such as:
- The importance of big data and analytics for business decisions
- Building an analytics capability requires the right people, processes, and technology
- Companies can build capabilities internally, outsource work, or use a hybrid approach
- When outsourcing analytics work, firms need to consider issues like vendor skills, data protection, and intellectual property ownership
This document discusses data science career paths and the role of a data scientist. It defines data science as the scientific process of transforming data into insights to make better decisions. Data scientists are skilled at statistics, software engineering, machine learning, and communicating findings. The document outlines common data science career paths including roles in fraud detection analyzing social media analytics. It also lists important skills for data scientists such as data mining, machine learning, statistics, visualization, programming, and working with big data. Finally, it provides an example of tasks a data scientist might complete in a typical day.
Data and Analytics Career Paths, Presented at IEEE LYC'19.
About Speaker:
Ahmed Amr is a Data/Analytics Engineer at Rubikal, where he leads, develops, and creates daily data/analytics operations, which includes data ingestion , data streaming, data warehousing, and analytical dashboards. Ahmed is graduated from Computer Engineering Department, Alexandria University; and he is currently pursuing his MSc degree in Computer Science, AAST. Professionally, Ahmed worked with Egyptian/US startups such as (Badr, Incorta, WhoKnows) to develop their data/analytics projects. Academically, Ahmed worked as a Teaching Assistant in CS department, AAST. Ahmed helps software companies to develop robust data engineering infrastructure, and powerful analytical insights.
References:
1) https://www.datacamp.com/community/tutorials/data-science-industry-infographic
2) Analytics: The real-world use of big data, IBM, Executive Report
Minne analytics presentation 2018 12 03 final compressedBonnie Holub
Monday was another great conference by MinneAnalytics! #MinneFRAMA was a great success with over 1,100 attendees at Science Museum of Minnesota. Alison Rempel Brown is a great host! A Teradata colleague told me that her post about my presentation "blew up" with hits and she got over 2K views, and 60+ likes. I'm proud to be a part of this great #datascience organization brining #machinelearning and #artificialintelligence #analytics to our #bigdata clients. If you want my slides, here they are.
As 2017 begins, we are seeing big data and data science communities engage with new tools that specifically cater to data scientists and data engineers who aren’t necessarily experts in these techniques. Given rapid technological advances, the question for companies now is how to integrate new data science capabilities into their operations and strategies—and position themselves in a world where analytics can upend entire industries. Leading companies are using their data science capabilities not only to improve their core operations but also to launch entirely new business models.
Come diventare data scientist - Si ringrazie per le slide Paolo Pellegrini, Senior Consultant presso P4I (Partners4Innovation) e referente di tutte le progettualità relative alle tematiche Data Science e Big Data Analytics. Owner del primo gruppo in Italia dedicato dai Data Scientist.
Big Data & Business Analytics: Understanding the MarketspaceBala Iyer
This document provides an overview of big data and business analytics. It discusses the growth of data and importance of analytics to businesses. The key topics covered include defining big data and data science, analyzing the analytics ecosystem and key players, examining use cases of analytics at companies like Target and Whirlpool, and providing recommendations for building an analytics capability and working with analytics vendors. The presentation emphasizes how data-driven decisions can improve business performance but also notes challenges to overcome like skills shortages and changing organizational culture.
Data Lake Architecture – Modern Strategies & ApproachesDATAVERSITY
Data Lake or Data Swamp? By now, we’ve likely all heard the comparison. Data Lake architectures have the opportunity to provide the ability to integrate vast amounts of disparate data across the organization for strategic business analytic value. But without a proper architecture and metadata management strategy in place, a Data Lake can quickly devolve into a swamp of information that is difficult to understand. This webinar will offer practical strategies to architect and manage your Data Lake in a way that optimizes its success.
Bigger and Better: Employing a Holistic Strategy for Big Data toward a Strong...IT Network marcus evans
Bigger and Better: Employing a Holistic Strategy for Big Data toward a Strong Value-Adding Proposition
by Patrick Hadley, Australian Bureau of Statistics at the Australian CIO Summit 2014
E content.1 - P.SENEKA II-MSC COMPUTER SCIENCE,BON SECOURS COLLEGE FOR WOMENsenekapseneka
The document discusses six key challenges in big data integration: 1) uncertainty in data management due to a wide range of tools, 2) a talent gap in finding people with big data skills, 3) getting data into big data structures, 4) syncing data from different sources, 5) extracting useful information from large datasets, and 6) additional challenges like integration costs, data volume and velocity, and data quality. It also provides more details on each challenge and discusses the evolution of big data and analytics in education.
Palestra sobre conceitos Big data no evento IDETI em SP. Aborda o que é Big data, debate alguns beneficios e desafios. Debate também o papel do CDO- Chief Data Officer.
This presentation was given at the festival of marketing 2014. How grown up is your analytics? This slide deck will help you understand what you need to achieve optimum business benefit from your data analytics.
Big Data Lecture given at the University of Balamand by Fady Sayah Digi Web Founder.
Why Big Data Now?
Types of Databases
The 4 Vs of Big Data
Big Data Challenges
Big Data & Marketing
Big Data Impact on Social Media
Big Data & Hospitality
Big Data Scalable systems
BIg Data and Higher Education
Big Data Success Stories
You can view the presentation on this link.
Abstract:
Big Data concern large-volume, complex, growing data sets with multiple, autonomous sources. With the fast development of networking, data storage, and the data collection capacity, Big Data are now rapidly expanding in all science and engineering domains, including physical, biological and biomedical sciences. This paper presents a HACE theorem that characterizes the features of the Big Data revolution, and proposes a Big Data processing model, from the data mining perspective. This data-driven model involves demand-driven aggregation of information sources, mining and analysis, user interest modeling, and security and privacy considerations. We analyze the challenging issues in the data-driven model and also in the Big Data revolution.
DAS Slides: Self-Service Reporting and Data Prep – Benefits & RisksDATAVERSITY
As more organizations see the value of becoming data-driven, an increasing number of business stakeholders want to become more actively involved in the reporting and preparation of critical business data. Tools and technologies have evolved to support this desire, and the ability to manage and analyze vast amounts of disparate data has become more accessible than ever before. With this increased visibility and usage of data, the need for data quality, metadata context, lineage and audit, and other core fundamental best practices is greater than ever.
How can an effective architecture & governance model be created that supports both business agility, as well as long-term sustainability and risk reduction? Where do these responsibilities lie between business and IT stakeholders? Join our panel of experts as they discuss the latest best practices, architectures, and tools that support self-service reporting and data prep to maximize benefits while at the same time reducing risk.
How to get started in extracting business value from big data 1 of 2 oct 2013Jaime Nistal
This document discusses gaining competitive advantage through big data assets and investments. It begins by outlining some key questions boards ask about big data management. It then defines big data using the four V's - volume, variety, velocity and value. It discusses when and where big data provides value for companies. It outlines the types of internal and external data available, as well as the processes needed to extract value from big data. It provides examples of big data opportunities across various industries. Finally, it discusses three potential approaches to big data before concluding with contact information.
Data Architecture Best Practices for Today’s Rapidly Changing Data LandscapeDATAVERSITY
With the rise of the data-driven organization, the pace of innovation in data-centric technologies has been tremendous. New tools and techniques are emerging at an exponential rate, and it is difficult to keep track of the array of technological choices available to today’s data management professional.
At the same time, core fundamentals such as data quality and metadata management remain critical in order for organizations to obtain true business value from their data. This webinar will help demystify the options available: from data lake to data warehouse, to graph database, to NoSQL, and more, and how to integrate these new technologies with core architectural fundamentals that will help your organization benefit from the quick wins that are possible from these exciting technologies, while at the same time build a longer-term sustainable architecture that will support the inevitable change that will continue in the industry.
Algorithmic Systems Transparency and Accountability in Big Data & Cognitive EraNozha Boujemaa
1) Algorithmic systems are increasingly being used for decision support, but their lack of transparency and accountability can undermine trust.
2) Ensuring algorithmic transparency and developing methods for establishing accountability are important for building trust.
3) Responsible and ethical data management and analytics are needed to address potential biases in algorithmic systems and their real-world impacts.
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU und die Lizenzen nach dem CCB- und CCX-Modell sind für viele in der HCL-Community seit letztem Jahr ein heißes Thema. Als Notes- oder Domino-Kunde haben Sie vielleicht mit unerwartet hohen Benutzerzahlen und Lizenzgebühren zu kämpfen. Sie fragen sich vielleicht, wie diese neue Art der Lizenzierung funktioniert und welchen Nutzen sie Ihnen bringt. Vor allem wollen Sie sicherlich Ihr Budget einhalten und Kosten sparen, wo immer möglich. Das verstehen wir und wir möchten Ihnen dabei helfen!
Wir erklären Ihnen, wie Sie häufige Konfigurationsprobleme lösen können, die dazu führen können, dass mehr Benutzer gezählt werden als nötig, und wie Sie überflüssige oder ungenutzte Konten identifizieren und entfernen können, um Geld zu sparen. Es gibt auch einige Ansätze, die zu unnötigen Ausgaben führen können, z. B. wenn ein Personendokument anstelle eines Mail-Ins für geteilte Mailboxen verwendet wird. Wir zeigen Ihnen solche Fälle und deren Lösungen. Und natürlich erklären wir Ihnen das neue Lizenzmodell.
Nehmen Sie an diesem Webinar teil, bei dem HCL-Ambassador Marc Thomas und Gastredner Franz Walder Ihnen diese neue Welt näherbringen. Es vermittelt Ihnen die Tools und das Know-how, um den Überblick zu bewahren. Sie werden in der Lage sein, Ihre Kosten durch eine optimierte Domino-Konfiguration zu reduzieren und auch in Zukunft gering zu halten.
Diese Themen werden behandelt
- Reduzierung der Lizenzkosten durch Auffinden und Beheben von Fehlkonfigurationen und überflüssigen Konten
- Wie funktionieren CCB- und CCX-Lizenzen wirklich?
- Verstehen des DLAU-Tools und wie man es am besten nutzt
- Tipps für häufige Problembereiche, wie z. B. Team-Postfächer, Funktions-/Testbenutzer usw.
- Praxisbeispiele und Best Practices zum sofortigen Umsetzen
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Big data for situation awareness and decision making
1. Big Data for Situation Awareness and Decision Making
Design issues and approaches
Prof. Dr. Paloma Diaz
Computer Science Department / Institute of Financial Big Data
Universidad Carlos III de Madrid
@MPalomaD
pdp@inf.uc3m.es
2. Who am I?
• Full professor of CS and AI at Universidad Carlos III de Madrid
@MPalomaD
pdp@inf.uc3m.es
• UC3M: a young university in the south of Madrid, committed to academic and research excellence and internationalisation
• Director of the Interactive Systems Research group (DEILab)
www.dei.inf.uc3m.es
@MPalomaD | Big data for situational awareness and decision making, 2018
6. INTERACTION
Design interaction experiences that match the nature of the task, the users involved and the contexts of use
VISUALIZATION
Define approaches for conveying information in a way that improves decision making
COLLABORATION
Design and develop collaborative environments, trying to understand how the adoption and use of technology influence group behaviors and efficiency
LEARNING
Use different technologies and interaction techniques to develop useful, usable, and effective educational experiences
7. VISION
IA > AI: Intelligence Amplification over Artificial Intelligence (Klinker, 2018)
DATA VISUALIZATION AS A WAY OF AUGMENTING HUMAN CAPACITIES TO PROCESS INFORMATION AND MAKE DECISIONS, NOT TO SUPPORT AUTOMATIC DECISION MAKING
8. AGENDA
Decision Making and Situation Awareness
Design challenges in the era of Big Data and Advanced Interaction
Examples of visualization design for the smart city
Conclusions
9. AGENDA
Decision Making and Situation Awareness
Design challenges in the era of Big Data and Advanced Interaction
Examples of visualization design for the smart city
Conclusions
10. Is more data more information?
Is more information more knowledge?
Does more knowledge lead to better decisions?
11. From data to wisdom
The DIKW pyramid (Rowley, 2007): DATA → INFORMATION → KNOWLEDGE → WISDOM
Value grows as human input adds meaning and applicability at each step
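The pyramid's transitions can be made concrete with a toy sketch. This is purely illustrative: the sensor readings, names, and thresholds are invented, not from the lecture.

```python
# Illustrative only: invented sensor readings and thresholds,
# showing how human-supplied meaning moves data up the DIKW pyramid.

# DATA: raw, context-free measurements
readings = [41.2, 42.0, 47.5, 51.3, 55.8]  # e.g. transformer temperature, deg C

# INFORMATION: data plus meaning (labels, units, aggregation)
info = {"sensor": "transformer-7",
        "mean_temp_c": sum(readings) / len(readings),
        "max_temp_c": max(readings)}

# KNOWLEDGE: information interpreted against experience / domain rules
OVERHEAT_THRESHOLD_C = 50.0  # assumed operating limit
overheating = info["max_temp_c"] > OVERHEAT_THRESHOLD_C

# WISDOM: knowing which action is worth taking, given goals and context
decision = "schedule load shedding" if overheating else "no action needed"
print(decision)  # -> schedule load shedding
```

Each step above needs input the raw numbers cannot supply on their own, which is the pyramid's point: the value is added by human interpretation, not by volume.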
12. DECISION MAKING
An individual or collaborative process
Based on data, knowledge, experience, skills, intuition…
Tacit & explicit knowledge (Nonaka, 2008)
13. SITUATION AWARENESS
SA is being aware of what is happening around you and understanding what that information means to you, now and in the future (Endsley et al., 2016)
14. SITUATION AWARENESS
“...is the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future” (Endsley, 1988)
Level 1: perception of the elements of the environment
Level 2: understanding of the current situation
Level 3: projection of future status
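Endsley's three levels can be read as a pipeline. The sketch below is a hypothetical illustration for a traffic-monitoring scenario; the scenario, speeds, and threshold are invented for the example.

```python
# Hypothetical illustration of Endsley's three SA levels as a pipeline.
# The traffic scenario, speeds and threshold are invented.

def perceive(sensor_feed):
    """Level 1: perception of the elements of the environment."""
    return [r for r in sensor_feed if r["valid"]]

def comprehend(elements):
    """Level 2: understanding what the perceived elements mean now."""
    speeds = [e["speed_kmh"] for e in elements]
    return {"mean_speed": sum(speeds) / len(speeds), "vehicles": len(elements)}

def project(situation, congestion_speed=20):
    """Level 3: projection of the near-future status."""
    if situation["mean_speed"] < congestion_speed:
        return "congestion likely"
    return "free flow expected"

feed = [{"speed_kmh": 15, "valid": True},
        {"speed_kmh": 12, "valid": True},
        {"speed_kmh": 90, "valid": False}]  # faulty reading dropped at Level 1

print(project(comprehend(perceive(feed))))  # -> congestion likely
```

A system designed for SA can support any of the three stages: filtering noise at Level 1, aggregating meaningfully at Level 2, or surfacing projections at Level 3.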
15. SITUATION AWARENESS
Our SA is affected by our goals, preconceptions/biases, expectations, abilities, experience, training, stress, workload, the complexity of the task…
Our SA influences our decision-making process and the actions we undertake
Systems can be designed to improve SA (individual) and Activity Awareness (collective SA)
16. AGENDA
Decision Making and Situation Awareness
Design challenges in the era of Big Data and Advanced Interaction
Examples of visualization design for the smart city
Conclusions
17. BIG DATA
Huge volumes
Generated almost in real time
Structured and unstructured data
Exhaustive
Fine-grained (as detailed as possible)
Relational
Flexible and scalable
18. BIG DATA VS USEFUL INFORMATION
Data DELUGE vs. information DEARTH
19. BIG DATA DESIGN
Envisioning useful BIG DATA systems: climb the DIKW pyramid (DATA → INFORMATION → KNOWLEDGE → WISDOM) to deliver VALUE
FROM DATA-CENTRIC TO USER-CENTRED, GOAL-ORIENTED DESIGN
20. DESIGN RECOMMENDATIONS (I)
Design to meet the information tasks, goals and needs
• Focus first on the decision-making process
• What are the big questions?
• Who are the main actors?
• Where, when and how are decisions taken?
• Then analyse which data are needed and how to find/sort/filter/aggregate them
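The recommendation to start from the decision and only then derive the data operations can be sketched in miniature. The incident records and the "big question" are invented for illustration.

```python
# Illustrative sketch: start from a decision question, then derive the
# filter/aggregate/sort operations it needs. All records are invented.

incidents = [
    {"district": "north", "severity": 3},
    {"district": "north", "severity": 5},
    {"district": "south", "severity": 2},
]

# Big question: "Which district needs resources first?"
relevant = [i for i in incidents if i["severity"] >= 3]        # filter
by_district = {}
for i in relevant:                                             # aggregate
    by_district[i["district"]] = by_district.get(i["district"], 0) + i["severity"]
ranked = sorted(by_district.items(),                           # sort
                key=lambda kv: kv[1], reverse=True)
print(ranked[0][0])  # -> north
```

Note the order of design: the question ("which district first?") dictates which fields matter and which operations to expose, rather than the available data dictating the interface.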
21. DESIGN RECOMMENDATIONS (II)
To identify tasks, goals and needs, follow a user-centred design process
Three main principles:
• Use technology to keep the user in control of the system and aware of the situation
• Focus on tasks, goals and abilities, not (necessarily) on user comments
• Put the stress on human information processing and decision making
22. ADVANCED INTERACTION
New ways to interact with information (individually and collectively, co-located and distributed)
Different devices provide different interaction affordances
23. BIG DATA & ADVANCED INTERACTION DESIGN
FROM DATA-CENTRIC TO USER-CENTRED, GOAL-ORIENTED DESIGN (DATA → INFORMATION → KNOWLEDGE → WISDOM → VALUE)
24. BIG DATA & ADVANCED INTERACTION DESIGN
FROM DATA-CENTRIC TO USER-CENTRED, GOAL-ORIENTED DESIGN, ADDING CONTEXTS OF USE (DATA → INFORMATION → KNOWLEDGE → WISDOM → VALUE)
25. DESIGN RECOMMENDATIONS (III)
Understand how people interact with data to generate knowledge and wisdom
26. DESIGN RECOMMENDATIONS (IV)
Understand on which devices, and in which contexts, information processing is carried out
Understand the problem first: design research
Design for user acceptance
Take into account non-rational criteria: emotional and semantic design
27. Design as a research process
A design process involves:
• grounding: investigation to gain multiple perspectives on a problem;
• ideation: generation of many possible different solutions;
• iteration: a cyclical process of refining the concept with increasing fidelity; and
• reflection
28. Design as a research process
“Design research implies an inquiry focused on producing a contribution of knowledge… an intention to produce knowledge and not the work to more immediately inform the development of a commercial product” (Zimmerman et al., 2007)
“Design science research addresses important unsolved problems in unique or innovative ways or solved problems in a more efficient way” (Hevner et al., 2007)
29. Design as a research process
30. Design for user acceptance
Acceptance is much more than usability or efficacy
Complex sociotechnological systems
Difference between data science and data engineering
Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989), “User acceptance of computer technology: A comparison of two theoretical models”, Management Science 35: 982–1003
31. Design approaches
• Design to meet the information tasks, goals and needs → Design Research approach
• Design iteratively, taking into account that designing is providing an efficient solution to a problem given a number of constraints and resources → Engineering approach
• Design to meet the user capabilities (cognitive and physiological) as well as the interaction affordances of the context of use → HCI approach
32. Design techniques
• Qualitative methods to analyse the problem (interviews, focus groups, Delphi…)
• Lean and iterative design (from wireframes to prototypes)
• Participatory design and co-design
• Generative design techniques to co-envision solutions
• Qualitative methods to assess the results (evaluation through UTAUT, NASA-TLX…)
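As an example of one of these assessment instruments: the raw (unweighted) NASA-TLX score is simply the mean of six 0–100 subscale ratings. The ratings below are invented for illustration.

```python
# Raw NASA-TLX ("RTLX"): unweighted mean of the six subscale ratings (0-100).
# The ratings below are invented for illustration.

ratings = {
    "mental_demand": 70,
    "physical_demand": 20,
    "temporal_demand": 55,
    "performance": 40,   # rated from "perfect" (0) to "failure" (100)
    "effort": 60,
    "frustration": 35,
}

rtlx = sum(ratings.values()) / len(ratings)
print(round(rtlx, 1))  # -> 46.7
```

The full NASA-TLX procedure additionally weights each subscale by pairwise-comparison importance judgments; the unweighted variant above is the commonly used shortcut.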
33. AGENDA
Decision Making and Situation Awareness
Design challenges in the era of Big Data and Advanced Interaction
Examples of visualization design for the smart city
Conclusions
34. ENERGOS
DATA VISUALIZATION FOR THE ELECTRICAL SMART GRID
35. Design Challenges:
• Massive data (streaming)
• Multiple sources of information
• Envisioning the future
Current situation:
• Use of tacit knowledge
• High learning curve
36. How can we envision a solution for a future information-processing situation?
37. Analysing the problem applying a user-centred approach
Who do you ask when the problem doesn’t even exist yet?
User-centred does not mean accepting the user comments and suggestions; it means focusing on the user needs and abilities and on the information-processing tasks (Endsley et al., 2003)
38. Analysing the problem applying a user-centred approach
• Interviews and ethnographic studies to understand the current situation
• GTA, wireframes and mockups to envision solutions and test Human Factors (skills, workload, communication, error recovery, alarms, training)
• Literature review to ground decisions
39. New operation roles and design paradigms
• New roles and interaction possibilities
• Goal-based design
• Visual scalability and usability
• Situational and activity awareness
40. Interaction environment
• Curved display used to support seamless exploration and avoid losing context
Interface
• Coordinated goal-oriented views
• Avoid overlapping and loss of context
• Support communication and coordination among users
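Coordinated views typically rest on some publish/subscribe mechanism: a selection in one view is broadcast so every other view stays in context. A minimal sketch follows; the class, view, and element names are invented, not from the ENERGOS system.

```python
# Minimal observer sketch of coordinated views: selecting an element in one
# view updates every registered view. All names are invented for illustration.

class Coordinator:
    def __init__(self):
        self.views = []

    def register(self, view):
        self.views.append(view)

    def select(self, element):
        for view in self.views:      # broadcast so no view loses context
            view.update(element)

class View:
    def __init__(self, name):
        self.name = name
        self.current = None

    def update(self, element):
        self.current = element

coord = Coordinator()
map_view, chart_view = View("map"), View("chart")
coord.register(map_view)
coord.register(chart_view)

coord.select("substation-12")        # selection made in any view...
print(map_view.current, chart_view.current)  # ...propagates to all views
```

Routing every selection through a single coordinator, rather than wiring views to each other pairwise, is what keeps the views loosely coupled and lets new goal-oriented views be added without touching the existing ones.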
43. @MPalomaDBig data for situational awareness and decision making, 2018
emerCien
Current situation
• Citizens play a key role in major disasters, reporting information via social networks (SSNN) and grassroots platforms
• They are the first first responders
• Scalability, lack of resources and trust issues hinder the incorporation of citizen information and action
44. Current situation
• Participation goes beyond communication
• information/knowledge sharing
• information integration and sense making
• decision making and coordinated action
• Leverage citizens’ social capital
• Machizukuri movement in Japan
emerCien
45.
Can we move from just communication with citizens to service co-production?
46.
Future situation
emerCien
47.
Design challenges
• Massive volume of heterogeneous data (including social network data)
• Scalability, trust and empowerment issues
• from formal to substantive empowerment of citizens
and organisations
• Organizational and task constraints
• Multi-device environment
emerCien
48.
Extensive use of focus groups and studies to shape the problem
• 1st study in British Columbia and Washington State
• 2nd study in Spain
• EFG with operation center managers
Design models to analyse the solutions
• Theory of crowd capital
• Ecologies of participants
emerCien
50.
Ecologies of devices
• Applications to support different goals performed by
different participants in different contexts
• Coordination among applications and views
Semantic and adaptable visualizations
emerCien
52.
Which visualization works best to make sense of the situation and take informed action?
53.
PACE
Design challenges
• Identify affordable visualisation tools to support decision
makers in analysing citizen-generated information
• Identify the right visualisation to answer relevant
questions
• Explore the interaction affordances of different
interaction paradigms
54.
PACE
Semantic visualisation design approaches
• EFG with 20 experts to identify relevant questions in the domain of emergency management
• Design of a tool supporting different visualisations
• Evaluation with non-expert users focusing on utility and acceptance (UTAUT)
55.
PACE
Semantic Visualisation tool
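The idea behind a semantic visualisation tool is that the choice of chart follows the semantics of the question being asked, rather than the other way around. The PACE tool’s actual mapping is not shown in this deck; the question types and chart names below are hypothetical, purely to illustrate the approach.

```python
# Hypothetical mapping from question semantics to visualisation type.
# The categories and chart names are illustrative, not PACE's actual rules.
QUESTION_TO_VIS = {
    "where":    "map",        # spatial questions -> geographic views
    "when":     "timeline",   # temporal questions -> time series
    "how_many": "bar_chart",  # aggregation questions -> comparisons
    "who":      "network",    # actor questions -> relationship graphs
}


def pick_visualisation(question_type):
    """Return a suitable visualisation for a question type, if known."""
    # Fall back to a plain table when no semantic match exists.
    return QUESTION_TO_VIS.get(question_type, "table")
```

Grounding the mapping in questions that domain experts actually ask (here, the EFG with 20 emergency-management experts) is what makes the visualisation "semantic" rather than a generic chart picker.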
56.
PACE
Immersive visualisation design approaches
• Exploratory design comparing different ways of interacting with immersive data
57.
Decision Making and Situation Awareness
Design challenges in the era of Big Data and
Advanced Interaction
Examples of visualization design for the smart city
Conclusions
AGENDA
58.
CONCLUSIONS
Big data is much more than data analytics and DATA itself
The real breakthroughs will come from AUGMENTING the human capability to make decisions using data, not from REPLACING humans
This is a wicked problem; exploratory, iterative, generative and participatory approaches are required
60.
We now live in a world where information is
potentially unlimited. Information is cheap,
but meaning is expensive.
Where is the meaning? Only human beings
can tell you where it is. We’re extracting
meaning from our minds and our own lives
George Dyson
MESSAGE TO BRING HOME
61. Useful references
• Endsley, M. R. (1995). Toward a theory of situation awareness in dynamic systems. Human
factors, 37(1), 32-64.
• Endsley, M. R. , Bolté, B. and Jones, D.G. (2016). Designing for situation awareness: An
approach to user-centered design. CRC press.
• Nonaka, I. (2008). The knowledge-creating company. Harvard Business Review Press.
• Rowley, J. (2007). The wisdom hierarchy: representations of the DIKW hierarchy. Journal of
information science, 33(2), 163-180.
• Hevner, A., & Chatterjee, S. (2010). Design science research in information systems. In Design research in information systems (pp. 9-22). Springer US.
• Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information
technology: Toward a unified view. MIS quarterly, 425-478.
• Rubio, S., Díaz, E., Martín, J., & Puente, J. M. (2004). Evaluation of subjective mental workload: A
comparison of SWAT, NASA-TLX, and workload profile methods. Applied Psychology, 53(1),
61-86.
62. Related publications
• Onorati, T., Díaz, P., & Carrion, B. (2018). From social networks to emergency operation centers:
A semantic visualization approach. Future Generation Computer Systems.
• Romero-Gómez, R., and Díaz, P. "Towards a Design Pattern Language to Assist the Design of
Alarm Visualizations for Operating Control Systems." Digitally Supported Innovation. Springer.
249-264.
• Romero, R., Díez, D., Wittenburg, K., & Díaz, P. (2012, May). Envisioning grid vulnerabilities:
multi-dimensional visualization for electrical grid planning. In Proceedings of the International
Working Conference on Advanced Visual Interfaces (pp. 701-704). ACM.
• Herranz, S., Romero-Gómez, R., Díaz, P., & Onorati, T. (2014). Multi-view visualizations for
emergency communities of volunteers. Journal of Visual Languages & Computing, 25(6),
981-994.
• Díaz, P., Aedo, I., & Herranz, S. (2014, October). Citizen participation and social technologies:
exploring the perspective of emergency organizations. In International Conference on
Information Systems for Crisis Response and Management in Mediterranean Countries (pp.
85-97). Springer.
• Gómez, R., Díez, D., Díaz P, Aedo, I. Situation Awareness-Oriented Alarm Visualizations: A next
Step in HSC Environments. GRAPP/IVAPP 2013: 483-488
• Santos, A., Zarraonandia, T., Díaz, P., & Aedo, I. (2018, May). A virtual reality map interface for
geographical information systems. In Proceedings of the 2018 International Conference on
Advanced Visual Interfaces (p. 83). ACM
63. Prof. Dr. Paloma Diaz
DEI LAB - Institute of Financial Big Data
Universidad Carlos III de Madrid
pdp@inf.uc3m.es