This document summarizes a presentation about operationalizing linked data to transform industries using a multi-model approach. It discusses the importance of data and challenges with traditional data approaches. It promotes using linked data and semantic techniques to create flexible, contextual data layers that can be used across business units. Examples are provided of companies using these approaches for regulatory compliance, integrated digital delivery for auto repair, and open data sharing without data silos.
THE GOOD, THE BAD, THE DATA - Artificial Intelligence and Robotic Process Aut... (Ken O'Connor)
Ronan Fitzpatrick, Director of Digital at PwC, shares insights from PwC research on AI and Robotic Process Automation. Ronan explains that insight into, and trust in, your data are pivotal to the successful use of Artificial Intelligence and Robotics.
Data-centric design and the knowledge graph (Alan Morrison)
The #knowledgegraph--smart data that can describe your business and its domains--is now eating software. We won't be able to scale AI or other emerging tech without knowledge graphs, because those techs all require a transformed data foundation, large-scale integration, and shared data infrastructure.
Key to knowledge graphs are #semantics, #graphdatabase technology and a Tinker Toy-style approach to adding the missing verbs (which provide connections and context) back into your data. A knowledge graph foundation provides a means of contextualizing business domains, your content and other data, for #AI at scale.
This is from a talk I gave at the Data Centric Design for SMART DATA & CONTENT Enthusiasts meetup on July 31, 2019 at PwC Chicago. Thanks to Mary Yurkovic and Matt Turner for a very fun event!
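The "missing verbs" idea above, treating relationships as first-class data, can be sketched with a minimal triple store in Python. All entity and predicate names here are hypothetical, invented purely for illustration:

```python
# Minimal sketch of "adding the verbs back": facts stored as
# subject-predicate-object triples, so relationships are data too.
triples = [
    ("acme:Order42", "placedBy", "acme:CustomerA"),
    ("acme:CustomerA", "locatedIn", "geo:Chicago"),
    ("acme:Order42", "contains", "acme:ProductX"),
]

def objects(subject, predicate):
    """Return every object linked to `subject` via `predicate`."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# Follow the verbs to answer a contextual question:
# which city is Order42's customer located in?
customer = objects("acme:Order42", "placedBy")[0]
print(objects(customer, "locatedIn"))  # ['geo:Chicago']
```

In a real knowledge graph these triples would live in a graph database with shared ontologies rather than a Python list, but the traversal pattern, hopping from node to node along named relationships, is the same.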
DCAF Transformation & KG Adoption 2022 (Alan Morrison)
A keynote presentation on knowledge graph adoption trends and how to do digital transformation differently.
Delivered at the Enterprise Data Transformation & Knowledge Graph Adoption forum, a Semantic Arts DCAF event, February 28, 2022.
Scaling the mirrorworld with knowledge graphs (Alan Morrison)
After registration at https://www.brighttalk.com/webcast/9273/364148, you can view the full recording, which begins with Scott Abel's intro for a few minutes, then my talk for 20 minutes, and then Sebastian Gabler's. First presented on October 23 at an SWC webinar.
Conclusions:
(1) The mirrorworld (a world of digital twins, which will be 25 years in the making, according to Kevin Kelly) will require semantic knowledge graphs for interaction and interoperability.
(2) This fact implies massive future demand for knowledge graph technology and other new data infrastructure innovations, comparable to the scale of oil & gas industry infrastructure development over 150 years.
(3) Conceivably, knowledge graphs could address a projected $205 billion market by 2021 spanning graph databases, information management, digital twins, conversational AI, and virtual assistants, as well as knowledge bases and accelerated training for deep learning. The problem is that awareness of the technology is low, and the semantics community that understands it is still quite small.
(4) Over the next decades, knowledge graphs promise both scalability and substantial efficiencies in enterprises. But lack of awareness of their potential and of how to harness them will continue to be a stumbling block to adoption.
Data centric business and knowledge graph trends (Alan Morrison)
The deck for my kickoff keynote at the Data-Centric Architecture Forum, February 3, 2020. Includes related data, content, and architecture definitions and fundamental explanations, knowledge graph trends, market outlook, transformation case studies and benefits of large-scale, cross-boundary integration/interoperation.
Data-Centric Business Transformation Using Knowledge Graphs (Alan Morrison)
From a talk at the Data Architecture Summit in Chicago in 2018--reviews digital transformation and what deep transformation really implies at the data layer. Cross-enterprise knowledge graphs are becoming feasible and can be a key enabler of deep transformation.
At Neo4j we believe that ‘Graphs Are Everywhere’. In this session, we’ll look specifically at graphs within the Financial Services industry. We’ll review the types of data typically available within a bank, illustrate how graphs can be formed from that data, and discuss the use cases those graphs can enable and support.
The use cases presented will include Anti-Money Laundering and Fraud Detection and Prevention (including integration with AI and Machine Learning technologies), Regulatory Compliance (such as BCBS 239 and GDPR), Customer 360 View, Master Data Management, and Identity and Access Management.
Many players in the Financial Services industry already rely on Neo4j's graph database: Lending Club, the world's largest online credit marketplace, for Network and IT; the German insurer die Bayerische for graph-based search; Cerved for Master Data Management; Wobi for price comparison and real-time recommendations; and UBS for Identity and Access Management.
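As a toy illustration of the fraud-detection use case above, a transaction network can be modeled as a directed graph and searched for money-flow cycles, a common fraud-ring signal. This sketch uses networkx with made-up account names; a production system would run such queries in the graph database itself:

```python
import networkx as nx

# Hypothetical transfers between accounts: an edge A -> B means A paid B.
G = nx.DiGraph()
G.add_edges_from([
    ("acct1", "acct2"), ("acct2", "acct3"), ("acct3", "acct1"),  # a ring
    ("acct4", "acct5"),                                          # benign
])

# Cycles in the payment graph are candidate fraud rings for review.
rings = [cycle for cycle in nx.simple_cycles(G) if len(cycle) >= 3]
print(rings)  # one ring found (node order may rotate)
```

The same pattern-matching idea, expressed in a query language like Cypher, is what makes graphs attractive for AML: the relationship structure is queried directly instead of being reconstructed through table joins.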
Abstract of the presentation by Fabio Rizzotto, IT Research & Consulting Director at IDC Italia, delivered at the IDC Big Data Conference II in Bologna on November 19, 2013.
Relationships Matter: Using Connected Data for Better Machine Learning (Neo4j)
Relationships are highly predictive of behavior, yet most data science models overlook this information because it's difficult to extract network structure for use in machine learning (ML).
With graphs, relationships are embedded in the data itself, making it practical to add these predictive capabilities to your existing practices.
That’s why we’re presenting and demoing the use of graph-native ML to make breakthrough predictions. This will cover:
- Different approaches to graph feature engineering, from queries and algorithms to embeddings
- How ML techniques leverage everything from classical network science to deep learning and graph convolutional neural networks
- How to generate representations of your graph using graph embeddings, create ML models for link prediction or node classification, and apply these models to add missing information to an existing graph/incoming data
- Why no-code visualization and prototyping is important
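As a small taste of the graph feature engineering described above, here is a classical link-prediction heuristic using networkx's Jaccard coefficient, which scores candidate edges by neighborhood overlap. The node names are invented for the example:

```python
import networkx as nx

# Toy social graph; we score missing edges by how much the
# endpoints' neighborhoods overlap (Jaccard coefficient).
G = nx.Graph([("a", "b"), ("a", "c"), ("b", "c"), ("c", "d"), ("b", "d")])

# For each non-edge (u, v): |N(u) & N(v)| / |N(u) | N(v)|.
scores = sorted(nx.jaccard_coefficient(G), key=lambda t: -t[2])
for u, v, score in scores:
    print(f"{u}-{v}: {score:.2f}")  # high score => likely future link
```

This is the "queries and algorithms" end of the spectrum; graph embeddings take the same idea further by learning vector representations of nodes that can feed standard ML models for link prediction or node classification.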
Data Virtualization: Fulfilling The Digital Transformation Requirement In Ban... (Denodo)
Watch full webinar here: https://bit.ly/3szm3PV
In the digital transformation era, banks need a single view of all their data and a way to establish security controls across the entire infrastructure. This can be achieved with Data Virtualization.
Banking institutions need to update their legacy systems and implement strategies and services that will transform them into digital financial organizations.
They need agile access to information that can be leveraged to make timely business decisions while still fulfilling regulatory requirements.
This webinar presents:
- How data virtualization can help update and modernize data architectures,
- Success stories of financial companies that already use this technology to differentiate themselves from the competition, optimize processes, and create new business opportunities through more agile data management.
Big data course | big data training | big data classes (NaviWalker)
In a digitized world, data is an essential resource. Businesses across industries use it to derive insights that drive their growth, which makes learning Big Data increasingly urgent; doing so helps you stay productive and solve real-world problems.
Big Data helps drive important business decisions, and successful Big Data processing in large industrial sectors has taught important lessons about core Big Data concepts.
Big Data training, with its range of Big Data Analytics courses, will help you master data analysis. There is ample scope today to become a Big Data scientist or to take on other Big Data roles.
By definition, “big data” involves large volumes of diverse data sources.
Considering all the data that your activities generate, and that 99% of it is irrelevant “noise,” business users and stakeholders struggle to understand your company’s status.
See how a business perspective on your big, small or just complex data will generate business value.
How to Solve 4 Common Challenges of Legacy Information Management (Nuxeo)
After 20 years of Enterprise Content Management (ECM), businesses still face many of the same challenges with finding and managing information. Many modern organizations find themselves at the same fork in the road: either continue down the path they’re on, applying band-aid after band-aid to outdated ECM systems, or choose a new path with a modern Content Services Platform.
Join Chris McLaughlin, CMO and CPO of Nuxeo, as he examines four common business challenges that these legacy ECM systems pose, and how they can be addressed with a more modern approach. He will share compelling stories from customers that have chosen a different path, and best practices for Information Management professionals to help them along their way.
You'll come away from the webinar understanding:
- Why ECM still poses business challenges
- How a platform-based approach can solve modern content challenges
- Strategies to avoid the risks of modernization by future-proofing your organizational infrastructure
‘Edge’ Technologies: a new language of innovation (DXC Eclipse)
Microsoft Dynamics 365: Continue Your Transformation Journey.
We will demystify the new language of ‘Edge’ Technologies: Common Data Service, Power Apps, Flow, Cortana Intelligence Suite, and the Bot Framework.
Presented by Henrik Mozart - Senior Technology Specialist, DXC Eclipse
Madison Park Group is a strategic M&A and capital raising advisor to the global software economy. In November 2020, MPG formed a strategic partnership with Ascentage Group (“Ascentage”), a business development and M&A advisory firm addressing the strategic and tactical business needs of growth-stage companies developing construction technology and software for the built environment. Ascentage’s domain expertise in design, construction, infrastructure, and asset management technologies further enhances MPG’s rapidly expanding industrial technology advisory practice.
The firm’s principals have sat on both sides of the table, advising disruptors, consolidators, and incumbents as they navigate strategic initiatives. Industry leaders trust Madison Park Group’s experience in the marketplace.
Madison Park Group actively tracks the broader Engineering, Manufacturing, and Supply Chain Software market and has dedicated significant attention to the Industrial IOT technology landscape. We are particularly interested in the rapid development of several trends:
• Advancements in high-speed data ingestion and AI enabling highly accurate assessments of machine performance and reducing unnecessary downtimes
• Low-code/No-code platforms and ready-to-use APIs simplifying software development processes and helping drive a wider range of IIoT use cases and implementations
• Integration of sensors into wearable devices and industrial assets in the manufacturing, healthcare and construction sectors, amongst others, creating real-time intelligence and novel asset tracking insights for operational efficiency and improved outcomes
Operationalize Your Data and Lead Your Business Transformation (Matt Turner)
Data is a critical asset for every business, so why is it so hard to get value from it? Taking a data-centric approach and rethinking the enterprise stack hold the answers and are the foundations for digital transformation.
How to Transform IT Operations with Machine Learning - Apply Context (DevOps.com)
IT operations is complex. And it generates a ton of data. So much data, that it can quickly overwhelm your IT Operations team’s ability to process it. Is more data really better? Or is it noise that inhibits action? While Machine Learning promises to make sense of it all, it requires complete, accurate, and unfragmented data. Otherwise, the old adage “Garbage in, Garbage out” prevails.
Startup pitch presented by co-founder and CEO Jaco Els. Cubitic offers a predictive analytics platform that allows developers to build custom solutions for analytics and visualisation on top of a machine learning engine.
The Briefing Room with Dr. Robin Bloor and RedPoint Global
Live Webcast Jan. 13, 2015
Watch the archive: https://bloorgroup.webex.com/bloorgroup/lsr.php?RCID=c847c54220dfb80841f3e0c63664fd08
Context is king in the realm of Big Data. With enough perspective on a customer or prospect, organizations can fine-tune their offerings in game-changing ways. Today's cutting-edge companies are viewing their customers within the context of a decade or more of interactions, and across multiple channels. How so? Real-time integration with social media and other customer channels can now result in actionable insights with serious potential.
Register for this episode of The Briefing Room to hear veteran Analyst Dr. Robin Bloor, as he describes the changing landscape of data flow, and how that impacts enterprise responsiveness. He'll be briefed by George Corugedo of RedPoint Global, who will explain how companies are leveraging Hadoop's YARN architecture to deliver a whole new array of highly responsive, data-driven enterprise applications. He'll demonstrate how RedPoint's platform running inside Hadoop can enable a wide range of both real-time and strategic data management functionality, all of which can be applied to any number of critical business processes.
Visit InsideAnalysis.com for more information.
Presentations from the IoT track at the Connections Summit, co-hosted by AIM, NFC Forum, and RAIN on March 7th, 2018. The Connections Summit was a one-day event featuring a series of speaker-led sessions that focused on how NFC, RFID, and AIDC make the world more connected. The full agenda included keynote presentations, panels, and IoT, Retail & Smart Products, Security & Blockchain, Healthcare, and Market Opportunities tracks.
For additional details see: https://nfc-forum.org/events/connections-summit/
Big Data LDN 2018: DATA MANAGEMENT AUTOMATION AND THE INFORMATION SUPPLY CHAI... (Matt Stubbs)
Date: 14th November 2018
Location: Governance and MDM Theatre
Time: 10:30 - 11:00
Speaker: Mike Ferguson
Organisation: IBS
About: For most organisations today, data complexity has increased rapidly. In operations, we now have cloud and on-premises OLTP systems, with customers, partners and suppliers accessing these applications via APIs and mobile apps. In analytics, we now have data warehouses, data marts, big data Hadoop systems, NoSQL databases, streaming data platforms, cloud storage, cloud data warehouses, and IoT data being created at the edge. The number of data sources is also exploding as companies ingest more and more external data, such as weather and open government data. Silos have appeared everywhere as business users buy self-service data preparation tools without considering how those tools integrate with what IT is using to integrate data. Yet new regulations demand that we do a better job of governing data, and business executives demand more agility to remain competitive in a digital economy. So how can companies remain agile, reduce cost and reduce time-to-value while data complexity keeps rising?
In this session, Mike will discuss how companies can create an information supply chain to manufacture business-ready data and analytics to reduce time to value and improve agility while also getting data under control.
UXDX Berlin - Test & Deploy, by Quentin Berder, President, WiredCraft (UXDXConf)
Quentin is the CEO of WiredCraft, with a background as a developer and project manager. He is responsible for managing the day-to-day operations of the Shanghai office.
At UXDX Berlin, Quentin took the stage to discuss testing and deployment. Continuous delivery relies on tools, but even more on culture. To be successful, you need to validate assumptions quickly and cheaply: track everything, automate everything, and treat continuous optimization and continuous delivery as essential.
Data2030 Summit MEA: Data Chaos to Data Culture, March 2023 (Matt Turner)
There is much more to becoming truly data-driven and delivering the value of data investments. Overcoming the “Data Chaos” means making data accessible with data governance, creating a data culture, and sharing knowledge through collaboration and data literacy to put data into action. This session will help enrich your data strategy and enable your organization to deliver data value.
Data2030 Summit: Data Megatrends, Sept 2022 (Matt Turner)
The next challenge in data is rapidly becoming clear: how can we scale data value and bring data driven decision making to everyone? We’ve made tremendous progress in bringing data together. The megatrends in data - data mesh, data fabric, modern data stack - are all about crossing the last mile to get data to everyone, not just the data experts. How can we empower everyone to better use data? Are the megatrends the road to actually scaling data value? And what does that mean for the data teams and data engineers creating systems and delivering dataops?
There is much more to becoming truly data-driven. Overcoming the “Data Chaos” means democratizing knowledge through collaboration, promoting data literacy, and building your data culture. The aim of this session is to help enrich your data strategy and enable your organization to make better use of your data assets.
Presentation at Data Innovation Summit 2021. Trusted, well-managed data is key to AI and machine learning success. Data citizens need data insights, and data scientists need to spend more time building models. Everyone wants to spend less time finding, discovering, and munging data and ensuring data quality to deliver business results. However, traditional data approaches lock data away, and slow AI implementation leaves much of this work on the data practitioner’s shoulders. This session will cover how AI is also helping solve these problems. New data tools that combine automation with human expertise are enabling data and knowledge sharing (including new data classes like IoT data), data democratization, and cloud migration. AI-driven data enablement ensures everyone can find the right data and make intelligent use of it. Join us for a lively discussion on the most critical resource for AI: your data.
Slides from my talk at Contech Forum 2021, an update of the November 2020 talk on digital equity work in the Bronx and lessons for information providers in our changing world. This session looks at the progression of the Bronx Digital Equity Coalition and the development of principles for information and technology access that can also apply to information-provider communities.
Securing the Right Metadata and Making it Work for You (Matt Turner)
Metadata is a critical asset for the media and information industries. This session will talk about what metadata is, what you can do with it, where it is and how you can make it work for you. Presented at the Outsell Signature Event 2020 as part of the Master Class series.
Here are the resources in this talk:
- Merriam-Webster metadata definition: https://www.merriam-webster.com/dictionary/metadata
- Emerging Trends in Metadata Management, Dataversity 2016: https://content.dataversity.net/DVMetadataRP_DownloadWP.html
- Smart Content Kickoff, March 2020: https://www.slideshare.net/barleyfish/m-turner-smart-content-march-2020
- Wolters Kluwer, Search That Talks Back: https://youtu.be/US0_zwa8kmI
- BSI Medical Device Navigator: https://compliancenavigator.bsigroup.com/
- Dodge Data: Big, Unstructured Data Management & Visualization Meets the Construction Industry (Isaac Sacolick, 2014): http://events.tvworldwide.com/Events/IIS-2014/VideoId/536/UseHtml5/True
- Nature.com: AI peer reviewers unleashed to ease publishing grind: https://www.nature.com/articles/d41586-018-07245-9
- Wired: With Deep Learning, Disney Sorts Through a Universe of Content: https://www.wired.com/wiredinsider/2019/12/deep-learning-disney-sorts-universe-content/
- Pearson Efficacy Framework: https://www.pearson.com/content/dam/one-dot-com/one-dot-com/global/Files/efficacy-and-research/methods/Efficacy-Workbook.pdf
- Sven Fund, We Need Integrated Publishing: https://onlinelibrary.wiley.com/doi/abs/10.1087/20130111
- Cambridge Semantics, What is Linked Data: http://www.cambridgesemantics.com/semantic-university/what-is-linked-data
- Alan Morrison, Semantics Keynote: https://www.slideshare.net/AlanMorrison/collapsing-the-it-stack-clearing-a-path-for-ai-adoption?from_action=save
- Recording of Alan's talk (+18 min): https://www.facebook.com/fhstp/videos/308669336596727/
Three Cool Things You Can Do with Standards (Matt Turner)
Standards organizations deliver some of the world's most critical information to ensure interoperability and safety across every industry.
I gave this talk at the Standards Technology and Business Forum, covering what people are doing today and how standards organizations can:
1) Better Deliver What Customers Want
2) Connect Standards to Their Customer's Data
3) Deliver Standards as Data
MarkLogic: Industrialize Your Data, IoT Berlin, Sept 2019 (Matt Turner)
Data is a big part of the Industry 4.0 conversation, but it’s not often a topic in its own right. IoT devices and sensors are creating more data than ever, digital twins need accurate data to impact operations, and the digital thread requires integrated and accessible data. These concepts are all key to industrial organizations being able to improve their products and services, better navigate increasingly complex business environments, and transform for the future. And they all need data to succeed.
But getting value from all that data isn’t easy. Many traditional data approaches fall far short of being able to manage the complexity and variability of today’s industrial data and, critically, being able to make that data securely and operationally available.
This talk will focus on how leading industrial organizations like Airbus, Eaton, Siemens, Chevron and Boeing are tackling these challenges head on with a new, data-centric approach called the Data Hub. These organizations are “industrializing their data” – investing in data as an asset that’s as essential as the people, processes and materials powering it. With the Data Hub, their projects are creating efficiency, improving quality and safety, and enabling workers today while building a foundation of data across their organizations.
Join this session to learn how you too can industrialize your data and hear about the leaders delivering on the vision of Industry 4.0!
In 2012 BBC helped London present the most digital Olympic experience to date. The data platform behind the online experience helped connect audiences to athletes and events and drive record numbers of live and catch-up streams. This platform and the results remain highly relevant to anyone working to connect audiences with content, and are a benchmark for the next Olympics coming up in Tokyo 2020
Smart Content Summit: Unlock the Value with the Right Data PatternMatt Turner
Smart Content as a strategy has been validated by the industry - collecting and managing all the data around the content is now a core activity to get the content to your fans and customers. Making it work is still hard and this talk looks at the successful projects from BBC, NBCU, ETC and Disney and examines the way they used NoSQL and semantics as well as the Operational Data Hub Pattern that puts this new technology into action ... and actually delivers Smart Content
As organizations grapple with ever-increasing threats to security, the focus is shifting from just monitoring and protecting access. That approach protects the organization with a 'hard outer shell' or perimeter. With insider threats and complex, distributed work environments, media & entertainment organizations need to focus inside the shell, monitoring access points to critical data within the organization and securing that data. This session will review the latest security practices, including cyber situational awareness and advances in data management that are enabling organizations to protect their data at the source while not crippling the critical role that access to data plays in the operations of today's entertainment organizations.
Media publishing meetup ocean of data july 2016Matt Turner
Slides from the New York Media/Publishing Meetup held on July 28th. The impact of the ocean of data, focusing on customer data and how to collect it and make an impact
Presented at Metadata Madness in NYC, March 2016. Metadata is more critical than ever and its impact is not just distribution but now extends across every area of the digital supply chain. Traditional methods of managing data with rows and columns create data that can't easily be shared, and result in this critical data being in silos. Smart Content - a new approach using NoSQL and Semantics - enables this data to truly be shared across the supply chain, including into production, where valuable data is created but then lost in most organizations.
Metadata Madness: Semantics Takes Center StageMatt Turner
There is a big change happening right now in how we think about managing metadata and the impact it can have – including powering the top new app in the nation. Far from the afterthought of administrative tagging, metadata is now critical to the digital marketplaces and the effectiveness of digital products. Semantic metadata is a new approach that brings the flexibility necessary to capture the complete picture and to create and manage the new and ever-changing associations and relationships. This session will discuss the impact of semantics on metadata and demo it in action, revolutionizing what metadata can do for content.
New Trends in Data Management in the Information Industries Matt Turner
Presentation from the Copyright Clearance Center Distinguished Speaker Series presentation February 26th, 2015.
As the publishing industry is transforming from form based, single purpose products to information providers focused on the curation of data and content tailoring its delivery to the role, action and location of the users, there has been a parallel transformation in the management of the data and content that are the raw materials for these products.
Matt Turner, MarkLogic’s CTO for Media and Publishing, will talk about the new generation of information management technology focusing on how they are helping transform the information industries and revolutionize how people think about managing data and content.
Topics that will be covered include NoSQL / new-generation databases, search, and semantic technology, plus information product trends, with examples of innovative teams leveraging these new capabilities.
Smart Content Summit - Unlocking Content With Semantics and MetadataMatt Turner
My presentation from the MESAlliance Smart Content Summit in LA on November 5th.
The conference was focused on making content smarter in every phase of the content lifecycle with a new twist: From inception to infinity - because we don't know what is coming down the line
My talk set the stage for some of these unexpected shifts and covered the role that traditional technology has played in perpetuating silos that make it hard to adjust.
And how new Technologies like NoSQL and semantics are making it possible to not only collect more information but to do it more efficiently.
Enjoy!
Kloptek Publishers Forum Keynote May 2014Matt Turner
Reinvention, Revolution and Revitalization: Real Life Tales from Publishing’s Front Lines
As the information provider and publishing industries maintain a constant state of change, leading organizations are developing unique innovation and product strategies. This session will explore these strategies, including:
(1) Innovation hubs: enabling new products while maintaining the core
(2) Data driven publishing: the complete picture of your users and markets
(3) Follow the content: where your information is used beyond the touch points of publishing and research
With examples from the front lines of publishers and information providers, this session will discuss how these strategies are allowing organizations to reinvent themselves in the continuing digital revolution and bringing new vitality to the ever changing role of publisher and information provider.
De Gruyter selected MarkLogic for their Next Generation publishing platform in 2010. Many of the world’s leading publishing houses are customers of MarkLogic, e.g. Elsevier, WILEY, Oxford University Press, Springer etc.
Do domestic dogs interpret pointing as a command?
Animal Cognition (2012): 1-12 , November 09, 2012
By Scheider, Linda; Kaminski, Juliane; Call, Josep; Tomasello, Michael
DOGS GET Context!
Hi everyone,
I’m Matt Turner, CTO for Media & Manufacturing. That means I take care of all our customers that make the great things you all know and love
And I’m very glad to be here today to talk about the impact of Linked Data and in particular how investing in this data can help transform your industry.
I’m going to start with something that seems obvious … the importance of data
But, before I get into this, I do want to remind everyone that going back even 5 years, this was not an obvious topic.
We were talking about apps and especially the mobile experience and the new wave of BI … but not about the data itself as a topic
But there were people having the conversation in their industry
We do a lot of work with publishers, and one of the primary voices for change has been Dr. Sven Fund – then the CEO of De Gruyter, a publisher over in Germany.
He wrote what I think of as a battle plan for the modern publisher, called Integrating Publishing.
It’s a data-driven approach to rethinking the whole business around using data at every stage. From planning what content to invest in, to creating and tailoring it, to knowing how it impacts your customers, data can and should play a role … and Sven laid out the plan to get publishers to that point. This was quite a change for an industry still just thinking about content.
Shelly Palmer is another voice that was early with a message about data. He worked mostly within Media but his message was to every organization highlighting how the game has changed.
He says “Data Rich or Data Poor” – that is the ONLY game. Every company is now competing on the battleground of data. It’s not your revenue, your number of customers or their engagement – it’s the data you gather that actually matters. What’s more, you aren’t competing against what you think of as your competitors. It’s Google, Apple, Facebook … and, way above all of them, Amazon.
Shelly says this to bring people’s attention to the importance of data.
And he’s not alone – he is joined by my colleague Michel de Ru. Michel works across a number of industries and at the MarkLogic 360 event last year he issued a call to arms:
Industrialize your data!
You invest in your processes, your machinery, your people and take care of your capital. And you need to do the same thing with your data.
Think about how you manage it and, just like your machinery and other assets, industrialize how you deal with it
And they aren’t alone.
Who has heard this phrase Data is the new Oil?
It’s everywhere … someone is even saying it’s not the new oil, it’s the new nuclear. I guess because it keeps delivering value forever?
In fact there is so much about this, if you search for Data is the new oil infographic you get 13 million hits!
This is my favorite – see the data in the ground – just pump it out and – presto – you get your value!
Right? It’s that easy, right?
And on this topic, we are just starting to hear from the experts.
I hold Alan Morrison up as one of these visionaries. He gave a keynote at the Semantic conference in August that was a real call to arms for everyone in THIS room to evangelize that you do need more than just data.
He was specifically talking about the vast gap between the vision of a unified IT stack and being able to leverage AI and the reality of the many silos of applications.
He specifically is looking at who is out there paying attention to this problem and he put up this slide – the top 10 companies in the world
And of them, fully 9 are doing more than just collecting data. They are investing in Linked Data – creating knowledge graphs and connecting their data to realize its value
He isn’t alone – Kurt Cagle makes a bold statement that the rise of ontology will be a critical business advantage.
And then there is this paper about the state of AI. Alan also goes into this in his talk – AI without the meaning and connections in the data is just going to fall short.
Specifically they make this statement – that nearly every problem comes down to graphs of relationships among entities!
So what’s going on here?
Well to get to the heart of this, I want to ask you ONE question. Get ready because I’m going to ask you to raise your hand and take this very seriously
Who’s smarter, a dog or a chimpanzee?
---
OK – everybody has to vote
--
And it was a trick. Of course chimpanzees are much smarter. They can drive cars, talk in sign language … they are way, way smarter.
But … there is something a dog can do that a chimp just can’t do
And that is understand context.
If you hide a treat and then point to it, the chimp will totally ignore you and randomly guess. No matter what you do, the chimp just doesn’t care what you think.
But if you do the same thing with the dog, the dog will look at you to understand what is going on. In fact they will look at the right side of your face which is how we understand emotions and act on what they see.
This means you can point to the treat and they will go right to it. Puppies will do it. Dogs are so good at this, you can just move your eyes.
So what does this all mean for data?
Well, machines don’t get context!
They are the chimps of the technology world. They can tell you there’s a link or a picture on that page … but they can’t tell you how they fit together, or what it’s all about!
When you take this to the world of data, and in particular the data layer that can run your business
this is what you get – traditional data structures that just fall short
You have to define everything up front – all your data and everything your organization does …
And then categorize it. In no way will this work – you will end up stripping off context, sometimes a layer at a time. You can’t share this data across your organization, and so you get what Alan was talking about in terms of the multiple layers of applications and data
One of our customers talks about the result of all this changing of data as operating on opinion, not data!
We at MarkLogic started to address this with a document data model. This foundation of NoSQL means that you can much more flexibly store the data. You didn’t have to define it all up front and now you could adapt as things changed.
But it was still cumbersome to keep track of meaning and context … adding more elements could be very tricky
Enter Linked Data – this concept of describing the linkages in data, and specifically of using triples or RDF as an additional data model, is a way to bring, with data, what the machine is lacking
You can now actually describe the concepts around the data. And what is more important – you can also describe the source, the provenance and even the usage of the data.
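To make that concrete, here is a minimal sketch of the triple idea in plain Python – just (subject, predicate, object) tuples, no RDF library. Every identifier and predicate below is invented for illustration; a real system would use RDF with proper IRIs.

```python
# Facts, context, provenance and usage all expressed the same way: as triples.
triples = {
    ("customer:42", "hasName", "Acme GmbH"),
    ("customer:42", "locatedIn", "country:DE"),
    ("country:DE", "memberOf", "region:EU"),
    # data about the data -- where it came from and how it may be used
    ("customer:42", "sourceSystem", "crm-export-2019-09"),
    ("customer:42", "allowedUsage", "gdpr:consented"),
}

def objects(subject, predicate):
    """All objects linked from `subject` via `predicate`."""
    return {o for s, p, o in triples if s == subject and p == predicate}

# Context comes from following links, not from a fixed up-front schema:
print(objects("customer:42", "locatedIn"))  # {'country:DE'}
print(objects("country:DE", "memberOf"))    # {'region:EU'}
```

The point is that provenance and usage rules live in the same graph as the business facts, so they travel with the data.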
Combined, these two data models are key to creating that data layer and enabling you to actually make data the foundation of your business
This is critical – because there is a balance here. In the world of semantics and linked data, there are huge gains to be made in creating ontologies that match the real world and then linking data to those ontologies
But there is also a role for just a document – things that belong to the document itself, like dates, titles and of course the actual content – these are perfectly OK in the world of XML. And in this model the connecting triple is part of the document, making it a graph that enables you to have data integrity.
At MarkLogic we’ve been using this combined model in an architectural pattern called an Operational Data Hub.
This pattern details (and I’m not going to go through it all) taking in data, and then, with many different approaches, curating that data and creating the context around it.
You end up with documents and triples that can provide that single view of the data which, as Alan and others talk about, can be the foundation for a data-driven organization
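As a rough sketch of that flow (the field names and the entity-resolution table are invented for illustration, not MarkLogic's actual API): raw records come in as-is, and curation produces canonical documents plus context triples.

```python
# Two raw source records describing the same customer in different shapes.
raw_sources = [
    {"sys": "billing", "cust_name": "ACME GMBH", "cust_id": "B-42"},
    {"sys": "crm", "name": "Acme GmbH", "id": "42"},
]

# Entity resolution, radically simplified to a lookup table.
CANONICAL_ID = {"B-42": "customer:42", "42": "customer:42"}

def curate(record):
    """Harmonize one raw record into a canonical document plus context triples."""
    entity = CANONICAL_ID[record.get("cust_id") or record["id"]]
    doc = {"entity": entity, "source": record["sys"]}   # the document model
    context = [(entity, "mentionedIn", record["sys"])]  # the triple model
    return doc, context

hub = [curate(r) for r in raw_sources]
# Both records now resolve to one entity, and the lineage is kept as triples:
print({doc["entity"] for doc, _ in hub})  # {'customer:42'}
```

The key design choice is that curation does not discard the raw shape – it adds the canonical view and the context alongside it.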
Let’s take a look at how these patterns are letting organizations leverage their data
Let’s start in the complex regulatory compliance sector of financial services
ABN AMRO wanted to take a new approach to compliance. Instead of building data sets for each issue and process, they wanted to create a platform for regulatory compliance so they could respond to future needs
This required them to think about the problem differently – create data that could then be used in what they called multiple compliance schemas.
They also realized this data would be a powerful asset across the company.
They did this first to meet the trade-store regulation MiFID II, and then created another hub for GDPR
These projects all used semantics to describe the entities and their relationships in the bank
But to really be able to use the data, you need to understand its provenance
To do this, they mapped all the entities
But they also added PROV-O to the picture to record the data lineage.
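PROV-O is the W3C provenance ontology; its wasDerivedFrom relation can be mimicked with plain tuples to show the idea. The entity and system names below are invented for illustration.

```python
# Lineage recorded as PROV-O-style triples.
lineage = {
    ("profile:42", "prov:wasDerivedFrom", "crm:record-42"),
    ("profile:42", "prov:wasDerivedFrom", "trades:acct-42"),
    ("crm:record-42", "prov:wasDerivedFrom", "import:2019-09-01"),
}

def sources(entity):
    """Walk wasDerivedFrom links to find every upstream source of an entity."""
    found = set()
    frontier = [entity]
    while frontier:
        current = frontier.pop()
        for s, p, o in lineage:
            if s == current and p == "prov:wasDerivedFrom" and o not in found:
                found.add(o)
                frontier.append(o)
    return found

# Not just what the bank knows about the customer, but where it came from:
print(sources("profile:42"))
```

Because lineage is just more triples, the same query machinery answers both "who is this customer?" and "where did we learn that?".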
Now they know not just about customers and their interactions inside the bank and on their own
They also know where that data came from
This lets them see a much richer set of data – tying together internal and external systems
With this data they can now deliver the much more complete view of the customer required by GDPR, describe the entities correctly AND know the source and all the details about that data
Keep in mind this also follows the document and semantics route – so that profile is also in the system enabling them to see the details of the customer
This platform delivers that vision of universal enterprise data – or the semantic data layer as Alan talks about it.
Focused on compliance, it is an engine for handling compliance data rather than just a single solution
But it is now also a view of data across the organization that is a considerable asset to the bank
In addition to improving your own organization, you can also use this approach to improve the customer experience.
Mitchell1 is in the car repair business. They provide the tools to garages to help the shops and the mechanics make the repair experience better.
They started with the actual information on how to make the repair. This information went from paper to digital and is exploding in complexity as cars get more sophisticated.
But they also provide the systems that run the shop – scheduling, diagnostics, parts ordering, the actual repairs and billing.
One cool thing is that they also get tips from the mechanics about how to fix the car
To put this data into action, they created a data hub that linked all this data
And the link is this – every part on every car that is sold going back 30 years or more!
They invested in ontology development to create this foundational data – how parts, components and systems fit together
And then spent 2+ years creating the data set and linking the data.
Using this, they are trying to make the mechanics job better – for instance knowing what is wrong almost as soon as they see the car.
This is a composite screen that shows the parts that you will probably have to order from a set of error codes.
This saves everyone time getting to answers
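A toy version of that lookup – assuming a graph that links diagnostic error codes to components, and components to orderable parts. The codes and part numbers here are made up for illustration, not Mitchell1's actual data.

```python
# Two hops through the parts graph: error code -> component -> parts.
code_to_component = {
    "P0301": "cylinder-1-ignition",
    "P0420": "catalytic-converter",
}
component_to_parts = {
    "cylinder-1-ignition": ["spark-plug-7090", "ignition-coil-C1521"],
    "catalytic-converter": ["cat-converter-54209"],
}

def parts_for_codes(error_codes):
    """Follow the graph from error codes to the parts likely needed."""
    parts = []
    for code in error_codes:
        component = code_to_component.get(code)
        parts.extend(component_to_parts.get(component, []))
    return parts

print(parts_for_codes(["P0301", "P0420"]))
# ['spark-plug-7090', 'ignition-coil-C1521', 'cat-converter-54209']
```

The real system needs the ontology work described above precisely so that these hops exist for every part on every car, for your exact specification.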
And as cars get connected, this is starting to happen outside of the shop - cars sending codes and getting ready for repair and maybe even making appointments
You can only do this if all the data is connected
And they are also giving information about the long term – for instance what is likely to break on my car.
This used to be in the heads of experts … but now it’s done with data
Don’t forget this is your car, with your exact specification
This data hub lets Mitchell1 make car repair a much better experience
And finally lets talk about using data to make a difference
Sensing Clues is an organization here in the Netherlands dedicated to conservation, and specifically to protecting endangered species
They are focused on the interactions between people and these animals
This is both when people go into the animals’ environment – which is often very bad, for instance poachers
But also protecting the animals when they go into human environments
To help with this mission they have collected a lot of data. Incident reports, field notes, signal data and even pictures and videos
And they have integrated all this data so they can look at an area.
Part of the project is to get information to the right people, so this lets them send updates and warnings to rangers and other workers when they are actually out in the field – or about to go out, since there isn’t a lot of connectivity
This also allows them to understand what is happening. And to do this they created a semantic layer that defined concepts and linked the data
This helps them gain insights into what is happening – this is a zoom into the data for a specific area
And it shows the different types of interactions – the professional incidents, likely poachers, and then where arrows are used which is a very different case
They can then go back to the map and be prepared for what the situation is and also use this information for different programs to prevent both types of incidents
And this also helps them understand other incidents. For instance why were warthogs showing up? Does this mean there are poachers?
Well, actually, because of the context of the data, they can tell that these are not usually poachers – they are people coming into the area, camping out for 6-7 days and making charcoal. Not as high a priority, and some good news in the data
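A radically simplified sketch of that kind of context-driven classification – the observation fields and the rules here are invented, just to show how linked evidence changes the interpretation of a sighting.

```python
def classify(observation):
    """Classify a field observation using the context linked to it."""
    evidence = observation["evidence"]
    if "arrows" in evidence:
        return "poaching:likely"        # the very different case
    if "charcoal-kiln" in evidence and observation["stay_days"] >= 5:
        return "charcoal-making"        # lower priority, good news
    return "unknown"

# The same "people camping in the area" signal, two different conclusions:
print(classify({"evidence": ["campsite", "charcoal-kiln"], "stay_days": 6}))
print(classify({"evidence": ["campsite", "arrows"], "stay_days": 1}))
```

Without the linked context (the kiln, the length of stay, the arrows), both observations would look identical and get the same response.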
We volunteered our efforts, and as they continue to develop their data layer to make an impact for the animals, I’m sure they could use more help
One more factor in creating this type of semantic data layer is that in addition to using the data for your internal uses, you can share it.
This is what Springer does with their rich database of scientific information … they have the actual product SpringerLink, and then they just take the UI off and offer APIs for text mining and data access.
So let’s take a look back – you can impact your organization, help your customers and even help the world if you have your data together
So let’s go back to that infographic.
Maybe it is actually just so easy – if you add a few things.
First consider the context of the data where you find it. And capture all of that
Then let’s think about the usage and the different contexts everyone will need when they access it!
Using this as a guide, let’s think about putting a data hub right here in the middle. This won’t be the only source of data … and even for a single ‘well’ you need a place to collect the data and make it universally accessible.
I think if you do this – if you apply the principles of linked data to your operational data … well maybe you can bring some of those doggie smarts to your organization.
Thank you!