Linklaters is one of the world’s leading global law firms. The firm has a wealth of high-value information held within our systems; however, due to the nature of these systems, it is not always easy to leverage this value. Our goal was to improve decision making across the firm by transforming access to, and the ability to query, data. To do this we wanted a solution that would combine our information, be easy to extend in an iterative fashion, and leverage our existing investment in business intelligence. To achieve this we chose to create a graph-based warehouse using Linked Data. Data from our SAP Business Warehouse was combined with flat-file and XML feeds from our systems of record and transformed into RDF via ETL services that loaded it into a triple store. To provide simple integration with our existing environment, a SPARQL-to-OData service was deployed, creating an OData-compliant endpoint. Finally, a model-driven, mobile-friendly user interface was created, allowing users to query, review results and explore the underlying graph. This talk will describe the approach we took and the lessons learnt.
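The SPARQL-to-OData bridge described above can be pictured as translating a graph query into an OData-style URL that existing BI tools understand. The sketch below is illustrative only: the endpoint, vocabulary, and entity names are invented, not Linklaters' actual schema or API.

```python
from urllib.parse import urlencode

def sparql_query(matter_type: str) -> str:
    """A SPARQL query against the triple store (hypothetical vocabulary)."""
    return f"""
    PREFIX ex: <http://example.org/schema#>
    SELECT ?matter ?client WHERE {{
        ?matter a ex:Matter ;
                ex:matterType "{matter_type}" ;
                ex:client ?client .
    }}
    """

def odata_url(matter_type: str) -> str:
    """The equivalent OData request a bridge service might expose."""
    params = urlencode({
        "$filter": f"matterType eq '{matter_type}'",  # OData system query options
        "$select": "matter,client",
    })
    return f"https://warehouse.example.com/odata/Matters?{params}"

url = odata_url("M&A")
```

A BI tool that already speaks OData can then consume the graph warehouse without knowing any SPARQL.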
Ethics & (Explainable) AI – Semantic AI & the Role of the Knowledge Scientist – Stratos Kontopoulos
Presentation for the NexTech Experts Panel II during the NexTech 2021 Congress (https://www.iaria.org/conferences2021/NexTech21.html).
Discusses the emerging and versatile role of the Knowledge Scientist in designing and developing explainable Semantic AI applications.
Building trust and accountability - the role User Experience design can play ... – Pistoia Alliance
In this webinar our panel of UX specialists give a brief introduction to User Experience before presenting the design opportunities UX can bring to AI. We all know that AI has great potential, but it has some significant hurdles to overcome, not least the human aspects of trust and ethical considerations when designing for the life sciences.
Enterprise Data Governance: Leveraging Knowledge Graph & AI in support of a d... – Connected Data World
As one of the largest financial institutions worldwide, JP Morgan relies on data to drive its day-to-day operations against an ever-evolving regulatory regime. Our global data landscape poses particular challenges for effectively maintaining data governance and metadata management.
The data strategy at JP Morgan aims to:
a) generate business value
b) adhere to regulatory & compliance requirements
c) reduce barriers to access
d) democratize access to data
In this talk, we show how JP Morgan leverages semantic technologies to drive the implementation of our data strategy. We demonstrate how we exploit knowledge graph capabilities to answer:
1) What Data do I need?
2) What Data do we have?
3) Where does my Data come from?
4) Where should my Data come from?
5) What Data should be shared most?
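Lineage questions such as "Where does my Data come from?" reduce to traversals over the knowledge graph. A minimal sketch under stated assumptions: the graph here is a toy in-memory structure with invented dataset names, not JP Morgan's actual metadata model.

```python
from collections import deque

# Toy lineage graph: dataset -> datasets it is derived from (names invented).
DERIVED_FROM = {
    "risk_report": ["positions", "market_prices"],
    "positions": ["trade_capture"],
    "market_prices": [],
    "trade_capture": [],
}

def upstream_sources(dataset: str) -> set:
    """All transitive upstream datasets feeding `dataset` (breadth-first walk)."""
    seen, queue = set(), deque([dataset])
    while queue:
        for parent in DERIVED_FROM.get(queue.popleft(), []):
            if parent not in seen:
                seen.add(parent)
                queue.append(parent)
    return seen

sources = upstream_sources("risk_report")
# Every dataset that ultimately feeds the risk report, however indirectly.
```

The same traversal run in the opposite direction answers "What Data should be shared most?" by counting downstream consumers.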
Triplestores and inference; applications in finance and text mining; projects and solutions for financial media and publishers.
Keystone Industrial Panel, ISWC 2014, Riva del Garda, 18 Oct 2014.
Thanks to Atanas Kiryakov for this presentation; I just cut it to size.
Relationships Matter: Using Connected Data for Better Machine Learning – Neo4j
Relationships are highly predictive of behavior, yet most data science models overlook this information because it's difficult to extract network structure for use in machine learning (ML).
With graphs, relationships are embedded in the data itself, making it practical to add these predictive capabilities to your existing practices.
That’s why we’re presenting and demoing the use of graph-native ML to make breakthrough predictions. This will cover:
- Different approaches to graph feature engineering, from queries and algorithms to embeddings
- How ML techniques leverage everything from classical network science to deep learning and graph convolutional neural networks
- How to generate representations of your graph using graph embeddings, create ML models for link prediction or node classification, and apply these models to add missing information to an existing graph/incoming data
- Why no-code visualization and prototyping is important
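One of the classical "queries and algorithms" approaches to graph feature engineering mentioned above is the common-neighbours heuristic for link prediction. This is a generic illustration in pure Python, not Neo4j's implementation; the graph and names are invented.

```python
# Toy undirected graph as an adjacency map (invented data).
ADJ = {
    "alice": {"bob", "carol"},
    "bob": {"alice", "carol", "dave"},
    "carol": {"alice", "bob", "dave"},
    "dave": {"bob", "carol", "eve"},
    "eve": {"dave"},
}

def common_neighbours(u: str, v: str) -> int:
    """Score a candidate link (u, v) by the neighbours the two nodes share."""
    return len(ADJ[u] & ADJ[v])

# alice and dave share two neighbours (bob, carol), so a link-prediction
# model would rank the missing alice-dave edge above, say, alice-eve.
```

Graph embeddings generalise this idea: instead of one hand-picked score, each node gets a learned vector whose geometry encodes such structural similarity.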
AI-SDV 2021: Francisco Webber - Efficiency is the New Precision – Dr. Haxel Consult
The global data sphere, consisting of machine data and human data, is growing exponentially, reaching the order of zettabytes. In comparison, the processing power of computers has been stagnating for many years. Artificial Intelligence – a newer variant of Machine Learning – bypasses the need to understand a system when modelling it; however, this convenience comes with extremely high energy consumption.
The complexity of language makes statistical Natural Language Understanding (NLU) models particularly energy hungry. Since most of the zettabyte data sphere consists of human data, such as texts or social networks, we face four major obstacles:
1. Findability of Information – when truth is hard to find, fake news rule
2. Von Neumann Gap – when processors cannot process faster, we need more of them (and more energy)
3. Stuck in the Average – when statistical models generate a bias toward the majority, innovation has a hard time
4. Privacy – if user profiles are created “passively” on the server side instead of “actively” on the client side, we lose control
The current approach to overcoming these limitations is to train on larger and larger data sets across more and more processing nodes. AI algorithms should instead be optimized for efficiency rather than precision. By that measure, statistical modelling is disqualified as a brute-force approach for language applications. As a replacement for statistical modelling and arithmetic, set theory and geometry seem a much better choice, as they allow the direct processing of words instead of their occurrence counts, which is exactly what the human brain does with language, using only 7 watts!
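The set-theoretic alternative the speaker argues for can be sketched as follows: represent each word as a sparse binary feature set and compare words by overlap rather than by arithmetic over occurrence counts. The feature names below are invented for illustration; this is the general idea, not Webber's actual encoding.

```python
# Words as sparse binary feature sets (invented, hand-written features).
WORD_FEATURES = {
    "dog": {"animal", "pet", "mammal", "barks"},
    "cat": {"animal", "pet", "mammal", "meows"},
    "car": {"vehicle", "wheels", "engine"},
}

def overlap(w1: str, w2: str) -> float:
    """Jaccard overlap: shared features divided by all features (set theory)."""
    a, b = WORD_FEATURES[w1], WORD_FEATURES[w2]
    return len(a & b) / len(a | b)

# "dog" and "cat" share animal/pet/mammal, so their overlap is high;
# "dog" and "car" share nothing, so their overlap is zero.
```

Comparing sets by intersection needs no floating-point training loop, which is the efficiency argument in a nutshell.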
We have critically evaluated how AI will shape integration use cases, their feasibility, and timelines. Emerging Technology Analysis Canvas (ETAC), a framework built to analyze emerging technologies, is the methodology of our study.
We observe that AI can significantly impact integration use cases and identify 13 AI-based use case classes for integration. Points to note include:
Enabling AI in an enterprise involves collecting, cleaning up, and creating a single representation of data as well as enforcing decisions and exposing data outside, each of which leads to many integration use cases. Hence, AI indirectly creates demand for integration.
AI needs data, which in some cases leads to significant competitive advantages. The need to collect data will drive vendors to offer most AI products in the cloud through APIs.
Due to a lack of expertise and data, custom AI model building will be limited to large organizations; it is hard for small and medium-sized organizations to build and maintain custom models.
"Big data" is a broad term that encompasses a wide range of data and content. Big data offers new approaches to analysis and decision making. At first glance big data and IP may seem to be opposites, but they have more in common than one might think. This talk will focus on how big data will impact, and be impacted by, IP. One of the biggest promises of big data is the possibility to re-use data produced by different sources, create new services, or predict the future via the analysis of correlations. In this context, how can companies protect information assets and analytical skills? What new skills are required to search and analyze large numbers of datasets in real time? Big data will change not only patent information, but will also generate new types of patents.
Bringing Machine Learning and Knowledge Graphs Together
Six Core Aspects of Semantic AI:
- Hybrid Approach
- Data Quality
- Data as a Service
- Structured Data Meets Text
- No Black-box
- Towards Self-optimizing Machines
Overview of structured search technology. Using the structure of a document to create better search results for document search and retrieval.
How both search precision and recall is improved when the structure of a document is used.
How a keyword match in a title of a document can be used to boost the search score.
Case studies with the eXist native XML database.
Steps to set up a pilot project.
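The title-boost idea above can be sketched in a few lines: a keyword match in the title multiplies the base relevance score, lifting structurally relevant documents. The boost weight, scoring scheme, and documents are invented for illustration; this is not eXist's actual ranking function.

```python
TITLE_BOOST = 3.0  # illustrative weight; real systems tune this empirically

def score(doc: dict, keyword: str) -> float:
    """Count keyword matches in the body, then boost if the title also matches."""
    kw = keyword.lower()
    base = doc["body"].lower().split().count(kw)
    if kw in doc["title"].lower().split():
        base *= TITLE_BOOST
    return base

doc_a = {"title": "XML search basics", "body": "search search indexing"}
doc_b = {"title": "Database internals", "body": "search search indexing"}
# doc_a outranks doc_b for "search": identical bodies, but its title matches.
```

The same pattern extends to any structural element a document exposes, such as abstracts, headings, or captions.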
The Web of Linked Open Data, or LOD, is the most relevant achievement of the Semantic Web. Initially proposed by Tim Berners-Lee in a seminal paper published in Scientific American in 2001, the Semantic Web envisions a web where software agents can interact with large volumes of structured, easy-to-process data. Users now have at their disposal the first mature results of this vision. Among them, and probably the most significant, are the various LOD initiatives and projects that publish open data in standard formats like RDF.
This presentation provides an overview and comparison of different LOD initiatives in the area of patent information, and analyses potential opportunities for building new information services based on largely available datasets of patent information. Information is based on different interviews conducted with innovation agents and on the analysis of professional bibliography and current implementations.
LOD opportunities are not restricted to information aggregators; they also extend to end-users and innovation agents that face the difficulties of dealing with large amounts of data. In both cases, the opportunities offered by LOD need to be assessed, as LOD has become a standard, universal method to distribute, share and access data.
AI is Not Magic: It’s Time to Demystify and Apply – Srinivasan Parthiban (VINGY... – Dr. Haxel Consult
The term Artificial Intelligence was first coined in 1956, and since then the technology has progressed, disappointed, and re-emerged. The prediction now is that AI will add $16 trillion to the global economy by 2030. AI is becoming as fundamental as electricity, the internet, and mobile were as they entered the mainstream. Not having an AI strategy in 2020 will be like not having a mobile strategy in 2010, or an internet strategy in 2000. As a result of technology advancements, AI-related patent applications have surged in recent years. The patent searchers, information professionals and bioinformatics researchers who have been collecting, organizing and analysing data are starting to move up the ladder with AI. Of course, AI can help you, your business, your employees, and your customers, but you need a prescriptive approach to harness its power and put AI to work. This presentation will take a glimpse under the hood of AI and look at some recent trends in data and analytics that are relevant to information professionals.
A primer on Blockchain, Semantic Web and Ricardian Contracts.
Semantic Blockchain is a proposal where the Semantic Web meets the blockchain. Combining these two technologies could provide the Semantic Web with a transparent proof-of-work and trust mechanism while conversely disambiguating data stored on the blockchain, solving one of the key challenges with Ricardian/smart contracts. This presentation will explore how these two technologies might be combined, using the example of a smart contract. However, the potential application is much broader and could provide a key backbone underlying the Internet of Things.
What AI is and examples of how it is used in legal – Ben Gardner
This presentation was given at Legal Geek on 10th Dec 2015. It is a scene-setting piece that looks to demystify artificial intelligence by looking beyond the hype.
Strategies for integrating semantic and blockchain technologies – Héctor Ugarte
Semantic Blockchain is the use of Semantic Web standards on blockchain-based systems. The standards promote common data formats and exchange protocols on the blockchain, making use of the Resource Description Framework (RDF).
Ontology BLONDiE for Bitcoin and Ethereum.
Research how to extract data from Ethereum.
Research how to store RDF data on Ethereum.
Prototype DeSCA: Ethereum application.
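One common way to connect RDF data to a blockchain, sketched here as a hedged illustration rather than BLONDiE's or DeSCA's actual design, is to canonicalise each triple as an N-Triples line and store only its hash on-chain: the chain holds a tamper-evident digest while the triple itself lives off-chain. The URIs below are invented.

```python
import hashlib

def triple_digest(s: str, p: str, o: str) -> str:
    """Serialise a triple as an N-Triples line and return its SHA-256 digest."""
    ntriple = f"<{s}> <{p}> <{o}> ."
    return hashlib.sha256(ntriple.encode("utf-8")).hexdigest()

digest = triple_digest(
    "http://example.org/block/42",
    "http://example.org/prop/minedBy",
    "http://example.org/miner/alice",
)
# The fixed-length hex digest is what a smart contract would record;
# anyone holding the triple can recompute it and verify integrity.
```

Hashing triple by triple keeps on-chain storage small, at the cost of needing an off-chain store for the actual RDF.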
Consuming Data From Many Platforms: The Benefits of OData - St. Louis Day of ... – Eric D. Boyd
The amount of data stored today is growing at a rapid rate. However, data is only valuable if it is accessible and can be consumed by people and systems. OData is an open protocol for sharing data that is positioned to solve this problem. OData uses standard HTTP following REST principles to make data accessible, and it has huge industry momentum with rapidly growing adoption. In this session, we will explore what OData is all about and how to expose relational and non-relational data as OData using WCF Data Services. We will then walk through developing apps to consume the OData feeds from multiple clients, including mobile devices. Finally, we will look at how you can benefit from using Azure to publish your data with OData services.
You've created web sites and spruced them up with jQuery to improve your user experience. You've played around with WCF Data Services to create lists of data from your server. But what happens when you bring the two of them together? It's like peanut butter and jelly; peas and carrots; well, you get the idea. This talk will describe how to connect your jQuery-based web application with your OData data service. If time permits, we'll also look at binding your OData feed to interesting jQuery plug-ins like jqGrid.
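Consuming an OData feed largely means unwrapping a predictable JSON envelope: OData responses carry result rows in a `value` array alongside an `@odata.context` annotation. The payload below is hand-written for illustration; the service URL and entity set are invented.

```python
import json

# A hand-written OData-style JSON response (hypothetical service and data).
payload = json.loads("""
{
  "@odata.context": "https://services.example.com/$metadata#Products",
  "value": [
    {"ID": 1, "Name": "Milk", "Price": 2.5},
    {"ID": 2, "Name": "Bread", "Price": 1.2}
  ]
}
""")

# Result rows always sit under the "value" key, so clients in any
# language can unwrap a feed the same way.
names = [row["Name"] for row in payload["value"]]
```

The same envelope is what a jQuery client would receive and bind to a grid widget.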
The latest presentation from empathic.Marketing by Andika Dewanto
For further information:
contact us: hello@empathic.marketing
Website: http://empathic.marketing
Strata 2015 presentation from Oracle for Big Data - we are announcing several new big data products including GoldenGate for Big Data, Big Data Discovery, Oracle Big Data SQL and Oracle NoSQL
Introduction to Advanced Analytics with SharePoint Composites – Mark Tabladillo
SharePoint 2013 provides a new feature called Composites, intended to provide what Microsoft calls a “do-it-yourself” framework for creating a business solution. These solutions can include web pages, web parts, and data sources. This session will demonstrate these features matched against the advanced data science technology available in SQL Server 2014. The specific technology includes Excel Services and Power BI.
Denodo DataFest 2016: Comparing and Contrasting Data Virtualization With Data... – Denodo
Watch the full session: Denodo DataFest 2016 sessions: https://goo.gl/Bvmvc9
Data prep and data blending are terms that have come to prominence over the last year or two. On the surface, they appear to offer functionality similar to data virtualization…but there are important differences!
In this session, you will learn:
• How data virtualization complements or contrasts technologies such as data prep and data blending
• Pros and cons of functionality provided by data prep, data catalog and data blending tools
• When and how to use these different technologies to be most effective
This session is part of the Denodo DataFest 2016 event. You can also watch more Denodo DataFest sessions on demand here: https://goo.gl/VXb6M6
Turn Data into Business Value – Starting with Data Analytics on Oracle Cloud ... – Lucas Jellema
Data Science, Business Intelligence, Data Lake, Machine Learning and AI. Diverse terminology with a common goal: leverage data to realize business value. Through consolidated insight and automated processing, predictions, recommendations and actions. Using visualizations, dashboards, reports, alerts, machine learning models. Based on data. Data retrieved from raw sources into a data lake, wrangled into cleansed, enriched, anonymized and aggregated data sets and turned into business intelligence or used for training machine learning models, that in turn power Smart Applications. This session walks the audience through the start to end data flow on Oracle Autonomous Data Warehouse, Analytics Cloud, Big Data Cloud & Data Integration Platform.
Managing Large Amounts of Data with Salesforce – Sense Corp
Critical "design skew" problems and solutions - Engaging Big Objects, MuleSoft, Snowflake and Tableau at the right time
Salesforce’s ability to handle large workloads and participate in high-consumption, mobile-application-powering technologies continues to evolve. Pub/sub models and investment in adjacent properties like Snowflake, Kafka, and MuleSoft have broadened the development scope of Salesforce. Solutions now range from internal, in-platform applications to fueling world-scale mobile applications and integrations. Unfortunately, guidance on these extended capabilities is not well understood or documented. Knowing when to move your solution to a higher order is an important architect skill.
In this webinar, Paul McCollum, UXMC and Technical Architect at Sense Corp, will present an overview of data and architecture considerations. You’ll learn to identify reasons and guidelines for updating your solutions to larger-scale, modern reference infrastructures, and when to introduce products like Big Objects, Kafka, MuleSoft, and Snowflake.
Modern Data Management for Federal Modernization – Denodo
Watch full webinar here: https://bit.ly/2QaVfE7
Faster, more agile data management is at the heart of government modernization. However, traditional data delivery systems are limited in their ability to realize a modernized, future-proof data architecture.
This webinar will address how data virtualization can modernize existing systems and enable new data strategies. Join this session to learn how government agencies can use data virtualization to:
- Enable governed, inter-agency data sharing
- Simplify data acquisition, search and tagging
- Streamline data delivery for transition to cloud, data science initiatives, and more
My slide deck about the Common Data Service and Model. This technology is under development, so the content is subject to change; it is based on the service as of 4/13/2018.
An Introduction to Data Virtualization in 2018 – Denodo
Watch full webinar on demand here: https://goo.gl/Rdrc1w
"Through 2020, 50% of enterprises will implement some form of data virtualization as one enterprise production option for data integration" according to Gartner. It is clear that data virtualization has become a driving force for companies to implement an agile, real-time and flexible enterprise data architecture.
Attend this session to learn:
• What data virtualization actually means and how it differs from traditional data integration approaches
• The all important use cases and key patterns of data virtualization
• What to expect in the upcoming sessions in the Packed Lunch Webinar Series, which will take a deeper dive into various challenges solved by data virtualization in big data analytics, cloud migration and various other scenarios
Agenda:
• Introduction & benefits of DV
• Summary & next steps
• Q&A
Augmentation, Collaboration, Governance: Defining the Future of Self-Service BI – Denodo
Watch full webinar here: https://bit.ly/3zVJRRf
According to Dresner Advisory’s 2020 Self-Service Business Intelligence Market Study, 62% of responding organizations say self-service BI is critical for their business. Looking deeper, today’s self-service BI goes beyond executives and business users being enabled by IT for self-service dashboarding or report generation. Predictive analytics, self-service data preparation, and collaborative data exploration are all facets of new-generation self-service BI. While the democratization of data for self-service BI holds many benefits, strict data governance becomes increasingly important alongside it.
In this session we will discuss:
- The latest trends and scopes of self-service BI
- The role of logical data fabric in self-service BI
- How Denodo enables self-service BI for a wide range of users
- Customer case study on self-service BI
Using OBIEE and Data Vault to Virtualize Your BI Environment: An Agile Approach – Kent Graziano
First we interview the users, then we design a reporting model based on those interviews. We follow that up with mounds of ETL development to load the new model, basically keeping the user community in the dark during all that development. Does this sound familiar?
This presentation will demonstrate an alternative approach using the Data Vault Data Modeling technique to build a flexible, easily-extensible “Foundation” layer in our data warehouse with an Agile, iterative methodology. Relying on the Business Model and Mapping (BMM) functionality of OBIEE, we can rapidly virtualize a dimensional reporting model using the pattern-based Data Vault Foundation layer to decrease the time, and money, it takes to get BI content in front of end users. Attendees will see a sample Data Vault model designed iteratively and deployed to the semantic model of OBIEE.
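What makes the Data Vault Foundation layer pattern-based and iterable is that every hub row is keyed by a hash of its business key. A minimal sketch of that convention, with invented column names and the common MD5 choice assumed rather than taken from this talk:

```python
import hashlib
from datetime import datetime, timezone

def hub_row(business_key: str, record_source: str) -> dict:
    """Build one Data Vault hub row (illustrative column names)."""
    normalised = business_key.strip().upper()  # normalise before hashing
    return {
        "hub_customer_hk": hashlib.md5(normalised.encode()).hexdigest(),
        "customer_bk": business_key,
        "load_dts": datetime.now(timezone.utc).isoformat(),
        "record_source": record_source,
    }

row = hub_row("CUST-00042", "CRM")
# The same business key always yields the same hash key, regardless of
# source system or casing, which makes repeated loads idempotent.
```

Because the key derivation is deterministic, new sources can be added iteratively without reworking existing hubs, which is what enables the Agile loading style described above.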
How to Empower Your Business Users with Oracle Data Visualization – Perficient, Inc.
With Oracle Data Visualization Cloud Service, your business users can perform self-service analytics, spot patterns, trends, correlations, and construct visual data stories for greater insight into how your product, service, or organization is performing.
In this webinar, we demonstrated how easily users can explore their data in new and different ways through stunning, automatically generated visualizations, promoting self-service discovery.
Discussion included:
-In-depth review of Oracle Data Visualization Cloud Service
-Connecting different data sets like HCM, ERP, Sales Cloud and more
-Mobile and security
-Demo taking a real-world business use case from end to end
Big Data Expo 2015 - Barnsten Why Data Modelling is EssentialBigDataExpo
Learn the tips and tricks how to handle Data Modeling in your Big Data environment. Mark will show how modeling will add value to the business and how to make your Big Data landscape transparent across the organization.
You will see the latest modeling techniques for Big Data and different types of modeling notations. Also you will learn how to integrate Data Modeling into your BI environment.
Next Gen Analytics Going Beyond Data WarehouseDenodo
Watch this Fast Data Strategy session with speakers: Maria Thonn, Enterprise BI Development Manager, T-Mobile & Jonathan Wisgerhof, Smart Data Architect, Kadenza: https://goo.gl/J1qiLj
Your company, like most of your peers, is undoubtedly data-aware and data-driven. However, unless you embrace a modern architecture like data virtualization to deliver actionable insights from your enterprise data, the worth of your enterprise data will diminish to a fraction of its potential.
Attend this session to learn how data virtualization:
• Provides a common semantic layer for business intelligence (BI) and analytical applications
• Enables a more agile, flexible logical data warehouse
• Acts as a single virtual catalog for all enterprise data sources including data lakes
At Data-centric Architecture Forum 2020 Thomas Cook, our Sales Director of AnzoGraph DB, gave his presentation "Knowledge Graph for Machine Learning and Data Science". These are his slides.
3. Accessing the right information is challenging
Diverse range of specialisations
Information-seeking behaviour
Information is siloed
Information hierarchy
5. Building a Linked Data Warehouse demo
[Architecture diagram: Excel reports and XML files are fed through an ETL platform into RDF management and a triple store; an OData4Sparql layer (SPARQL + OData) exposes the store to a model-driven UI, spanning the Linked Data Warehouse, data access and exploration.]
6. Linked Data and Model
• Traditional approaches try to identify how the data is to be "captured" upfront.
• You can do this with the linked data model, but we don't. Why?
• It always leads to "paralysis by analysis".
• You will miss so much, and take a huge amount of time doing it.
• You will find there is a huge amount of information and relationships you never would have thought of when starting from the model.
• Then there are tricks you can do to add huge value.
• The data model evolves very rapidly from the data and can be further tweaked at any time.
Let the data express itself
• Source by source, row by row, let the data tell you what it is describing: what it is, and what relationships and metadata it has.
• You'll find a lot more information that you simply couldn't describe in an RDBMS.
• Another source can add to an existing item without you even having to think.
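The "let the data express itself" idea can be sketched in plain Python (the data and field names here are hypothetical): each source emits subject-predicate-object triples describing whatever it knows, and a second source enriches an existing item with no schema change at all.

```python
def rdfize(source_rows, subject_key):
    """Turn flat rows into triples, letting each row state what it describes."""
    triples = set()
    for row in source_rows:
        subject = row[subject_key]
        for predicate, obj in row.items():
            if predicate != subject_key and obj is not None:
                triples.add((subject, predicate, obj))
    return triples

# Source 1: the matter system describes a matter.
matters = [{"id": "matter:42", "title": "Project Alpha", "client": "client:7"}]
# Source 2: the finance system adds facts about the *same* matter.
finance = [{"id": "matter:42", "billed": 125000, "currency": "GBP"}]

# A set union merges the sources on subject; no upfront model needed.
graph = rdfize(matters, "id") | rdfize(finance, "id")

# All facts about matter:42, regardless of which source supplied them:
facts = {(p, o) for s, p, o in graph if s == "matter:42"}
```

The point of the sketch is that neither source knows about the other; the shared subject identifier is the only agreement required.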
9. ETL & Linked Data Creation & Management
In4mium Talend modules
• Semantic modules ready to use through configuration in Talend
• No API knowledge required by users
• Range of modules (over 60) for all aspects of linked data creation and management
• Create fully semantic apps, or pick and mix with traditional aspects
• Works seamlessly with the existing Talend environment and modules
• Model-driven behaviours are now possible
• Easily add semantic technologies into existing service architectures
• All the benefits without the hassle
10. OData4Sparql – Simplifying integration
• Brings together the strength of a ubiquitous RESTful interface standard (OData) with the flexibility and federation ability of RDF/SPARQL.
• SPARQL/OData Interop: a proposed W3C interoperation proxy between OData and SPARQL (Kal Ahmed, 2013).
• Opens up many popular user-interface development frameworks and tools such as Kendo UI, SAPUI5, etc.
• Acts as a Janus point between application development and data sources.
• User-interface developers are not, and do not want to be, database developers. They want a standardised interface that abstracts away the database, even to the extent of what type of database it is: RDBMS, NoSQL, or RDF/SPARQL.
• By providing an OData4SPARQL server, any SPARQL data source is opened up to the C#/LINQ development world.
• Opens up many productivity tools, such as Excel/Power Query and SharePoint, as consumers of SPARQL data such as DBpedia, ChEMBL, ChEBI, BioPAX and any of the Linked Open Data endpoints.
• Microsoft has been joined by IBM and SAP in using OData as their primary interface method, which means many application developers will be familiar with OData as the means to communicate with a backend data source.
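To make the proxy idea concrete, here is a minimal sketch of translating a toy OData request into SPARQL. The function name, URI scheme and the single supported `$filter` form are all illustrative; a real OData4SPARQL server handles the full OData grammar.

```python
import re

def odata_to_sparql(entity_set, odata_filter=None):
    """Translate a toy OData request into a SPARQL SELECT query."""
    patterns = [f"?s a <urn:model:{entity_set}> ."]
    if odata_filter:
        # Support only the simplest form: "<property> eq '<value>'"
        m = re.fullmatch(r"(\w+) eq '([^']*)'", odata_filter)
        if not m:
            raise ValueError("unsupported $filter expression")
        prop, value = m.groups()
        patterns.append(f'?s <urn:model:{prop}> "{value}" .')
    return "SELECT ?s WHERE { " + " ".join(patterns) + " }"

# GET /Clients?$filter=country eq 'UK' becomes a SPARQL query:
query = odata_to_sparql("Clients", "country eq 'UK'")
```

The OData client never sees SPARQL; it just requests an entity set with a filter and the proxy rewrites that into graph patterns.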
11. Model Driven UI
[Side-by-side screenshots: the Linklaters data model and the Northwind data model, each showing its Things, the Relationships between Things, and a sample query.]
13. Strings to Things to Facts
Clicking on a 'thing' displays a 'Lens' about that 'thing', showing different fragments that present facts about it.
• The 'About' fragment shows the most relevant information; compare with the Google Knowledge Graph.
• The 'Persons Involved' fragment lists all persons involved with the matter.
• The 'Financial Summary' fragment calculates a financial summary.
• ...and we can find associated deal 'things'. If we want more details about any 'thing', we can now navigate to its 'lens'.
14. Lens Discovery
• Navigating through 'Gerald Grant', the managing partner for the matter, takes us to his Lens.
• Navigating through the associated deal takes us to that deal's Lens.
• Or show the Lens on the client of the matter.
• One is not limited to facts within the application: in the case of a client, we can navigate to their Companies House page (or it could have been D&B, LinkDocs, etc.).
15. Composing Questions
Advanced searches can be selected from the list, which then displays the query in a different format that allows better control over the search.
The advanced search allows conditions to be added that link to other 'things' or limit the values of 'facts' about the associated 'thing'. This allows much more precise searches to be executed.
16. OData integration with Excel Power Query/Pivot
[Diagram: OData4Sparql serves OData to Power Query and Power Pivot, which in turn feed Power View, Power Map, pivots, charts & grids, Tableau, etc.]
Power Query (data grabber/shaper)
• Build queries and utilise $expand to traverse the graph
• Limited data transformation can be incorporated into the queries
• Create multiple views
Power Pivot (self-service BI)
• Integrate across Power Queries and other sources to build ROLAP models
• Explore the model with pivot tables
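The kind of request Power Query issues when traversing the graph with `$expand` can be sketched as URL construction (the endpoint and property names below are hypothetical):

```python
from urllib.parse import urlencode

def odata_url(base, entity_set, expand=None, filter_=None, select=None):
    """Build an OData query URL; $expand follows relationships to linked things."""
    params = {}
    if expand:
        params["$expand"] = ",".join(expand)
    if filter_:
        params["$filter"] = filter_
    if select:
        params["$select"] = ",".join(select)
    qs = urlencode(params)
    return f"{base}/{entity_set}" + (f"?{qs}" if qs else "")

# One request pulls matters together with their clients and deals:
url = odata_url(
    "https://example.org/odata",  # hypothetical OData4Sparql endpoint
    "Matters",
    expand=["Client", "Deals"],
    filter_="Status eq 'Open'",
)
```

Because the traversal is expressed in the URL, Power Query can fetch a pre-joined shape of the graph in a single call instead of stitching entity sets together client-side.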
18. Linked Data has delivered
• Elimination of silos through the creation of a logical data warehouse that is extensible across internal and external data sources
• Enabled "find and explore" information-seeking behaviours
• Separation of data modelling from integration provides for easy addition of internal and external data
• Ability to support a diverse range of specialised domain views onto the data
• Introduces a Service-Oriented Data Architecture, simplifying application development
• Based on W3C web standards, providing future-proofing and protection of the firm's IP (data models)
19. Building a Linked Data Warehouse pilot
[Architecture diagram as in the demo: nine sources (Matter, Time, People, Financials, Deal Finder, Client Book, Client Engage, K_Docs and SAP) are fed through the ETL platform into RDF management and the triple store, exposed via OData4Sparql (SPARQL + OData) to the model-driven UI.]
One FTE (2 x 0.5) and nine months delivered:
• Integrated 3 years and 9 months of data from 9 sources
• 24 million triples
• 62 Things (People, Projects, Clients, etc.)
• 127 Relationships between Things
• 223 Data attributes
In this picture we show just two In4mium modules being used alongside standard Talend modules.
The workflow shows filters, transformations and lookup joins before the data is converted to RDF.
It is the RDFizer that converts the standard data on the flow to RDF.
The RDF can then be managed in triple stores or, as in this case, written to files.
The RDFizer is itself model-driven, as it uses an RDF R2RML configuration file.
The Talend job can be deployed as a standalone Java executable or as a web service within your architecture.
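For illustration, an R2RML configuration of the kind such an RDFizer consumes might look like the fragment below (the table, column and vocabulary names are hypothetical; R2RML itself is a W3C Recommendation):

```turtle
@prefix rr: <http://www.w3.org/ns/r2rml#> .
@prefix ex: <http://example.org/model/> .

<#MatterMapping>
    rr:logicalTable [ rr:tableName "MATTERS" ] ;
    rr:subjectMap [
        rr:template "http://example.org/matter/{MATTER_ID}" ;
        rr:class ex:Matter
    ] ;
    rr:predicateObjectMap [
        rr:predicate ex:title ;
        rr:objectMap [ rr:column "TITLE" ]
    ] .
```

Because the mapping lives in configuration rather than code, the same Talend job can RDFize a new source by swapping in a new mapping file.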
Foundation platform: Talend
• Gartner Magic Quadrant
• Open Studio and enterprise versions
• Composable visual Java development environment
• Solution frameworks for integration, BPM, MDM, ESB, data quality and big data
Configuration
• Thousands of modules to configure into applications: ETL, Amazon Cloud, Hadoop, BI
• Modules are Java injection routines
• Well-supported community
• Highly scalable, efficient code generation
• Deployable within service architectures
• Adds to your existing architecture, not a rip and replace!
• BUT: lacks any knowledge of semantic data handling and management