A set of practical strategies and techniques for tackling vagueness in data modeling and creating models that are semantically more accurate and interoperable.
Troubleshooting and Optimizing Named Entity Resolution Systems in the Industry - Panos Alexopoulos
Named Entity Resolution (NER) is an information extraction task that involves detecting mentions of named entities within texts and mapping them to their corresponding entities in a given knowledge resource. Systems and frameworks for performing NER have been developed by both academia and industry, with different features and capabilities. Nevertheless, what all approaches have in common is that satisfactory performance in one scenario is not a trustworthy predictor of performance in another, because each scenario has different characteristics (target entities, input texts, domain knowledge, etc.). With that in mind, we describe a metric-based Diagnostic Framework that can be used to identify the causes behind the low performance of NER systems in industrial settings and to take appropriate actions to increase it.
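A metric-based diagnosis naturally starts from link-level measurement. As an illustrative sketch (not the framework from the talk), precision and recall over a NER system's mention-to-entity links can be computed like this; the mentions and entity IDs are purely hypothetical:

```python
# Toy sketch: gold and predicted links are sets of (mention, entity_id) pairs.
def ner_metrics(gold, predicted):
    """Link-level precision, recall, and F1 for entity resolution output."""
    tp = len(gold & predicted)  # links the system got exactly right
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = {("Paris", "Q90"), ("Texas", "Q1439"), ("Jordan", "Q810")}
pred = {("Paris", "Q90"), ("Jordan", "Q41421")}  # "Jordan" linked wrongly
p, r, f1 = ner_metrics(gold, pred)
print(round(p, 2), round(r, 2))  # → 0.5 0.33
```

Low precision and low recall point to different causes (bad disambiguation versus missed mentions), which is the kind of distinction a diagnostic framework exploits.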
The phenomenon of vagueness, manifested by terms and concepts like Tall, Red, Modern, etc., is quite common in human knowledge, and it relates to our inability to precisely determine the extensions of such terms due to their blurred applicability boundaries. In the context of Ontologies and the Semantic Web, vagueness is primarily treated by means of Fuzzy Ontologies, namely extensions of classical ontologies that apply truth degrees to vague ontological elements in an effort to quantify their vagueness and reason with it. Nevertheless, while the community has proposed a number of fuzzy conceptual formalisms and fuzzy ontology language extensions for representing vagueness in ontologies, the methodological issues entailed in the development of such ontologies have been rather neglected. In this talk we position vagueness within the overall lifecycle of semantic information management, and we present IKARUS-Onto, a methodology for engineering fuzzy ontologies that covers all typical ontology development stages, from specification to validation.
Learning Vague Knowledge From Socially Generated Content in an Enterprise Fra... - Panos Alexopoulos
The advent and wide proliferation of the Social Web in recent years has promoted the concept of social interaction as an important influencing factor in the way enterprises and organizations conduct business. Among the fields influenced is Enterprise Knowledge Management, where the adoption of social computing approaches aims at increasing, and maintaining at high levels, users' active participation in the organization's knowledge management activities. An important challenge towards this is achieving the right balance between the informality of socially generated data and the required formality of enterprise knowledge. In this context, we focus on the problem of mining vague knowledge from social content generated within an enterprise framework, and we propose a learning framework based on microblogging and fuzzy ontologies.
The emergence in recent years of initiatives like Linked Open Data (LOD) has led to a significant increase in the amount of structured semantic data on the Web. In this paper we argue that the shareability and wider reuse of such data can often be hampered by vagueness within it, as vagueness makes the data's meaning less explicit. As a way to reduce this problem, we propose a vagueness metaontology that explicitly represents the nature and characteristics of vague elements within semantic data.
This presentation introduces text analytics, its applications, and the various tools and algorithms used in the process. Some of the important ones are listed below:
- Decision trees
- SVM
- Naive-Bayes
- K-nearest neighbours
- Artificial Neural Networks
- Fuzzy C-Means
- Latent Dirichlet Allocation
This is an introduction to text analytics for advanced business users and IT professionals with limited programming expertise. The presentation will go through different areas of text analytics as well as provide some real-world examples that help make the subject matter a little more relatable. We will cover topics like search engine building, categorization (supervised and unsupervised), clustering, NLP, and social media analysis.
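Of the tools listed above, Naive-Bayes is simple enough to sketch end to end. The snippet below is a toy illustration with made-up training sentences, not material from the presentation: it counts words per class and picks the label with the highest smoothed log-probability.

```python
from collections import Counter
import math

def train_nb(docs):
    """docs: list of (tokens, label) pairs. Returns class counts,
    per-class word counts, and the vocabulary."""
    labels = Counter(lbl for _, lbl in docs)
    words = {lbl: Counter() for lbl in labels}
    vocab = set()
    for tokens, lbl in docs:
        words[lbl].update(tokens)
        vocab.update(tokens)
    return labels, words, vocab

def classify(tokens, labels, words, vocab):
    """Pick the label maximizing log P(label) + sum of log P(token | label)."""
    total = sum(labels.values())
    best, best_lp = None, float("-inf")
    for lbl, n in labels.items():
        lp = math.log(n / total)
        denom = sum(words[lbl].values()) + len(vocab)
        for t in tokens:
            lp += math.log((words[lbl][t] + 1) / denom)  # Laplace smoothing
        if lp > best_lp:
            best, best_lp = lbl, lp
    return best

docs = [("great product love it".split(), "pos"),
        ("terrible waste of money".split(), "neg"),
        ("love this great value".split(), "pos"),
        ("broken and terrible".split(), "neg")]
model = train_nb(docs)
print(classify("love this product".split(), *model))  # → pos
```

The same counting-and-smoothing idea underlies the categorization tasks the presentation covers, just at much larger scale.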
Deciphering AI - Unlocking the Black Box of AIML with State-of-the-Art Techno... - Analytics India Magazine
Most organizations understand the predictive power and the potential gains from AIML, but AI and ML are still a black-box technology for them. While deep learning and neural networks can provide excellent inputs to businesses, leaders are challenged to use them because of the complete blind faith required to ‘trust’ AI. In this talk we will use the latest technological developments from researchers, the US defense department, and industry to unbox the black box and give businesses a clear understanding of the policy levers they can pull, why, and by how much, to make effective decisions.
Introductory presentation on Explainable AI, setting out its main motivations and importance. We briefly describe the main techniques available as of March 2020 and share many references to allow the reader to continue their studies.
Module 9: Natural Language Processing Part 2 - Sara Hooker
Delta Analytics is a 501(c)3 non-profit in the Bay Area. We believe that data is powerful, and that anybody should be able to harness it for change. Our teaching fellows partner with schools and organizations worldwide to work with students excited about the power of data to do good.
Welcome to the course! These modules will teach you the fundamental building blocks and the theory necessary to be a responsible machine learning practitioner in your own community. Each module focuses on accessible examples designed to teach you about good practices and the powerful (yet surprisingly simple) algorithms we use to model data.
To learn more about our mission or provide feedback, take a look at www.deltanalytics.org. If you would like to use this material to further our mission of improving access to machine learning education, please reach out to inquiry@deltanalytics.org.
Get hands-on with Explainable AI at the Machine Learning Interpretability (MLI) Gym! - Sri Ambati
This meetup took place in Mountain View on January 24th, 2019.
Description:
With effort and contributions from researchers and practitioners in academia and industry, Machine Learning Interpretation has become a young sub-field of ML. However, the norms around its definition and understanding are still in their infancy, and numerous different approaches are emerging rapidly. There also seems to be a lack of a consistent explanation framework for evaluating and benchmarking different algorithms against the interpretability, completeness, and consistency of their explanations.
The idea of the gym is to provide a controlled interactive environment for all forms of machine learning algorithms, initially focusing on supervised predictive modeling problems, to allow analysts and data scientists to explore, debug, and generate insightful understanding of models through:
1. Model Validation: ways to explore and validate black-box ML systems, enabling model comparison both globally and locally, and identifying biases in the training data through interpretation.
2. What-if Analysis: an interactive environment where communication can happen, i.e. learning through interaction. Users can conduct "what-if" analysis to see the effect of single or multiple features and their interactions.
3. Model Debugging: ways to analyze the misbehavior of a model by exploring counterfactual examples (adversarial examples and training).
4. Interpretable Models: the ability to build natively interpretable models, with the goal of simplifying complex models to enable better understanding.
The central concept of the MLI Gym is an interactive environment where one can explore and simulate variations in the world (the world after a model is operationalized) beyond point estimates of the usual model metrics, e.g. ROC-AUC, confusion matrix, RMSE, R2 score, and others.
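The model-validation idea above, probing which inputs a black-box model actually relies on, can be sketched with permutation importance. This is a toy, model-agnostic example (not H2O's or Skater's implementation): shuffle one feature's column and measure how much the error grows.

```python
import random

def mse(model, X, y):
    """Mean squared error of a callable model on rows X with targets y."""
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, seed=0):
    """Increase in error when one feature's column is shuffled:
    a model-agnostic signal of how much the model relies on it."""
    rng = random.Random(seed)
    col = [row[feature] for row in X]
    rng.shuffle(col)
    X_perm = [list(row) for row in X]
    for row, v in zip(X_perm, col):
        row[feature] = v
    return mse(model, X_perm, y) - mse(model, X, y)

model = lambda x: 3 * x[0]           # toy "black box": ignores feature 1
X = [[i, i % 5] for i in range(20)]
y = [model(x) for x in X]
print(permutation_importance(model, X, y, 0) > 0)   # → True
print(permutation_importance(model, X, y, 1) == 0)  # → True
```

Shuffling the feature the model depends on hurts its score; shuffling the ignored one changes nothing, which is exactly the kind of global comparison the gym aims to make interactive.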
Speaker's Bio:
Pramit is a Lead Data Scientist at H2O.ai. His areas of interest include building statistical and machine learning models (Bayesian and frequentist modeling techniques) to help businesses realize their data-driven goals.
Currently, he is exploring "Model Interpretation" as a means to efficiently understand the true nature of predictive models and to enable model robustness and security. He believes effective model inference coupled with adversarial training could lead to trustworthy models with known blind spots. He has started an open-source project, Skater (https://github.com/datascienceinc/Skater), to address the need for model inference. (The project is still in its early stages of development, but check it out; he is always eager for feedback.)
This talk was presented by Mrs. Dorothea Wisemann, Department Head of Cognitive Computing & Industry Solutions at IBM Research - Zurich, as a keynote during Data Science Conference 4.0.
More info about Data Science Conference:
Website: http://datasciconference.com
Instagram: https://www.instagram.com/datasciconf/
Facebook: https://www.facebook.com/DataSciConference/
Twitter: https://twitter.com/datasciconf
Flickr: https://www.flickr.com/photos/data-science-conference
These slides cover the final defense presentation for my Doctorate degree. Th... - Eric Brown
These slides cover the final defense presentation for my Doctorate degree. The topic: Analysis of Twitter Messages for Sentiment and Insight for use in Stock Market Decision Making.
Twitter Sentiment & Investing - modeling stock price movements with twitter s... - Eric Brown
In this presentation, I provide an overview of my research into using Twitter sentiment and message volume as inputs for modeling stock price movements. A quick-and-dirty linear regression model using Twitter sentiment, the number of tweets per day, the VIX closing price, and the VIX price change delivers a simple model for the S&P 500 SPY ETF with an accuracy of 57% over 6 months (tested on out-of-sample data). The model was built using data from July 11, 2011 to August 11, 2011.
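This is not the author's actual model or data, but the general shape of the approach, reduced to a single sentiment feature with made-up numbers, looks like this: fit next-day price moves on a daily sentiment score by ordinary least squares, then score directional accuracy.

```python
def ols(xs, ys):
    """Ordinary least squares for one feature: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Made-up daily sentiment scores and next-day SPY % moves (not real data)
sentiment = [0.6, -0.2, 0.1, 0.8, -0.5, 0.3]
moves = [0.4, -0.1, 0.2, 0.5, -0.3, 0.1]
b1, b0 = ols(sentiment, moves)
preds = [b0 + b1 * s for s in sentiment]
accuracy = sum((p > 0) == (m > 0) for p, m in zip(preds, moves)) / len(moves)
print(round(accuracy, 2))  # directional accuracy on the toy in-sample data
```

The real model adds more regressors (tweet volume, VIX level and change) and, crucially, is evaluated out-of-sample, where accuracy falls to the reported 57%.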
Don't let data get in the way of a good story - mark madsen
Storytelling is not about raising someone’s IQ, it’s about raising their blood pressure. Stories engage emotions rather than intellect, making “storytelling with data” a poor metaphor for data visualization when our goal is to communicate clearly.
People are often confused or misled by "story", thinking they need a classical story structure with protagonists, action, and resolution, when the job may be simpler, or more complicated. Some of the storytelling tools and suggestions vendors promote would get you kicked out of your boss's office if you used them without taking into account their goals and context.
Narrative is what we are really talking about, not story. We need to focus our attention on narrative techniques rather than “story” and its forced linear structure. This means understanding why we want to communicate: is it to explain, to build shared understanding, to convince others that our interpretation is the right one?
We use visualization as a tool for many different purposes, communication being one. The idea of narratives with data is a good one, but not all narrative is story. The purpose of this talk is to provide clarity around the goals of communicating with data and to provide a goal-oriented framework that escapes the bad metaphorical frame imposed by “storytelling”.
"I don't trust AI": the role of explainability in responsible AIErika Agostinelli
This tech talk was part of the Women in Data Science 2021 Bristol event. An introduction to Explainable AI: what to consider when developing an explainability strategy, the state-of-the-art techniques and open-source tools used in the field, and concrete examples. To see the recording: https://www.crowdcast.io/e/o4gjxatp
Step Up Your Survey Research - Dawn of the Data Age Lecture Series - Luciano Pesci, PhD
Most surveys are terrible. From poorly designed questions, to incoherent survey flow, to useless results, it’s no wonder data-driven organizations have so little faith in survey research. But this isn’t the fault of the tool, it’s because most surveys are built without adhering to some basic best practices, which once fixed can transform any survey from a zero to a hero. This lecture will show you how to create data-science quality surveys that provide unique and immediately actionable insight about your customers, competitors, and marketplace.
This lecture will:
- Explain the data science approach to survey layout and question design.
- Show how to increase response and completion rates through iterative testing.
- Demonstrate how to link survey results to other data sources to enrich your analysis.
You can watch this lecture here: https://youtu.be/WuBenXuVzqc
Module 4: Model Selection and Evaluation - Sara Hooker
To learn more about our mission or provide feedback, take a look at www.deltanalytics.org.
Fuzzy modeling is a powerful approach founded by Zadeh for modeling complex and uncertain systems [2]. Fuzzy logic has a distinctive advantage where a precise definition of a control process is unachievable. Fuzzy models can establish a relationship between input and output variables by employing predefined rules. The technique provides simple solutions based on natural-language statements. Fuzzy logic takes inputs and outputs in the form of fuzzy sets, where each set contains elements with varying degrees of membership. A fuzzy set thus maps real numbers to membership degrees ranging from 0 to 1. Fuzzy rules relate input variables to output variables; these rules represent the expert knowledge in the system. Indeed, the intuition behind fuzzy logic is that it works with perception-based data instead of measurement-based data, which are crisp and numeric. Hence, it tries to capture how humans use perceptions of time, direction, speed, shape, possibility, likelihood, truth, and other attributes of physical and mental objects. Perceptions in this sense are inherently imprecise compared to crisp values: for example, a human might express an intuition about the weather as being "not very hot", while a sensor would read the heat in degrees and give us a crisp value. Perceptions are therefore highly subjective and reflect the partiality of human concepts.
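The mapping from crisp readings to membership degrees described above can be sketched with a triangular membership function. The set boundaries below are illustrative, not from any of the cited papers:

```python
def triangular(x, a, b, c):
    """Degree of membership in a triangular fuzzy set that rises from a,
    peaks at b, and falls back to zero at c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# A fuzzy set "warm", roughly 15-35 degrees C, fully true at 25:
for t in (10, 20, 25, 30):
    print(t, triangular(t, 15, 25, 35))  # degrees 0.0, 0.5, 1.0, 0.5

# A rule "IF temperature is warm AND humidity is high THEN ..." uses
# min() for AND, so the rule fires to the weaker of the two degrees:
fired = min(triangular(28, 15, 25, 35), triangular(0.7, 0.4, 0.8, 1.0))
print(fired)  # → 0.7
```

In a full Mamdani-style controller, the fired degrees of all rules are aggregated and defuzzified back into a single crisp output.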
In 2001, Prof. Zadeh proposed his computational theory of perceptions (CTP), in which the objects of computation are words and propositions drawn from natural language rather than crisp numeric values. The theory arose from the lack of a methodology for reasoning and computing with perceptions rather than measurements. Hence, the CTP laid the ground for allowing a computer to make subjective judgments, often referred to as perceptual computing.
E.H. Mamdani, Application of fuzzy algorithms for control of simple dynamic plant, in: Proceedings of the Institution of Electrical Engineers, IET, 1974, pp. 1585-1588.
Zadeh, Lotfi A. "Fuzzy sets." Information and control 8, no. 3 (1965): 338-353.
Zadeh, Lotfi A. "A new direction in AI: Toward a computational theory of perceptions." AI magazine 22, no. 1 (2001): 73.
Semantic interoperability is often an afterthought. QSi proposes a radical shift in how we currently view the nature of, and relationship between, Information, Language, and Data. In the process, semantic interoperability becomes an emergent characteristic of data management.
Understanding Users Through Ethnography and Modeling - STC Summit 2010 - Jim Jarrett
A 90-minute training for experienced practitioners in best practices for analyzing and modeling qualitative user research, including KJ Analysis, personas, and scenarios. Tips, tricks, and techniques included. Presented at the STC Summit 2010 on 3 May 2010.
The Role of Agent-Based Modelling in Extending the Concept of Bounded Rationa... - Edmund Chattoe-Brown
The Judgement and Decision Making Research Group in the Department of Neuroscience, Psychology and Behaviour, University of Leicester, kindly asked me to give a seminar on 25 January 2023 on "The Role of Agent-Based Modelling in Extending the Concept of Bounded Rationality". It discusses the challenges that subjective accounts pose to different research methods, and it models a situation where people can be rational but communicate and have incomplete information about both the number of choices and their payoffs. The model is based on this paper: https://doi.org/10.1007/s11299-009-0060-7 One interesting result is that, without coercion or mass media, minority groups may be disadvantaged in their decision making by hegemonic discourse.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Connector Corner: Automate dynamic content and events by pushing a button - DianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
- Create a campaign using Mailchimp with merge tags/fields
- Send an interactive Slack channel message (using buttons)
- Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
- Your campaign sent to target colleagues for approval
- If the "Approve" button is clicked, a Jira/Zendesk ticket is created for the marketing design team
- But if the "Reject" button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality - Inflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
DevOps and Testing slides at DASA Connect - Kari Kakkonen
Slides by me and Rik Marselis from the DASA Connect conference on 30 May 2024. We discuss what testing is, then what agile testing is, and finally what Testing in DevOps means. We also ran a lovely workshop with the participants, exploring different ways to think about quality and testing in different parts of the DevOps infinity loop.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf - 91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
The Art of the Pitch: WordPress Relationships and Sales - Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Generating a custom Ruby SDK for your web service or Rails API using Smithy - g2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...
How many truths can you handle?
1. Panos Alexopoulos
Data and Knowledge Technologies
Professional
http://www.panosalexopoulos.com
p.alexopoulos@gmail.com
@PAlexop
How many truths can you handle?
Strategies and techniques for handling vagueness in
conceptual data models
7. Topics covered
● UNDERSTANDING VAGUENESS: What it is and how it differs from other phenomena
● VAGUENESS RAMIFICATIONS: Why you should care
● DETECTING VAGUENESS: Guidelines and (automatic) techniques
● MEASURING VAGUENESS: Metrics and methods
● TACKLING VAGUENESS: Approaches and trade-offs
9. The Sorites Paradox
● 1 grain of wheat does not make a heap.
● If 1 grain doesn’t make a heap, then 2 grains
don’t.
● If 2 grains don’t make a heap, then 3 grains
don’t.
● …
● If 999,999 grains don’t make a heap, then 1
million grains don’t.
● Therefore, 1 million grains don’t make a
heap!
10. What is vagueness
“Vagueness is a semantic
phenomenon where predicates admit
borderline cases, namely cases where
it is not determinately true that the
predicate applies or not”
—Shapiro 2006
11. What is not vagueness
● AMBIGUITY: e.g., “Last week I visited Tripoli”
● INEXACTNESS: e.g., “My height is between 165 and 175 cm”
● UNCERTAINTY: e.g., “The temperature in Amsterdam right now might be 15 degrees”
12. Vagueness Types
● QUANTITATIVE: Borderline cases stem from the lack of precise boundaries along some measurable dimension (e.g., “Bald”, “Tall”, “Near”)
● QUALITATIVE: Borderline cases stem from not being able to decide which dimensions and conditions are sufficient and/or necessary for the predicate to apply (e.g., “Religion”, “Expert”)
21. How to detect vagueness
● Identify which of your data model’s elements might be vague.
● Investigate whether these elements are indeed vague.
● Investigate and determine potential dimensions and applicability contexts.
22. Where to look
● Classes: E.g. “Tall Person”, “Strategic
Customer”, “Experienced Researcher”
● Relations and attributes: E.g., “hasGenre”,
“hasIdeology”
● Attribute values: E.g., the “price” of a
restaurant could take as values the vague
terms “cheap”, “moderate” and “expensive”
23. What to look for
● Vague terms in names and definitions
● Disagreements and inconsistencies among
data modelers, domain experts, and data
stewards during model development and
maintenance
● Disagreements and inconsistencies in user
feedback during model application.
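The manual inspection described above can be partially automated. A minimal sketch in Python, assuming a small hand-built lexicon of vague terms (both the lexicon and the element names below are illustrative):

```python
# Sketch: flag data model elements whose names contain terms from a
# (hypothetical) lexicon of vague adjectives. Real detection would use
# a richer lexicon or a trained classifier.
VAGUE_TERMS = {"tall", "strategic", "experienced", "cheap", "moderate",
               "expensive", "low", "expert", "senior", "modern"}

def split_camel_case(name):
    """Split identifiers like 'StrategicCustomer' into lowercase words."""
    words, current = [], ""
    for ch in name:
        if ch.isupper() and current:
            words.append(current.lower())
            current = ch
        else:
            current += ch
    if current:
        words.append(current.lower())
    return words

def flag_vague_elements(element_names):
    """Return the element names that contain a known vague term."""
    return [n for n in element_names
            if set(split_camel_case(n)) & VAGUE_TERMS]

print(flag_vague_elements(["StrategicCustomer", "Adult", "TallPerson"]))
# → ['StrategicCustomer', 'TallPerson']
```

Flagged elements are only candidates; as the slide notes, each one still needs to be investigated to confirm it is indeed vague.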
24. Examples from WordNet
Vague senses:
● Yellowish: of the color intermediate between green and orange in the color spectrum; of something resembling the color of an egg yolk
● Impenitent: impervious to moral persuasion
● Notorious: known widely and usually unfavorably
Non-vague senses:
● Compound: composed of more than one part
● Biweekly: occurring every two weeks
● Outermost: situated at the farthest possible point from a center
25. Examples from the Citation Ontology
Vague relations:
● plagiarizes: A property indicating that the author of the citing entity plagiarizes the cited entity, by including textual or other elements from the cited entity without formal acknowledgement of their source
● citesAsAuthority: The citing entity cites the cited entity as one that provides an authoritative description or definition of the subject under discussion
● supports: The citing entity provides intellectual or factual support for statements, ideas or conclusions presented in the cited entity
Non-vague relations:
● sharesAuthorInstitutionWith: Each entity has at least one author that shares a common institutional affiliation with an author of the other entity
● retracts: The citing entity constitutes a formal retraction of the cited entity
● includesExcerptFrom: The citing entity includes one or more excerpts from the cited entity
27. Vagueness spread
● The ratio of model elements (classes, relations, datatypes, etc.) that are vague
● A data model with a high vagueness spread is less explicit and shareable than one with a low spread
28. Vagueness intensity
● The degree to which the model’s users disagree
on the validity of the (potential) instances of the
elements.
● The higher this disagreement is for an element,
the more problems the element is likely to cause.
● Calculation:
○ Consider a sample set of vague element
instances
○ Have human judges denote whether and to
what extent they believe these instances are
valid
○ Measure the inter-annotator agreement between the judges (e.g., using Cohen’s kappa)
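The calculation above can be sketched as follows; the two judges’ binary validity labels are illustrative, and a real measurement would use many more instances (and possibly graded judgments):

```python
# Sketch: measure vagueness intensity as inter-annotator agreement
# between two judges who labeled the same candidate instances of a
# vague element as valid (1) or invalid (0), using Cohen's kappa.
from collections import Counter

def cohens_kappa(judge_a, judge_b):
    n = len(judge_a)
    # Observed agreement: fraction of instances where the judges agree.
    observed = sum(a == b for a, b in zip(judge_a, judge_b)) / n
    # Expected agreement if both judges labeled at random with the
    # same label frequencies.
    freq_a, freq_b = Counter(judge_a), Counter(judge_b)
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Judgments on five candidate instances of a vague class:
a = [1, 1, 0, 0, 1]
b = [1, 0, 0, 0, 1]
print(round(cohens_kappa(a, b), 3))  # → 0.615
```

The lower the kappa for an element’s instances, the higher that element’s vagueness intensity.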
31. Vagueness-aware data models
Data models whose vague elements
are accompanied by meta-information
that describes the nature and
characteristics of their vagueness in
an explicit way.
32. What to make explicit
● VAGUENESS EXISTENCE: E.g., “Tall Person” is vague and “Adult” is non-vague
● VAGUENESS TYPE: E.g., “Low Budget” has quantitative vagueness and “Expert Consultant” qualitative
● VAGUENESS DIMENSIONS: E.g., “Strategic Client” is vague in the dimension of the generated revenue
● VAGUENESS PROVENANCE: E.g., “Strategic Client” is vague in the dimension of the generated revenue according to the Financial Manager
● APPLICABILITY CONTEXTS: E.g., “Strategic Client” is vague in the dimension of the generated revenue in the context of Financial Reporting
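One lightweight way to attach this meta-information to a model element is a simple annotation record; the field names below are illustrative, not a standard vocabulary:

```python
# Sketch: a vagueness annotation record for a model element, covering
# existence, type, dimensions, provenance, and applicability contexts.
# Field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class VaguenessAnnotation:
    is_vague: bool                       # vagueness existence
    vagueness_type: str = ""             # "quantitative" or "qualitative"
    dimensions: list = field(default_factory=list)
    provenance: str = ""                 # who asserts the vagueness
    contexts: list = field(default_factory=list)

strategic_client = VaguenessAnnotation(
    is_vague=True,
    vagueness_type="quantitative",
    dimensions=["generated revenue"],
    provenance="Financial Manager",
    contexts=["Financial Reporting"],
)
print(strategic_client.dimensions)  # → ['generated revenue']
```

In an ontology setting the same information could be expressed as annotation properties on the vague element.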
34. Truth contextualization
● The same statement in the data model can be true
in some contexts and false in other contexts.
● E.g., “Stephen Curry is short” is true in the context
of “Basketball Playing” but false in all others.
● Potential contexts:
○ Cultures
○ Locations
○ Industries
○ Processes
○ Demographics
○ ...
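A minimal sketch of truth contextualization: the same statement stored with a truth value per context (the contexts and values below are illustrative):

```python
# Sketch: contextualized truth values for a vague statement, keyed by
# (statement, context). Contexts are illustrative.
truth_by_context = {
    ("Stephen Curry is short", "Basketball Playing"): True,
    ("Stephen Curry is short", "General Population"): False,
}

def is_true(statement, context):
    # Returns None when the statement has no recorded value in that context.
    return truth_by_context.get((statement, context))

print(is_true("Stephen Curry is short", "Basketball Playing"))  # → True
```

The point is that the model no longer asserts a single truth value; every assertion is qualified by the context in which it holds.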
36. When to contextualize?
● When vagueness intensity is high and consensus
is impossible
● When you are able to identify truth contexts
● When the applications that use the model can actually handle the contexts.
● When contextualization actually manages to reduce disagreements and has a positive effect on the model’s applications.
● When the contextualization benefits outweigh the
context management overhead.
37. Truth fuzzification
● The basic idea is that we can assign a real number
to a vague statement, within a range from 0 to 1.
○ A value of 1 would mean that the statement
is completely true
○ A value of 0 that it is completely false
○ Any value in between that it is “partly true” to
a given, quantifiable extent.
● For example:
○ “John is an instance of YoungPerson to a
degree of 0.8”
○ “Google has competitor Microsoft to a degree of 0.4”.
● The premise is that fuzzy degrees can reduce the
disagreements around the truth of a vague
statement.
38. Truth degrees are not probabilities
● A probability statement quantifies the likelihood that an event or fact whose truth conditions are well defined will come true
○ e.g., “it will rain tomorrow with a probability of 0.8”
● A fuzzy statement quantifies the extent to which an event or fact whose truth conditions are not well defined is perceived as true
○ e.g., “It’s now raining to a degree of 0.6”
● That’s why the two are supported by different mathematical frameworks, namely probability theory and fuzzy logic
39. What fuzzification involves
1. Detect and analyze all vague elements in your model
2. Decide how to fuzzify each element
3. Harvest truth degrees
4. Assess fuzzy model quality
5. Represent fuzzy degrees
6. Apply the fuzzy model
40. Fuzzification options
● The number and kind of fuzzy degrees you
need to acquire for your model’s vague
elements depend on the latter’s vagueness
type and dimensions.
● If your element has quantitative vagueness
in one dimension, then all you need is a
fuzzy membership function that maps
numerical values of the dimension to fuzzy
degrees in the range [0,1]
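For example, a piecewise-linear membership function for “Tall Person” over the height dimension might look like the sketch below; the 170/190 cm thresholds are hypothetical and would be calibrated per application:

```python
# Sketch: a piecewise-linear fuzzy membership function for a class with
# quantitative vagueness in one dimension ("Tall Person" over height).
# The lower/upper thresholds are hypothetical.
def tall_degree(height_cm, lower=170.0, upper=190.0):
    """Map a height in cm to a truth degree in [0, 1]."""
    if height_cm <= lower:
        return 0.0
    if height_cm >= upper:
        return 1.0
    return (height_cm - lower) / (upper - lower)

print(tall_degree(165), tall_degree(180), tall_degree(195))  # → 0.0 0.5 1.0
```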
43. Fuzzification options
● If an element has quantitative vagueness in more than one dimension, then you can either:
○ Define a multivariate fuzzy
membership function
○ Define one membership function per
dimension and then combine these via
some fuzzy logic operation, like fuzzy
conjunction or fuzzy disjunction
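The second option can be sketched with the Gödel t-norm (min) as one common fuzzy conjunction and max as the corresponding disjunction; the per-dimension degrees below are illustrative:

```python
# Sketch: combining per-dimension degrees for an element with
# quantitative vagueness in two dimensions, e.g. a "Strategic Client"
# that is vague in both revenue and contract duration.
def fuzzy_and(*degrees):
    return min(degrees)   # Goedel t-norm, one common fuzzy conjunction

def fuzzy_or(*degrees):
    return max(degrees)   # the corresponding fuzzy disjunction

revenue_degree, duration_degree = 0.8, 0.4
print(fuzzy_and(revenue_degree, duration_degree))  # → 0.4
print(fuzzy_or(revenue_degree, duration_degree))   # → 0.8
```

Other t-norms (e.g., product) are equally valid; the choice depends on how strictly the dimensions should jointly constrain the result.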
46. Fuzzification options
● A third option is to just define one direct degree
per statement.
○ “John is tall to a degree of 0.8”
○ “Maria is expert in data modeling to a degree
of 0.6”
● This approach makes sense when:
○ Your element is vague in too many dimensions and you cannot find a proper membership function, or
○ The element’s vagueness is qualitative and, thus, you have no dimensions to use.
● The drawback is that you will have to harvest a lot
of degrees!
47. Harvesting truth degrees
● Remember that vague statements provoke
disagreements and debates among people or
even among people and systems.
● To generate fuzzy degrees for these statements you practically need to capture and quantify these disagreements.
● How to capture:
○ Ask people directly
○ Ask people indirectly
○ Mine from data
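When asking people directly, one simple (illustrative) way to turn the captured votes into a degree is averaging the validity judgments:

```python
# Sketch: derive a truth degree for a vague statement by averaging
# judges' validity votes. Votes may be binary or graded in [0, 1];
# weighting judges differently is another option.
def harvest_degree(votes):
    """votes: iterable of 0/1 (or graded [0, 1]) judgments."""
    votes = list(votes)
    return sum(votes) / len(votes)

# Seven judges asked directly whether "John is a YoungPerson":
print(harvest_degree([1, 1, 1, 0, 1, 1, 0]))  # ≈ 0.714
```

The resulting degree directly encodes the level of disagreement the statement provokes, which is exactly what fuzzification is meant to capture.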
49. Multiple fuzzy truths
● Even with fuzzification, you may still get disagreements
● This can be an indication of context-dependence
● Different contexts may require different fuzzy
degrees or membership functions
● In other words, contextualization and fuzzification
are orthogonal approaches.
50. Fuzzy model quality
● Main questions you need to consider:
○ Have I fuzzified the correct elements?
○ Are the truth degrees consistent?
○ Are the truth degrees accurate?
○ Is the provenance of the truth degrees well
documented?
● Both accuracy and consistency are best treated not as binary metrics but rather as distances
51. Fuzzy model representation
● To represent a truth degree for a relation you
simply need to define a relation attribute named
“truth degree” or similar.
● This is straightforward if you work with E-R
models or property graphs, but also possible in
RDF or OWL, even if these languages do not
directly support relation attributes.
● Things can become more difficult when you need
to represent fuzzy membership functions or more
complex fuzzy rules and axioms, along with their
necessary reasoning support.
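In a property-graph setting, the relation attribute could look like the following sketch (the names and the degree are illustrative):

```python
# Sketch: a property-graph-style edge whose attributes include a
# "truth_degree", as described above. Names are illustrative.
edges = [
    {"source": "Google", "relation": "hasCompetitor",
     "target": "Microsoft", "truth_degree": 0.4},
]

# Query the fuzzy relation along with its degree:
competitors = [(e["target"], e["truth_degree"])
               for e in edges
               if e["source"] == "Google" and e["relation"] == "hasCompetitor"]
print(competitors)  # → [('Microsoft', 0.4)]
```

In RDF, the same effect would require reifying the statement (or an extension such as RDF-star) so the degree can be attached to the triple.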
52. Fuzzy model application
● This last step might not look like a semantic
modeling task, yet it is a crucial one if you want
your fuzzification effort to pay off
● A fuzzy data model can be helpful in:
○ Semantic tagging and disambiguation
○ Semantic search and match
○ Decision support systems
○ Conversational agents (aka chatbots)
● In all cases, proper design and adaptation of the underlying algorithms is needed
53. When to fuzzify?
● Questions you need to consider:
○ Which elements in your model are
unavoidably vague?
○ How severe and impactful are the
disagreements you (expect to) get on the
veracity of these vague elements?
○ Are these disagreements caused by
vagueness or other factors?
54. When to fuzzify?
● Questions you need to consider:
○ If your model’s elements had fuzzy degrees,
would you get less disagreement?
○ Are the applications that use the model able
to exploit and benefit from truth degrees?
○ Can you develop a scalable way to get and
maintain fuzzy degrees that costs less than
the benefits they bring you?
58. Take Aways
● Data and information quality can be negatively affected by vagueness:
○ (Perceived) inaccuracy
○ Disagreements and misinterpretations
○ Reduced semantic interoperability
● Treating vagueness as noise doesn’t help:
○ It’s how we think and communicate
○ Insisting on crispness is unproductive
○ But leaving things as-is is also bad
● Three complementary weapons to tackle vagueness:
○ Make your data models vagueness-aware
○ Contextualize truth
○ Fuzzify truth
60. Currently writing a book on
semantic data modeling
To be published by O’Reilly in
September 2020
Early release expected at O’Reilly
Learning Platform in December
2019
To get news about the book
progress and a free preview
chapter send me an email to
p.alexopoulos@gmail.com