Slides presented at ICWE 2011: Learning Semantic Relationships between Entities in Twitter
Supporting web site: http://wis.ewi.tudelft.nl/icwe2011/relation-learning/
A network-based model for predicting a hashtag breakout in Twitter - Sultan Alzahrani
Online information propagates differently across the web, and some of it can go viral. In this paper, we first introduce a simple tweet-volume breakout definition based on standard-deviation (sigma) levels, and then determine patterns of retweet network measures that predict whether a hashtag's volume will break out. We also developed a visualization tool to help trace the evolution of hashtag volumes, their underlying networks, and both local and global network measures. We trained a random forest classifier to identify effective network measures for predicting hashtag volume breakouts. Our experiments showed that “local” network features, based on a fixed-size sliding window, have an overall predictive accuracy of 76%, whereas incorporating “global” features that utilize all interactions up to the current period raises the overall predictive accuracy of a sliding-window breakout predictor to 83%.
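The sigma-level breakout rule described above can be sketched in a few lines of Python. This is a minimal illustration only: the threshold k=2 and the flat history window are assumptions for the example, not the paper's exact parameters.

```python
from statistics import mean, stdev

def is_breakout(history, current_volume, k=2.0):
    """Flag a tweet-volume breakout when the current period's volume
    exceeds the historical mean by more than k standard deviations
    (sigma levels). `history` is a list of per-period tweet counts."""
    if len(history) < 2:  # need at least two periods to compute a stdev
        return False
    mu, sigma = mean(history), stdev(history)
    return current_volume > mu + k * sigma
```

For example, against a history of roughly 10 tweets per period, a jump to 40 is flagged as a breakout, while a drift to 12 is not.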
Faking Sandy: Characterizing and Identifying Fake Images on Twitter during Hurricane Sandy - IIIT Hyderabad
In today's world, online social media plays a vital role during real-world events, especially crises. Social media coverage of events has both positive and negative effects: it can be used by authorities for effective disaster management, or by malicious entities to spread rumors and fake news. This paper highlights the role of Twitter during Hurricane Sandy (2012) in spreading fake images of the disaster. We identified 10,350 unique tweets containing fake images that circulated on Twitter during Hurricane Sandy. We performed a characterization analysis to understand the temporal, social-reputation, and influence patterns behind the spread of fake images. Eighty-six percent of the tweets spreading fake images were retweets; very few were original tweets. Our results showed that the top thirty of 10,215 users (0.3%) accounted for 90% of the retweets of fake images, and that network links such as Twitter follower relationships contributed very little (only 11%) to the spread of these fake-photo URLs. Next, we used classification models to distinguish fake images from real images of Hurricane Sandy. The best results came from a decision tree classifier, which achieved 97% accuracy in separating fake images from real ones. Tweet-based features were very effective in distinguishing fake-image tweets from real ones, while user-based features performed poorly. Our results show that automated techniques can help identify fake images posted on Twitter.
Presentation for tutorial session 'Measuring scholarly impact: Methods and practice' at ISSI2015
Explains how to use linkpred: https://github.com/rafguns/linkpred
Contactually & Encore Alert: Top Tools to Engage Your Prospects & Close More Business - Contactually
View webinar here: https://contactually.wistia.com/medias/yql2zk1675
We're partnering with our friends at Encore Alert to walk through best practices and top tools you can use to better engage your prospects and close more business.
Want to know what tools to use?
Join Contactually and Encore Alert on Wednesday, July 2nd at 1pm EST to learn:
Why the best companies leverage social media to close sales
Tools to start finding & engaging your social prospects
How to add value to your prospects & close the sale
This presentation touches on how organizations can better serve mobile users without spending a lot of time (or budget). I presented it at Denton NewsTrain in September 2018 and Denver NewsTrain in April 2019.
Northwestern University IPHAM Twitter Basics Workshop - Roger Knight
Are you part of an academic medical center and curious about Twitter but don't know where to start? Have you created a Twitter account but never use it? Do you want to know how to use Twitter to engage, collaborate, and disseminate your research? Then this Twitter Basics Workshop presentation is for you! If you have any questions, please contact Roger Knight at rknight@northwestern.edu or @chicagopana
Northeastern's Corporate and Organizational Communication graduate students: Jim Cooney, Angelika Seaman, Andy Morse, Neha Hamzah, Rachel Gardner, & Yuting Dong share their social media analysis for Akvo to improve engagement and awareness.
Modeling Spread of Disease from Social Interactions - Prashanth Selvam
An explanation of the research paper "Modeling Spread of Disease from Social Interactions": a well-organized PowerPoint detailing the methodology used and the observations made.
WWW2010 - Earthquake Shakes Twitter User: Analyzing Tweets for Real-Time Event Detection - tksakaki
Twitter, a popular microblogging service, has received much attention recently. An important characteristic of Twitter is its real-time nature. For example, when an earthquake occurs, people make many Twitter posts (tweets) related to the earthquake, which enables detection of earthquake occurrence promptly, simply by observing the tweets. As described in this paper, we investigate the real-time interaction of events such as earthquakes in Twitter and propose an algorithm to monitor tweets and to detect a target event. To detect a target event, we devise a classifier of tweets based on features such as the keywords in a tweet, the number of words, and their context. Subsequently, we produce a probabilistic spatiotemporal model for the target event that can find the center and the trajectory of the event location. We consider each Twitter user as a sensor and apply Kalman filtering and particle filtering, which are widely used for location estimation in ubiquitous/pervasive computing. The particle filter works better than other comparable methods for estimating the centers of earthquakes and the trajectories of typhoons. As an application, we construct an earthquake reporting system in Japan. Because of the numerous earthquakes and the large number of Twitter users throughout the country, we can detect an earthquake with high probability (96% of earthquakes of Japan Meteorological Agency (JMA) seismic intensity scale 3 or more are detected) merely by monitoring tweets. Our system detects earthquakes promptly and sends e-mails to registered users. Notification is delivered much faster than the announcements that are broadcast by the JMA.
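As a toy illustration of the user-as-sensor idea, a basic particle filter can estimate a fixed event center from noisy location reports. This is a generic sketch, not the paper's spatiotemporal model: the Gaussian likelihood, noise scales, and resampling scheme are illustrative assumptions.

```python
import math
import random

def particle_filter_center(observations, n_particles=2000, noise_std=1.0, seed=0):
    """Estimate a fixed 2-D event center from noisy (x, y) reports.

    Each report is treated as a reading from a "social sensor"; particles
    are weighted by a Gaussian likelihood and resampled after every report.
    """
    rng = random.Random(seed)
    x0, y0 = observations[0]
    # Scatter initial particles widely around the first report.
    particles = [(x0 + rng.gauss(0, 5), y0 + rng.gauss(0, 5))
                 for _ in range(n_particles)]
    for ox, oy in observations:
        # Likelihood of this report under each particle's hypothesis.
        weights = [math.exp(-((px - ox) ** 2 + (py - oy) ** 2)
                            / (2 * noise_std ** 2))
                   for px, py in particles]
        total = sum(weights)
        if total == 0:  # all particles far away; skip this update
            continue
        weights = [w / total for w in weights]
        # Resample, then add small diffusion noise to avoid degeneracy.
        particles = rng.choices(particles, weights=weights, k=n_particles)
        particles = [(px + rng.gauss(0, 0.3), py + rng.gauss(0, 0.3))
                     for px, py in particles]
    # Posterior mean as the center estimate.
    return (sum(p[0] for p in particles) / n_particles,
            sum(p[1] for p in particles) / n_particles)
```

Feeding in reports scattered around a true center pulls the particle cloud toward that center; the posterior mean then serves as the location estimate.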
A slide deck discussing the results of my semester-long analysis on the hashtag "fake news". Within the deck is a compilation of statistical charts to offer ideas on the significance of this hashtag, as well as a deep dive into the social dynamics attached to this topic.
Information Contagion through Social Media: Towards a Realistic Model of the ... - Axel Bruns
Paper by Axel Bruns, Patrik Wikström, Peta Mitchell, Brenda Moon, Felix Münch, Lucia Falzon, and Lucy Resnyansky, presented at the ACSPRI 2016 conference, Sydney, 19-22 July 2016.
Digging for data: opportunities and challenges in an open research landscape ... - Platforma Otwartej Nauki
“Open Research Data: Implications for Science and Society”, Warsaw, Poland, May 28–29, 2015, conference organized by the Open Science Platform — an initiative of the Interdisciplinary Centre for Mathematical and Computational Modelling at the University of Warsaw. pon.edu.pl @OpenSciPlatform #ORD2015
The emerging field of computational social science (CSS) is devoted to the pursuit of interdisciplinary social science research from an information processing perspective, through the medium of advanced computing and information technologies.
Researching Social Media – Big Data and Social Media Analysis - Farida Vis
Researching Social Media – Big Data and Social Media Analysis, presentation for the Social Media for Researchers: A Sheffield Universities Social Media Symposium, 23 September 2014
Presentation to the second LIS DREaM workshop held at the British Library on Monday 30th January 2012.
More information available at: http://lisresearch.org/dream-project/dream-event-3-workshop-monday-30-january-2012/
Disinformation challenges: tools and techniques to deal or live with it - nsarris
Keynote presentation at the 1st International Workshop on Disinformation and Toxic Content Analysis (DiTox 2023) on the problem of online disinformation and the associated technologies and policies that help against it. This work was co-funded by the EC in the context of the MedDMO project (contract number 101083756).
Twitter analytics: some thoughts on sampling, tools, data, ethics and user requirements - Farida Vis
Keynote delivered at the SRA Social Media in Social Research conference, London, 24 June, 2013. The presentation highlights some thoughts on sampling, tools, data, ethics and user requirements for Twitter analytics, including an overview of a series of recent tools.
SHORTER VERSION - Liminality and Communitas in Social Media - The Case of Twitter - Jana Herwig
A longer version, optimized for the lack of verbal input, can be found here: http://www.slideshare.net/anaj/liminality-and-communitas-in-social-media-the-case-of-twitter
Outreach Through Social Media | Ocean Sciences 2014 - Christie Wilcox
My presentation at Ocean Sciences 2014 in Honolulu, HI on how scientists can use social media for outreach and professional development. The internet is yours! #OSMSocial #2014OSM
Disseminating Scientific Papers via Twitter: Practical Insights and Research ... - SC CTSI at USC and CHLA
About one-fifth of current scientific papers are being shared on Twitter. With 230 million active users and 24 percent of the U.S. online population using the microblogging platform, hopes are high that tweets mentioning scientific articles reflect some type of interest by the general public and might even be able to measure the societal impact of research. However, early studies show that most of the engagement with scientific papers on Twitter takes place among members of academia and thus reflects visibility within the scientific community rather than impact on society. At the same time, some tweets do not involve any human engagement but rather are generated automatically by Twitter bots.
This talk focuses on identifying audiences on Twitter and teaches participants how to collect, analyze, visualize, and interpret diffusion patterns of scientific articles on Twitter. The course provides an overview of Altmetrics research and presents the challenges – including methods and first results – of classifying Twitter user groups, with a particular focus on identifying members of the general public and measuring societal impact. The course will provide hands-on exercises and instructions on how to analyze by whom, when, and how scientific papers are shared on Twitter.
Speaker: Stefanie Haustein, Ph.D., Assistant Professor, School of Information Studies, University of Ottawa
Slides from a practical workshop on gathering customer insights from social media using Social Network Analysis (SNA) with NodeXL and Twitter. SNA allows you to gain insight from thousands of tweets and messages on a range of topics for marketing research or academic use. NodeXL reports can be used for measuring and monitoring an organisation's own performance as well as competitors' performance. At the highest level, an SNA approach allows social media managers to recognize what their audience looks like.
Spotle AI-thon Top 10 Showcase - Analysing Mental Health Of India - Cyber Punk - Spotle.ai
Spotle AI-thon, the AI Global Challenge, had 7,000+ participants from top campuses in India and Singapore working on addressing the mental health challenge with AI. The top 10 teams, from IIT Roorkee, CMI, NIT, IIM Indore, Charotar University, and DIAT, made it to the final round. This is the showcase presentation from Team Cyber Punk, Charotar University (Prince Makwana, Pritul Dave, Kushal Master).
Payday on the Social Semantic Web: life would be better if we embedded a fair donation system (similar to Flattr) into the Web. Thanks to bar codes, even people who do not have Internet access but happen to appear in a YouTube video (or other media) could receive donations...
Details: http://iswc2011.semanticweb.org/fileadmin/iswc/Papers/outrageous/iswc2011outrageousid_submission_8.pdf
Talk given at the Semantic Web SIKS course 2011: why we need semantics on the Social Web. Three examples: social tagging, user profiling based on Twitter streams and cross-system user profiling (linking user profiles).
Transcript: Selling digital books in 2024: Insights from industry leaders - T... - BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl - ... - DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, along with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chains and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
UiPath Test Automation using UiPath Test Suite series, part 4 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimizing testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for document understanding - UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
The Metaverse and AI: how can decision-makers harness the Metaverse for their... - Jen Stirrup
The Metaverse is popularized in science fiction, and now it is becoming closer to being a part of our daily lives through the use of social media and shopping companies. How can businesses survive in a world where Artificial Intelligence is becoming the present as well as the future of technology, and how does the Metaverse fit into business strategy when futurist ideas are developing into reality at accelerated rates? How do we do this when our data isn't up to scratch? How can we move towards success with our data so we are set up for the Metaverse when it arrives?
How can you help your company evolve, adapt, and succeed using Artificial Intelligence and the Metaverse to stay ahead of the competition? What are the potential issues, complications, and benefits that these technologies could bring to us and our organizations? In this session, Jen Stirrup will explain how to start thinking about these technologies as an organisation.
SAP Sapphire 2024 - ASUG301: Building better apps with SAP Fiori - Peter Spielvogel
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex Proofs - Alex Pruden
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automata, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
Paper: https://eprint.iacr.org/2023/1886
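For background, automata-based regex evaluation (the building block that Reef's SAFA extends with skipping and alternation) boils down to running a state machine over the document. The sketch below shows a minimal DFA runner with a hand-built automaton for the illustrative pattern a(b|c)*d; the transition-table representation is an assumption for this example, not Reef's encoding.

```python
def run_dfa(transitions, start, accepting, text):
    """Run a deterministic finite automaton over `text`.

    `transitions` maps (state, symbol) -> state; a missing entry means
    the automaton rejects immediately.
    """
    state = start
    for ch in text:
        state = transitions.get((state, ch))
        if state is None:
            return False
    return state in accepting

# Hand-built DFA for the illustrative pattern a(b|c)*d.
PATTERN = {
    (0, "a"): 1,
    (1, "b"): 1,
    (1, "c"): 1,
    (1, "d"): 2,
}
```

Here run_dfa(PATTERN, 0, {2}, "abbcd") accepts, while "ab" is rejected because state 1 is not accepting; a zero-knowledge variant proves such an accept/reject outcome without revealing the document.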
State of ICS and IoT Cyber Threat Landscape Report 2024 preview - Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on countries – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Epistemic Interaction - tuning interfaces to provide information for AI support - Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Essentials of Automations: The Art of Triggers and Actions in FME - Safe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Learning Semantic Relationships between Entities in Twitter
1. Learning Semantic Relationships between Entities in Twitter. ICWE, Cyprus, June 22, 2011. Ilknur Celik, Fabian Abel, Geert-Jan Houben. Web Information Systems, TU Delft
2. What we do: Science and Engineering for the Personal Web. Domains: news, social media, cultural heritage, public data, e-learning. Activities: Personalized Recommendations, Personalized Search, Adaptive Systems, Analysis and User Modeling, Semantic Enrichment, Linkage and Alignment of user/usage data from the Social Web
11. Page 60!! The tweet I was looking for: "Next Saturday @thatsimpsonguy aka Guilty Simpson will be performing at Area51 in my hometown Eindhoven. #realliveshit #iwillspinrecords" (about 9 hours ago via Blackberry). Entity types: Music Artist, Locations
12. Is there an easier way? Faceted Search can help. Current Query / Expand Query / Results: Yskiddd: "Next saturday @thatsimpsonguy aka Guilty Simpson will be performing at Area51 in my homeytown Eindhoven. #realliveshit #iwillspinrecords2"; Usee123: "Cool #EV3door7980 !!! http://bit.ly/igyyRhL"; sanmiquelmusic: "This Saturday I'm joining @KrusadersMusic to Intents Eindhoven". Facets: Music, Locations, Events, Music Artists: + Guilty Simpson + Bryan Adams + Elton John + Golden Earring + Rihanna + The eagles + 3 Doors Down more...
13. Location: Eindhoven Music Artist: Guilty Simpson Location: Area51 Semantic relationships between entities are essential to realize such applications.
17. Relation Learning Strategies. Relation: relation(e1, e2, type, tstart, tend, weight). Relation learning strategy: Input: entities e1 and e2, time period (tstart, tend). Challenge: infer the weight (relatedness) and type/label of the relation for the given entities and time period. Weighting according to co-occurrence frequency: Tweet-based: count co-occurrences in tweets; News-based: count co-occurrences in news; Tweet-News-based: count co-occurrences in both tweets and news
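A minimal sketch of the co-occurrence weighting the slide describes (function and variable names are ours, not from the paper; the tweet-based, news-based, and combined strategies differ only in which documents they count over):

```python
from collections import Counter
from itertools import combinations

def cooccurrence_weights(documents):
    """Count how often each pair of entities co-occurs in a document.

    `documents` is a list of entity sets, one per tweet or news article.
    Returns a Counter mapping frozenset({e1, e2}) -> co-occurrence count,
    which serves as the relation weight (relatedness).
    """
    weights = Counter()
    for entities in documents:
        # every unordered pair of entities in the same document co-occurs once
        for e1, e2 in combinations(sorted(entities), 2):
            weights[frozenset((e1, e2))] += 1
    return weights

# Toy data echoing the slides' example:
tweets = [{"Guilty Simpson", "Eindhoven", "Area51"},
          {"Guilty Simpson", "Eindhoven"}]
news = [{"Guilty Simpson", "Eindhoven"}]

tweet_w = cooccurrence_weights(tweets)           # Tweet-based strategy
combined_w = cooccurrence_weights(tweets + news) # Tweet+News-based strategy
```

Inferring the relation type/label is a separate step (e.g. via the DBpedia predicate linking the two resources) and is not shown here.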
18. Research Questions Which strategy performs best in detecting relationships between entities? Does the accuracy depend on the type of entities which are involved in a relation? How do the strategies perform for discovering relationships which have temporal constraints (trending relationships)?
19. Dataset. More than 20,000 Twitter users, more than 10,000,000 tweets and 75,000 news articles, collected over 2 months (Nov 15 to Jan 15). Example headline from the period: "WikiLeaks founder, Julian Assange, under arrest in London"
21. Tweets and news articles per day 50,000-400,000 tweets per day 100-1000 news articles per day
22. Entities referenced per day: 10,000-100,000 entity references in tweets per day; 5,000-20,000 entity references in news per day. ~40% of tweets do not mention any (recognizable) entity; 99.3% of the news articles mention at least one (recognizable) entity; 72.6% of the top 1000 mentioned entities in Twitter are also mentioned in the mainstream news media
25. Our ground truth of true relations. Based on DBpedia: we mapped entities to their corresponding DBpedia resources (no appropriate DBpedia URIs existed for more than 35% of the entities) and analyzed whether there is a direct relation between two entities. Based on a user study: participants judged whether two entities are really related (62.6% were rated as related) and related in the given time period (57.3% were rated as related). Overall: 2588 judgments
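The DBpedia-based ground truth boils down to testing whether some triple directly links the two mapped resources. A minimal offline sketch, assuming the relevant triples have already been fetched (e.g. from the DBpedia SPARQL endpoint, not shown); names and the sample triple are ours:

```python
def directly_related(e1, e2, triples):
    """True if some triple links the two DBpedia resources directly.

    `triples` is an iterable of (subject, predicate, object) URI strings.
    Either direction counts as a direct relation.
    """
    return any(
        (s == e1 and o == e2) or (s == e2 and o == e1)
        for s, _, o in triples
    )

# Hypothetical pre-fetched triples:
triples = [
    ("dbr:Guilty_Simpson", "dbo:birthPlace", "dbr:Detroit"),
]
directly_related("dbr:Guilty_Simpson", "dbr:Detroit", triples)  # True
```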
26. 1. Which strategy performs best in detecting relationships between entities?
27. Accuracy of relation discovery. Combining both tweet-based and news-based strategies allows for the highest accuracy, both when evaluated against DBpedia and when evaluated against the user study
29. 2. Does the accuracy depend on the type of entities which are involved in a relation?
30. Does the accuracy depend on the type of entities? Relationships which involve events can be discovered with high precision (92% P@10, 87% P@20), whereas relationships involving products reach only 23% P@10 and 26% P@20
31. Does the accuracy depend on the type of entities? (cont.) Relationships between events can be detected with highest precision. Relationships between persons/groups are difficult to detect.
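The per-type results above are reported as precision-at-k (P@10, P@20). A minimal sketch of that metric with toy data of our own:

```python
def precision_at_k(ranked, relevant, k):
    """Precision@k: fraction of the top-k ranked relations judged as related."""
    top = ranked[:k]
    return sum(1 for pair in top if pair in relevant) / len(top)

# Hypothetical ranked entity pairs and ground-truth judgments:
ranked = [("event A", "event B"),
          ("person X", "product Y"),
          ("event A", "location Z")]
relevant = {("event A", "event B"), ("event A", "location Z")}

precision_at_k(ranked, relevant, 2)  # 0.5
```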
32. 3. How do the strategies perform for discovering relationships which have temporal constraints?
33. Relationships with temporal constraints Tweet-based strategy performs better in discovering relationships that are valid only for a specific period in time
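Restricting the tweet-based strategy to a time period amounts to counting co-occurrences only in documents published within (tstart, tend). A minimal sketch, with dates and entities of our own choosing:

```python
from collections import Counter
from datetime import date
from itertools import combinations

def windowed_weights(dated_docs, t_start, t_end):
    """Co-occurrence counts using only documents published in [t_start, t_end].

    `dated_docs` is a list of (publication_date, entity_set) pairs.
    """
    weights = Counter()
    for day, entities in dated_docs:
        if t_start <= day <= t_end:
            for e1, e2 in combinations(sorted(entities), 2):
                weights[frozenset((e1, e2))] += 1
    return weights

docs = [
    (date(2010, 12, 7), {"Julian Assange", "London"}),
    (date(2011, 1, 20), {"Julian Assange", "Sweden"}),
]
# Only the December document falls inside this window:
w = windowed_weights(docs, date(2010, 11, 15), date(2010, 12, 15))
```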
34. Where do relationships emerge faster? The speed of the strategies is domain-dependent, measured as the time difference (in days) between the first occurrence of a relationship in news and in Twitter: for some domains news is faster, for others Twitter is faster
35. Conclusions and Future Work. What we did: relation discovery framework based on Twitter. Findings: the strategy that considers both tweets and (linked) news articles allows for the highest accuracy; performance varies for different domains (e.g. event relationships can be detected with highest precision); the Tweet-based strategy allows for detecting relationships which have a restricted temporal validity, with high precision (and fast). Ongoing work: Adaptive Faceted Search on Twitter, http://wis.ewi.tudelft.nl/tweetum/
36. Relation Discovery for Adaptive Faceted Search (same mock-up as slide 12). 1. Analyze (temporal) relationships of entities of the "current query" to adapt facet ranking. 2. Analyze (temporal) relationships of entities that appear in the user profile to adapt facet ranking
38. The Social Web. "Help me to tackle the information overload!" "Who is this? What are his personal demands? How can we make him happy?" "Recommend me news articles that now interest me!" "Help me to find interesting (social) media!" "Give me personalized support when I do my online training!" "Personalize my Web experience!" "Do not bother me with advertisements that are not interesting for me!"
entity extraction, semantic enrichment, and relation discovery.
large dataset of more than 10 million tweets and 70,000 news articles
100-1,000 news articles per day; between 50,000 and 400,000 tweets per day. Two of the minima were caused by temporary unavailability of the Twitter monitoring service.
Approximately 10,000-100,000 entity references per day for tweets; approximately 5,000-20,000 entity references per day for news. ~40% of the tweets had no entities; 99.3% of the news articles had at least one entity. Overlap of entities: 72.6% of the top 1000 mentioned entities in Twitter are also mentioned in the news media.
39 different entity types. Persons, locations and organizations were mentioned most often, followed by movies, music albums, sport events and political events. We analyzed specific types of relations, such as relationships between persons and locations or between organizations and events, in detail.
Person/Group-Event relationships cover relations between persons and political events, persons and sport events, organizations and sport events, etc. It is interesting to see that the Tweet+News-based strategy discovers relationships between persons/groups and events with higher precision (0.92 and 0.87 for P@10 and P@20, respectively) than people's relations to products (0.23 and 0.26) or locations (0.73 and 0.6).
Relations between entities that are of the same type: relationships between two events can also be discovered with high precision, followed by relations between locations...
Twitter is more appropriate than the news media for inferring relationships which have temporal constraints. The Tweet-based strategy improves precision (P@5) by 22.7%.
relationships between persons and movies or music albums emerge much faster (14.7 and 5.1 days respectively) in Twitter than in the traditional news media.
Our framework extracts typed entities from enriched tweets/news and provides strategies for detecting semantic (trending) relationships between entities. We investigated the precision and recall of the relation detection strategies (which strategy performs best in detecting relationships between entities?), analyzed how the strategies perform for each type of relationship (does the accuracy depend on the type of entities involved in a relation?), and evaluated the quality and speed of discovering trending relationships that possibly have a limited temporal validity (how do the strategies perform for relationships with temporal constraints, and how fast can they be detected?).