This tutorial presents tools and techniques for effectively utilizing the Internet of Things (IoT) to build advanced applications, including Physical-Cyber-Social (PCS) systems. The issues and challenges related to IoT, semantic data modelling, annotation, knowledge representation (e.g. modelling for constrained environments, complexity issues, and the time/location dependency of data), integration, analysis, and reasoning will be discussed. The tutorial will describe recent developments in creating annotation models and semantic description frameworks for IoT data (e.g. the W3C Semantic Sensor Network ontology). A review of enabling technologies and common scenarios for IoT applications from the data and knowledge engineering point of view will be presented. Information processing, reasoning, and knowledge extraction, along with existing solutions related to these topics, will be covered. The tutorial summarizes state-of-the-art research and developments on PCS systems, IoT-related ontology development, linked data, domain knowledge integration and management, querying large-scale IoT data, and AI applications for automated knowledge extraction from real-world data.
Related: Semantic Sensor Web: http://knoesis.org/projects/ssw
Physical-Cyber-Social Computing: http://wiki.knoesis.org/index.php/PCS
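As a toy illustration of the kind of annotation model the tutorial covers, an SSN/SOSA-style sensor observation can be written down as plain subject-predicate-object triples in Python. The EX namespace, sensor identifiers, and reading values below are hypothetical; a real system would use an RDF library and the actual ontology terms.

```python
# Minimal sketch (not the tutorial's code) of annotating an IoT sensor reading
# in the spirit of the W3C SSN/SOSA observation pattern.
SOSA = "http://www.w3.org/ns/sosa/"
EX = "http://example.org/"  # hypothetical namespace for this sketch

def annotate_observation(sensor_id, prop, value, unit, timestamp, location):
    """Return an SSN/SOSA-style observation as a list of triples."""
    obs = EX + "obs/" + sensor_id + "/" + timestamp
    return [
        (obs, "rdf:type", SOSA + "Observation"),
        (obs, SOSA + "madeBySensor", EX + "sensor/" + sensor_id),
        (obs, SOSA + "observedProperty", EX + "property/" + prop),
        (obs, SOSA + "hasSimpleResult", f"{value} {unit}"),
        (obs, SOSA + "resultTime", timestamp),
        (obs, EX + "hasLocation", location),  # time/location dependency of IoT data
    ]

triples = annotate_observation(
    "t42", "airTemperature", 21.5, "Cel", "2015-06-01T12:00:00Z", "room-101"
)
for s, p, o in triples:
    print(s, p, o)
```

Keeping both the result time and the location on the observation reflects the time/location dependency of IoT data that the tutorial highlights.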
Smart Data - How you and I will exploit Big Data for personalized digital hea... - Amit Sheth
Amit Sheth's keynote at IEEE BigData 2014, Oct 29, 2014.
Abstract from:
http://cci.drexel.edu/bigdata/bigdata2014/keynotespeech.htm
Big Data has captured a lot of interest in industry, with emphasis on the challenges of the four Vs of Big Data: Volume, Variety, Velocity, and Veracity, and their applications in driving value for businesses. Recently, there has been rapid growth in situations where the big data challenge relates to making individually relevant decisions. A key example is personalized digital health, which relates to making better decisions about our health, fitness, and well-being. Consider, for instance, understanding the reasons for and avoiding an asthma attack based on Big Data in the form of personal health signals (e.g., physiological data measured by devices/sensors or the Internet of Things around, on, and inside/within humans), public health signals (e.g., information coming from the healthcare system, such as hospital admissions), and population health signals (such as tweets by people related to asthma occurrences and allergens, or Web services providing pollen and smog information). However, no individual has the ability to process all these data without the help of appropriate technology, and each human has a different set of relevant data!
In this talk, I will describe Smart Data that is realized by extracting value from Big Data, to benefit not just large companies but each individual. If my child is an asthma patient, for all the data relevant to my child with the four V-challenges, what I care about is simply, “How is her current health, and what is the risk of her having an asthma attack in her current situation (now and today), especially if that risk has changed?” As I will show, Smart Data that gives such personalized and actionable information will need to utilize metadata, use domain-specific knowledge, employ semantics and intelligent processing, and go beyond traditional reliance on ML and NLP. I will motivate the need for a synergistic combination of techniques similar to the close interworking of the top brain and the bottom brain in cognitive models.
For harnessing volume, I will discuss the concept of Semantic Perception, that is, how to convert massive amounts of data into information, meaning, and insight useful for human decision-making. For dealing with Variety, I will discuss experience in using agreement represented in the form of ontologies, domain models, or vocabularies, to support semantic interoperability and integration. For Velocity, I will discuss somewhat more recent work on Continuous Semantics, which seeks to use dynamically created models of new objects, concepts, and relationships, using them to better understand new cues in the data that capture rapidly evolving events and situations.
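The idea behind Semantic Perception, lifting raw sensor values into human-level abstractions that support decisions, can be caricatured in a few lines of Python. The signals, thresholds, and labels below are purely illustrative (they are not Kno.e.sis code and not clinical guidance).

```python
# Toy sketch of Semantic Perception: combine heterogeneous raw signals into a
# single actionable abstraction. All thresholds and labels are made up.
def perceive_asthma_risk(pollen_ppm, aqi, wheeze_events_per_day):
    """Map raw environmental and physiological signals to a risk label."""
    score = 0
    score += 2 if pollen_ppm > 500 else (1 if pollen_ppm > 200 else 0)
    score += 2 if aqi > 150 else (1 if aqi > 100 else 0)
    score += 2 if wheeze_events_per_day > 5 else (1 if wheeze_events_per_day > 2 else 0)
    # The abstraction, not the raw numbers, is what a parent acts on.
    if score >= 4:
        return "high"
    if score >= 2:
        return "elevated"
    return "low"

print(perceive_asthma_risk(pollen_ppm=650, aqi=120, wheeze_events_per_day=3))
```

The point of the sketch is the shape of the computation: many voluminous, heterogeneous streams reduced to one abstraction a person can act on.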
Smart Data applications in development at Kno.e.sis come from the domains of personalized health, energy, disaster response, and smart city.
An overview of Reality Mining and of research that has been and is being carried out in this context. The slide content was drawn from the studies and experiments of the MIT Media Lab (http://hd.media.mit.edu/) directed by Prof. Alex Pentland.
This is a brief review of current multi-disciplinary and collaborative projects at Kno.e.sis led by Prof. Amit Sheth. They cover research in big social data, IoT, the semantic web, the semantic sensor web, health informatics, personalized digital health, social data for social good, smart cities, crisis informatics, digital data for the Materials Genome Initiative, etc. Dec 2015 edition.
The Internet of Things, or IoT, is a vision for a ubiquitous society wherein people and “Things” are connected in an immersively networked computing environment, with the connected “Things” providing utility to people/enterprises and their digital shadows through intelligent social and commercial services. However, translating this idea into a conceivable reality has been a work in progress for close to two decades, mostly due to assumptions favouring a “Things”-centric rather than a “Human”-centric approach, coupled with the evolution/deployment ecosystem of IoT technologies.
Estimates of the spread and economic impact of IoT over the next few years are in the neighborhood of 50 billion or more connected “Things”, with a market exceeding $350 billion through smarter cities and infrastructure, intelligent appliances, and healthier lifestyles. While many of these potential benefits of IoT are real and achievable, the road to accomplishing them may need a rethink.
In the last few years, there has been a realization that an effective architecture for IoT (particularly for emerging nations with limited technology penetration at the national scale) that is both affordable and sustainable should be based on tangible technology advances of the present, ubiquitous capabilities of the present/future, and practical application scenarios of social and entrepreneurial value. Hence, there is revitalized interest in rethinking the above assumptions, and this exercise has led to a more plausible set of scenarios wherein humans, along with data, communication, and devices, play key roles.
In this presentation, an attempt is made to disaggregate these core problems and to offer a trajectory, with a set of design paradigms, for a renewed IoT ecosystem.
Engines of Order. Social Media and the Rise of Algorithmic Knowing. - Bernhard Rieder
Talk given at the Social Media and the Transformation of Public Space Conference on June 19 at the University of Amsterdam. References and comments are in the notes section.
SP1: Exploratory Network Analysis with Gephi - John Breslin
ICWSM 2011 Tutorial
Sebastien Heymann and Julian Bilcke
Gephi is an interactive visualization and exploration tool for all kinds of networks and relational data: online social networks, email, communication, and financial networks, but also semantic networks, inter-organizational networks, and more. Designed to make data navigation and manipulation easy, it aims to cover the complete chain from data import to aesthetic refinements and interaction. Users interact with the visualization and manipulate structures, shapes, and colors to reveal hidden properties. The goal is to help data analysts form hypotheses and intuitively discover patterns or errors in large data collections.
In this tutorial we will provide a hands-on demonstration of the essential functionalities of Gephi, based on a real-case scenario: the exploration of student networks from the "Facebook100" dataset (Social Structure of Facebook Networks, Amanda L. Traud et al., 2011). Participants will be guided step by step through the complete chain of representation, manipulation, layout, analysis, and aesthetic refinement. Particular focus will be put on filters and metrics for the creation of their first visualizations. Participants will then be invited to compare the hypotheses suggested by their own exploration with the results actually published in the academic paper. They will walk away with the practical knowledge needed to use Gephi in their own projects. The tutorial is intended for professionals, researchers, and graduates who wish to learn how playful network exploration can speed up their studies.
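Gephi itself is a GUI tool, but the metric-then-filter workflow the tutorial teaches can be sketched in plain Python: compute a metric over the network, then restrict the view to nodes passing a threshold, as a Gephi degree-range filter would. The toy edge list and threshold below are illustrative and are not part of the tutorial materials.

```python
# Stand-in for a "Facebook100"-style friendship network.
edges = [("ann", "bob"), ("ann", "cho"), ("ann", "dee"),
         ("bob", "cho"), ("dee", "eli")]

# Metric step: degree of every node.
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1

# Filter step: keep only well-connected nodes, like a degree-range filter.
hubs = {n for n, d in degree.items() if d >= 2}
print(sorted(hubs))
```

In Gephi the same two steps are a Statistics run followed by a Filters query, with the surviving subgraph rendered interactively.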
Sébastien Heymann is a Ph.D. candidate in Computer Science at Université Pierre et Marie Curie, France. His research with the ComplexNetworks team focuses on the dynamics of real-world networks. He has led the Gephi project since 2008 and is the administrator of the Gephi Consortium.
Julian Bilcke is a Software Engineer at ISC-PIF (Complex Systems Institute of Paris, France). He has been a founder and developer of the Gephi project since 2008.
Big Data Social Network Analysis (BDSNA) is the computational and graphical study of powerful techniques that can be used to identify clusters, patterns, and hidden structures, and to generate business intelligence, in social relationships within social networks in terms of network theory. Social Network Analysis (SNA) has a diversified set of applications and research areas, such as healthcare, travel and tourism, defence and security, and the Internet of Things (IoT). With the boom of the internet, Web 2.0, and handheld devices, there has been explosive growth in the size, complexity, and variety of unstructured data; thus analysis and information extraction are of great value, and adapting Big Data concepts to SNA is vital.
This literature survey aims to investigate the usefulness of SNA in the “Big Data (BD)” arena. It reviews major research studies that have proposed business strategies and BD approaches for generating predictive models by addressing contemporary challenges that have arisen from SNA.
With the popularity of mobile devices, spatial crowdsourcing is rising as a new framework that enables human workers to solve tasks in the physical world. With spatial crowdsourcing, the goal is to crowdsource a set of spatiotemporal tasks (i.e., tasks related to time and location) to a set of workers, which requires the workers to physically travel to those locations in order to perform the tasks. In this article, we focus on one class of spatial crowdsourcing, in which the workers send their locations to the server and thereafter the server assigns to every worker tasks in proximity to the worker’s location with the aim of maximizing the overall number of assigned tasks. We formally define this maximum task assignment (MTA) problem in spatial crowdsourcing, and identify its challenges. We propose alternative solutions to address these challenges by exploiting the spatial properties of the problem space, including the spatial distribution and the travel cost of the workers. MTA is based on the assumptions that all tasks are of the same type and all workers are equally qualified in performing the tasks. Meanwhile, different types of tasks may require workers with various skill sets or expertise. Subsequently, we extend MTA by taking the expertise of the workers into consideration. We refer to this problem as the maximum score assignment (MSA) problem and show its practicality and generality. Extensive experiments with various synthetic and two real-world datasets show the applicability of our proposed framework.
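The core of the MTA problem described above, maximizing the number of tasks assigned to workers who can reach them, is a maximum bipartite matching. The sketch below uses an augmenting-path matcher over a Euclidean travel-range eligibility test; the coordinates, range, and matcher are illustrative simplifications of the paper's framework, not the authors' code.

```python
# Hedged sketch of maximum task assignment (MTA) via bipartite matching.
def eligible(worker, task, max_dist=5.0):
    """A worker can serve a task within a fixed travel range (toy model)."""
    (wx, wy), (tx, ty) = worker, task
    return ((wx - tx) ** 2 + (wy - ty) ** 2) ** 0.5 <= max_dist

def max_task_assignment(workers, tasks):
    """Return {task_index: worker_index} maximizing the number of assigned
    tasks, using Kuhn's augmenting-path algorithm."""
    match = {}  # task index -> worker index

    def augment(w, seen):
        for t in range(len(tasks)):
            if t in seen or not eligible(workers[w], tasks[t]):
                continue
            seen.add(t)
            # Take t if free, or re-route its current worker elsewhere.
            if t not in match or augment(match[t], seen):
                match[t] = w
                return True
        return False

    for w in range(len(workers)):
        augment(w, set())
    return match

workers = [(0, 0), (10, 10)]
tasks = [(1, 1), (2, 0), (11, 9)]
assignment = max_task_assignment(workers, tasks)
print(len(assignment))  # number of tasks assigned
```

The MSA extension would replace the 0/1 eligibility edge with a worker-task score and maximize total score rather than task count.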
Links:
http://dl.acm.org/citation.cfm?id=2729713
http://infolab.usc.edu/DocsDemos/to_TSAS15.pdf
http://dl.acm.org/citation.cfm?doid=2729713
SOTMEU 2011 - OSM Potlatch2 Usability Evaluation - Patrick Weber
This paper presents one of the first systematic investigations into the usability of Volunteered Geographic Information (VGI) editor front-ends, using established best practice in Human Computer Interaction (HCI) research. The two front-ends evaluated are Potlatch 2 and Google Map Maker, presenting contrasting views of the user experience of two major VGI projects. Two user groups with no prior experience of VGI contribution were instructed to enrol and contribute data to both VGI projects, and their interaction with the two services was monitored using a mobile eye tracker and video screen capture software in a computer lab environment. The resulting data were analysed to reveal how users interact with and experience VGI editors, as well as to highlight deficiencies and differences between Potlatch 2 and Google Map Maker. The results of this research project are a set of recommendations for the future development of these editors, specifically relating to improving the user experience and ease of use of VGI editors.
Towards Collaboration Translucence: Giving Meaning to Multimodal Group Data - Simon Buckingham Shum
Vanessa Echeverria, Roberto Martinez-Maldonado, and Simon Buckingham Shum. 2019. Towards Collaboration Translucence: Giving Meaning to Multimodal Group Data. In Proceedings of the ACM CHI Conference (CHI ’19). ACM, New York, NY, USA, Paper 39, 16 pages. https://doi.org/10.1145/3290605.3300269
Collocated, face-to-face teamwork remains a pervasive mode of working, which is hard to replicate online. Team members’ embodied, multimodal interaction with each other and artefacts has been studied by researchers, but due to its complexity, has remained opaque to automated analysis. However, the ready availability of sensors makes it increasingly affordable to instrument work spaces to study teamwork and groupwork. The possibility of visualising key aspects of a collaboration has huge potential for both academic and professional learning, but a frontline challenge is the enrichment of quantitative data streams with the qualitative insights needed to make sense of them. In response, we introduce the concept of collaboration translucence, an approach to make visible selected features of group activity. This is grounded both theoretically (in the physical, epistemic, social and affective dimensions of group activity), and contextually (using domain-specific concepts). We illustrate the approach from the automated analysis of healthcare simulations to train nurses, generating four visual proxies that fuse multimodal data into higher order patterns.
Dobson presentation NYS Geo Summit for slideshare - mdob
Presentation by Dr. Mike Dobson of TeleMapics LLC on Crowdsourcing and Map Compilation. This invited presentation was delivered at the New York Geospatial Summit on June 16, 2011 in Skaneateles, New York.
Crowdsourcing Approaches for Smart City Open Data Management - Edward Curry
A wide-scale, bottom-up approach to the creation and management of open data has been demonstrated by projects like Freebase, Wikipedia, and DBpedia. This talk explores how to involve a wide community of users in collaborative open data management activities within a Smart City. The talk discusses how crowdsourcing techniques can be applied within a Smart City context using crowdsourcing and human computation platforms such as Amazon Mechanical Turk, MobileWorks, and CrowdFlower.
Track 13. Uncertainty in Digital Humanities
Authors: Amelie Dorn, Eveline Wandl-Vogt, Thomas Palfinger, Jose Luis Preza Diaz, Barbara Piringer, Alexander Schatek and Rainer Zoubek
Knowledge graph use cases in natural language generation - Elena Simperl
Keynote talk at INLG (International Natural Language Generation Conference) & SIGDial (Special Interest Group on Discourse and Dialogue), September 2023
Beyond monetary incentives: experiments with paid microtasks - Elena Simperl
Experiments using gamification, social incentives and contests in the context of paid microtask crowdsourcing, presentation at Data Science with Human in the Loop in Amsterdam, 09/2017
The Art of the Pitch: WordPress Relationships and Sales - Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers, without pulling teeth or pulling your hair out. Practical tips and strategies for successful relationship building that leads to closing the deal.
UiPath Test Automation using UiPath Test Suite series, part 3 - DianaGray10
Welcome to part 3 of the UiPath Test Automation using UiPath Test Suite series. In this session, we will cover desktop automation along with UI automation.
Topics covered:
- UI automation introduction
- UI automation sample
- Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality - Inflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools, like the ChatGPT plugin and Azure OpenAI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Securing your Kubernetes cluster: a step-by-step guide to success! - KatiaHIMEUR1
Today, after several years of existence, with an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been easier to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. The constant focus on speed in releasing software to market, along with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerabilities and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
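The "deployment bill of materials" mentioned above can be pictured as a structured record of what shipped, where, and with which artifact digest, so the delivery pipeline's state is auditable after the fact. The field names and values below are illustrative, not OpsMx's actual schema.

```python
# Hedged sketch of capturing a minimal DBOM entry at deploy time.
import hashlib
import json

def dbom_entry(service, version, environment, artifact_bytes):
    """Record one deployment; the digest ties the record to the exact artifact."""
    return {
        "service": service,
        "version": version,
        "environment": environment,
        "sha256": hashlib.sha256(artifact_bytes).hexdigest(),
    }

dbom = [dbom_entry("payments", "1.4.2", "prod", b"fake-artifact-bytes")]
print(json.dumps(dbom, indent=2))
```

A real pipeline would append such entries automatically from the deployment step and store them alongside signatures for later audit.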
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for technology and making things work, along with a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf - 91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... - DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... (BookNet Canada)
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... (Jeffrey Haguewood)
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Generating a custom Ruby SDK for your web service or Rails API using Smithy (g2nightmarescribd)
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
Elevating Tactical DDD Patterns Through Object Calisthenics (Dorra BARTAGUIZ)
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
The human face of AI: how collective and augmented intelligence can help solve societal problems
1. THE HUMAN FACE OF AI: HOW COLLECTIVE AND AUGMENTED INTELLIGENCE CAN HELP SOLVE SOCIETAL PROBLEMS
Elena Simperl
ACM-W UK, June 2020
@esimperl
2. AUGMENTED INTELLIGENCE
A human-centred design paradigm for systems that utilise artificial intelligence (AI)
People and AI work together to enhance cognitive performance, support decision making and create new experiences
3. AI DEPENDS ON PEOPLE
Applications require more or better data, e.g. from mobile or IoT devices
Machine learning algorithms learn from human labellers
Knowledge-based AI approaches acquire domain knowledge from people
4. AI BENEFITS FROM COLLECTIVE INTELLIGENCE
Collective intelligence (CI) emerges when groups or communities come together, implicitly or explicitly, to achieve a common goal
CI techniques help AI applications design and manage interactions with people
In human computation, a machine performs a function by outsourcing some steps to people
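A minimal sketch of the human-computation pattern just described, in Python: a machine workflow outsources its labelling step to people and aggregates their answers by majority vote. All names here are hypothetical, and `ask_humans` stands in for a real crowdsourcing call.

```python
from collections import Counter

def majority_vote(labels):
    """Aggregate several human answers into one label (ties broken arbitrarily)."""
    return Counter(labels).most_common(1)[0][0]

def classify_with_crowd(item, ask_humans, n_workers=3):
    """Machine workflow that outsources the labelling step to people.

    `ask_humans` is a stand-in for a crowdsourcing call (e.g. posting a
    microtask); here it is any callable returning one label per worker.
    """
    answers = [ask_humans(item) for _ in range(n_workers)]
    return majority_vote(answers)

# Simulated crowd: every worker labels tweets mentioning "flood" as urgent.
crowd = lambda item: "urgent" if "flood" in item else "not urgent"
print(classify_with_crowd("flood reported downtown", crowd))  # prints: urgent
```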
5. How do we design systems that bring together human, collective & computational intelligence?
6. IN THIS TALK
Design patterns for socio-technical systems
Socio-technical challenges when defining and applying the patterns
Directions for future research
7. EXAMPLE: SUPPORTING DISASTER RELIEF
Human computation has provided huge advances to disaster relief efforts: 40,000 independent reports were mapped through Ushahidi after the Haiti earthquake
Crisis teams sift through large volumes of crowdsourced reports from social media and other sources
Volunteer efforts are predominantly limited to the initial phase of recovery; human interest and effort often fail before the later stages of the process
9. TASK ALLOCATION EXPERIMENT
Increase learning and engagement by ordering tasks by difficulty or similar content
Public dataset of tweet URLs about hurricanes Harvey, Irma and Maria, curated manually to 2,000 tweets: 1,000 text-only, 1,000 with images
People were asked to classify tweets to help recovery teams process social media reports
Recruitment via Amazon's Mechanical Turk
Labels train machine learning classifiers
10. TASK DESIGN
Presented participants with disaster relief tweets (text or text + image)
Participants asked to:
- Classify the text based on content
- Rate the task according to difficulty
Three conditions:
- Random baseline
- Difficult tweets
- Easy tweets
Monitored accuracy of responses
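The three allocation conditions can be sketched as follows (a simplified illustration in Python; the tweet list and 1-to-5 difficulty scores are invented placeholders, not the study's data):

```python
import random

def allocate(tasks, condition, seed=0):
    """Order (tweet, difficulty) pairs for a participant under one
    of the three experimental conditions."""
    if condition == "random":
        return random.Random(seed).sample(tasks, len(tasks))  # baseline
    if condition == "difficult":
        return sorted(tasks, key=lambda t: -t[1])  # hardest first
    if condition == "easy":
        return sorted(tasks, key=lambda t: t[1])   # easiest first
    raise ValueError(f"unknown condition: {condition}")

tasks = [("tweet A", 4), ("tweet B", 1), ("tweet C", 3)]
print(allocate(tasks, "easy"))  # easiest tweet first
```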
11. FINDINGS
Accuracy was influenced by difficulty:
- Text: weak association when comparing easy and difficult clusters
- Images: strong association when comparing difficult and random clusters
No significant association between difficulty and volume of completed tasks
Only 30% of workers completed more than one task
12. FEEDBACK EXPERIMENT
Two forms of feedback:
- Expert feedback (using a gold standard)
- Crowd feedback (randomly selected)
Workflow:
- Participant gives an answer
- Is prompted with a pre-existing answer and offered the chance to edit
- Is asked to explain the decision
Monitored decisions and justifications
13. FINDINGS
Participants were generally poor at taking feedback into account:
- 57% of workers felt expert feedback matched their responses (only 7% actually did)
- 36% of workers felt crowd feedback matched (only 4% actually did)
Participants presented with crowd feedback were more likely to change their answer in response (41% vs 26%)
They were also more likely to deem feedback from the crowd incorrect than feedback from experts (22% vs 16%)
14. FUTURE WORK
Difficulty impacts accuracy, but not engagement
Participants struggled with more complex tasks
Significant support is required for maximum accuracy
Generic feedback is not sufficient; more personalised support is required, which is resource-intensive
15. EXAMPLE: URBAN AUDITING ON DEMAND
Urban datasets are often out-of-date
Survey methodologies: expensive, error-prone, no validation
VGI (e.g. OpenStreetMap): no control over data updates, coverage etc.
An online tool using paid microtask crowdsourcing:
- Uses digital street view imagery
- Task performed remotely
- Participants recruited from online marketplaces
16. VIRTUAL CITY EXPLORER
QROWD-POI.HEROKUAPP.COM/
An urban planner defines an area and the instructions for the participants
Participants explore the area virtually and identify points of interest
The urban planner monitors task execution, quality and rewards
18. EXPERIMENT: CYCLING IN TRENTO & NANTES
150 participants per city, random starting positions
5 PoIs (bike racks) per participant for $0.15
Total cost per city: $45 (7 days)
Mixed methods approach, including metrics and manual inspection:
- RQ1: Feasibility and precision as the task progresses
- RQ2: Completeness (overlap with benchmark datasets)
- RQ3: Coverage (percentage of visited nodes on the explorable path)
- RQ4: Crowd experience (interface errors triggered, number of escapes)
                     Trento     Nantes
Area                 0.347 km²  0.336 km²
Nodes                906        1,177
Explorable distance  9,127 m    12,104 m
StreetView coverage  93%        92%
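RQ3's coverage metric reduces to a simple ratio over the street-graph nodes listed above (a sketch; the visited-node count here is an invented placeholder, not a result from the talk):

```python
def coverage(visited_nodes, explorable_nodes):
    """RQ3: percentage of explorable StreetView nodes visited by the crowd."""
    return 100 * visited_nodes / explorable_nodes

# Trento has 906 explorable nodes; if participants visited 453 of them,
# coverage would be 50%. (Illustrative numbers only.)
print(coverage(453, 906))  # prints: 50.0
```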
19. RQ1: TASK FEASIBILITY AND PRECISION AS THE TASK PROGRESSES
The UX supports discovery of PoIs
The photoshoot paradigm and triangulation method help identify low-quality answers
Precision drops as all PoIs are submitted
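The triangulation method mentioned here can be sketched in the plane: a participant photographs the same PoI from two positions, and the intersection of the two bearings gives its estimated location. This is a simplified planar version with invented coordinates; the real tool works on geographic street-view positions.

```python
import math

def triangulate(p1, bearing1, p2, bearing2):
    """Intersect two rays given as (x, y) positions and compass bearings
    in degrees; returns the estimated PoI location, or None if the
    bearings are parallel and no intersection exists."""
    d1 = (math.sin(math.radians(bearing1)), math.cos(math.radians(bearing1)))
    d2 = (math.sin(math.radians(bearing2)), math.cos(math.radians(bearing2)))
    # Solve p1 + t*d1 = p2 + s*d2 for t via the 2D cross product.
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        return None  # parallel bearings: cannot triangulate
    t = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Two sightings of a bike rack at (0, 10): one looking north from the
# origin, one looking west from (10, 10).
print(triangulate((0, 0), 0, (10, 10), 270))  # approximately (0.0, 10.0)
```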
20. RQ2: DATA COMPLETENESS
The approach complements existing data sources and is able to find new PoIs
Highly customisable (area of interest, budget, questions, timing)
21. RQ3: COVERAGE OF THE DESIGNATED AREA
The approach achieves high coverage of the area of interest
Some parts of the map are visited more often than others (resources)
Black dots are points on StreetView that are difficult to explore
22. RQ4: CROWD EXPERIENCE
Most participants were able to complete their tasks without any incidents; some did not manage to triangulate or stepped outside the designated area
Positive feedback and payment perceived as fair, despite the taboo mechanism; a small percentage submitted some data and dropped out
Most participants who dropped out did not seem to try to complete the task
23. FINDINGS
The VCE adds value to urban auditing methods
Accuracy comparable to OpenStreetMap, and easier to manage than VGI
Additional resources on demand (at a cost)
Free exploration achieves good coverage
The taboo mechanism helps reduce costs and avoid duplicated work
24. FUTURE WORK
Allocating starting positions: randomly, at the centre, to confirm an item, to cover a new area, etc.
Coordinating among participants: a map showing the progress of other participants
Understanding the impact of urban topology on feasibility, accuracy and coverage
Direct comparisons with other approaches
Hybrid workflows with crowds on the ground and online
25. EXAMPLE: UNDERSTANDING MOBILITY PATTERNS
City planners lack detailed mobility information about their residents
Human-AI workflow:
- A bespoke app for data collection
- A combination of symbolic and numerical ML classifiers to match trip segments to modes of transport
- An active learning approach that asks travellers to validate trips the machine is unsure about
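The active-learning step can be sketched as uncertainty sampling: only segments whose top predicted mode falls below a confidence threshold are sent back to the traveller for validation. The segment names and probabilities below are invented; the real system combines symbolic and numerical classifiers.

```python
def uncertain_trips(predictions, threshold=0.7):
    """Select trip segments whose top predicted transport mode is below
    a confidence threshold; only these are shown to the traveller."""
    return [trip for trip, probs in predictions.items()
            if max(probs.values()) < threshold]

predictions = {
    "segment-1": {"bus": 0.95, "walk": 0.05},    # confident: keep as-is
    "segment-2": {"bike": 0.55, "moped": 0.45},  # unsure: ask the user
}
print(uncertain_trips(predictions))  # prints: ['segment-2']
```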
27. CHALLENGE: ASSESSING THE QUALITY OF THE DATA
A naïve model assumes people will notice and correct errors in journeys detected by the algorithm
Is this true? If not, can we detect errors and estimate the residual error rate?
Are people employing specific 'strategies' to check and correct journeys?
28. EXPERIMENT DESIGN
No independent ground truth!
Inject artificial errors and measure whether they are corrected
Assume artificial errors are not accidental corrections
Use the ratio of discovered natural errors to discovered artificial errors to estimate the initial and residual natural error rate
Assume natural errors are comparable to artificial ones and that people are not adding new errors ('mis-corrections')
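The estimation step above can be made concrete: if participants correct a known fraction of the injected artificial errors, the same correction rate is assumed for natural errors, which yields estimates of the initial and residual natural error counts. A sketch with invented numbers:

```python
def estimate_natural_errors(injected, artificial_corrected, natural_corrected):
    """Estimate total and residual natural errors, assuming natural errors
    are corrected at the same rate as injected artificial ones."""
    detection_rate = artificial_corrected / injected   # fraction corrected
    total_natural = natural_corrected / detection_rate  # initial estimate
    residual = total_natural - natural_corrected        # still undiscovered
    return total_natural, residual

# 20 artificial errors injected, 10 corrected (50% detection rate);
# 6 natural errors were corrected, so ~12 are estimated in total,
# leaving ~6 undiscovered. (Illustrative numbers, not from the talk.)
print(estimate_natural_errors(20, 10, 6))  # prints: (12.0, 6.0)
```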
29. EXPERIMENT DESIGN (2)
10 participants, ~5 journeys per participant, from Google Timelines (KML)
Pre-process to add artificial errors in four classes:
- Under- or over-segmentation
- Bad mode
- Bad point (100/400 m GPS point move)
Scoring: a manual, tool-supported process
30. PRELIMINARY FINDINGS: MORE RESEARCH NEEDED INTO DATA COLLECTION METHODOLOGIES FOR ML
Errors can be corrected
Errors can mislead
Errors can persist
A range of complex cases
31. How do we design systems that bring together human, collective & computational intelligence?
32. Mix of CI approaches
Iterative UX design
Methods to assess data quality and improve human-AI interactions
Aligned motivation and incentives
33. THANKS TO LUIS-DANIEL IBÁÑEZ, EDDY MADDALENA, RICHARD GOMER, NEAL REEVES, THE QROWD PROJECT, NESTA AND THE EUROPEAN COMMISSION
@esimperl
Maddalena, E., Ibáñez, L.D. and Simperl, E., 2020. On the mapping of Points of Interest through StreetView imagery and paid crowdsourcing. To appear in ACM TIST.
qrowd-poi.herokuapp.com
Nesta, June 2020. Combining Crowds and Machines: Experiments in collective intelligence design 1.0. nesta.org.uk/report/combining-crowds-and-machines/
Nesta, June 2020. Collective intelligence grants 1.0. nesta.org.uk/feature/collective-intelligence-grants/