2. Overview
• Web Services Portal
– Crawls for and indexes Web Services on the Web
– Currently more than 28,500 indexed and monitored
• Problems
– Found services are not annotated
– For many, descriptions are not available
– Limited search results and possibilities
– Web APIs need to be confirmed by users
• Target
– Obtain more annotations by involving users in the annotation process
– Validate existing annotations, if any
– "Catch Them & Keep Them"
4/14/11 www.insemtives.eu 2
3. Design Decisions
• Different annotation methods exist
– Keywords/tags
– Categories
– Natural language descriptions
– Lightweight/fully-fledged semantic web service descriptions (e.g.
WSMO/Light, OWL-S, etc.)
• Annotate the K.I.S. way
– Avoid complicated and demanding annotations (limit to tags,
categories and NL descriptions)
• Use lightweight RDF ontologies in the background (e.g. to
ease the search)
• SWS annotations might be integrated in the future
– Most users are not familiar with SWS
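The "lightweight RDF ontologies in the background" idea can be sketched as a tiny in-memory triple store that backs tag- and category-based search. The predicate and service URIs below are illustrative placeholders, not seekda's actual schema:

```python
# Minimal sketch: keep service annotations as RDF-style triples and use
# them to ease search. Predicate URIs are hypothetical examples.
TAG = "http://example.org/onto#hasTag"
CATEGORY = "http://example.org/onto#hasCategory"

class TripleStore:
    def __init__(self):
        self.triples = set()

    def add(self, subject, predicate, obj):
        self.triples.add((subject, predicate, obj))

    def subjects(self, predicate, obj):
        """Find all subjects (services) annotated with a given value."""
        return {s for s, p, o in self.triples if p == predicate and o == obj}

store = TripleStore()
store.add("http://example.org/svc/weather-api", TAG, "weather")
store.add("http://example.org/svc/weather-api", CATEGORY, "Geo")
store.add("http://example.org/svc/geocoder", TAG, "maps")
store.add("http://example.org/svc/geocoder", CATEGORY, "Geo")

# Category-based search: which services are annotated with "Geo"?
geo_services = store.subjects(CATEGORY, "Geo")
```

A real deployment would use an RDF library and a SPARQL endpoint, but the search pattern (look up subjects by annotation triple) is the same.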
9. OPD Workshop - Objectives
Classical PD:
• User-centric, cyclic development approach - users provide tips and feedback for/on the prototype itself.
• Problem in the seekda case:
• Workshops cannot be conducted because users are potentially distributed all over the world.
OPD as a solution for distributed target groups:
10. OPD Workshop - Procedure
• In General:
1. Technical committee chooses a number of features out of forum discussions
2. Features open for user voting
3. Feature selection, implementation
4. Collect user feedback
• Duration per cycle: 4-6 weeks
11. OPD Workshop – Cycles
• 1st Cycle
• Beginning November: Start of workshop (dashboard, technical committee, introduction)
• 2 weeks later: Identification of 5 most important features/wishes
• 1 week later: Selection of 2 most popular features
• Beginning of Dec: Short tasks for users
• Mid December: End of cycle, feedback analysis
• 2nd Cycle
• Execution dates: January – March
• Goals for this cycle
• Increase motivation
• Increase activity of participants
• Focus more on usability/design and incentives
• Changes
• Tasks first
• Split into smaller parts, sequentially
• Explained through screencasts
• Example: go to the portal, search for xyz, identify redundant elements, most important, …
• OPD Dashboard Improvements
12. OPD Workshop - Results
• Numbers
~ 250 votes
~ 160 forum posts
15-20 active users
• User Background
• Web Services experts
• Developers
• Random visitors
• Feedback/Implementation
• 18 suggested features
• 6 concrete features implemented (ongoing)
• Several implemented usability/design improvements
• Conclusions & Next Steps (ongoing)
• Introduce challenge procedures
• Ask specifically about guided processes (wizards)
• Integrate OPD workshop directly from the platform
13. Challenge 1: Amazon Mechanical Turk
Goal: Initial annotations for new and undescribed APIs
Tasks available
• Confirm document is related to a Web API (yes/no)
• Provide/improve description
• Provide and confirm (bootstrapped) tags/concepts
• Provide and confirm (bootstrapped) service categories
• Rate document quality
Parameters
• Qualification test
• Min. approval rate per worker
• Approx. time needed per task
• Reward
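The parameters above can be captured as a small configuration object with a sanity check. The field names and validation rules are illustrative assumptions, not the actual Mechanical Turk API:

```python
# Sketch of the HIT parameters listed above as a small config object.
# Field names and defaults are hypothetical, not MTurk's real schema.
from dataclasses import dataclass

@dataclass
class HitConfig:
    title: str
    reward_usd: float                 # payment per assignment
    est_minutes: int                  # approx. time needed per task
    min_approval_rate: int = 0        # percent; 0 = no restriction
    qualification_test: bool = False  # require a qualification test first

    def validate(self):
        if not 0 <= self.min_approval_rate <= 100:
            raise ValueError("approval rate must be a percentage")
        if self.reward_usd <= 0:
            raise ValueError("reward must be positive")
        return self

# Phase 1 setup from the slides: no qualification, ~15 minutes, $0.10
phase1 = HitConfig("Annotate a Web API", reward_usd=0.10,
                   est_minutes=15).validate()
```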
14. Challenge 1: Amazon Mechanical Turk
Simple Annotation Wizard
Phase 1
1. Setup
• Initial set of 70 Web APIs
• No qualification needed
• Approx. Time needed: 15 minutes
• Reward: $0.10
• Description + Screencast (Walkthrough)
2. Manual evaluation (seekda)
• Main focus on description and Yes/No question
• Determine whether qualification is needed for workers
• Determine whether wizard is understandable (usability)
• Determine whether review tasks are needed
15. Challenge 1: Amazon Mechanical Turk
Phase 1, Results
Total: 70 API documents, 23 distinct workers
Initial Question (Document about Web API, Yes/No)
• 49 documents correctly annotated (70%)
• 21 as Yes, this document is about a Web API
• 28 as No, this document is not related to a Web API
Description, Category, Tags
• ~ 15 submissions including all answers (description, category, tags)
• 4 very good and extensive submissions
• 8 complete and usable submissions
16. Challenge 1: Amazon Mechanical Turk
Phase 1, Results
Top Workers
• 15 Tasks finished (2 very good/complete, 2 complete)
• 10 Tasks finished (1 very good, 2 complete)
• 5 Tasks finished (2 complete)
Average time needed: 2.5 minutes (min: 9 seconds, max: 13 minutes)
Phase 1, Problems
• Spam (10% - 15%)
• Only few added category and descriptions
• Most workers did not add tags
17. Challenge 1: Amazon Mechanical Turk
Conclusions
• Answers have to be mandatory
E.g. You have to specify/select at least ...
• Estimated time for completion can be decreased
• → Increase reward
• Clearly point out in the description what exactly needs to be achieved
“In order to be approved, your submission needs to contain at least... ”
2-3 appropriate tags, a short but meaningful description, etc.
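The "mandatory answers" rule above can be sketched as a simple approval check run on each submission; the thresholds (2 tags, 5 description words) are illustrative assumptions:

```python
# Sketch: a submission is only approvable if the yes/no question was
# answered, it has a minimum number of tags, and a non-trivial description.
# Thresholds are illustrative, not seekda's actual criteria.
def is_approvable(submission, min_tags=2, min_desc_words=5):
    tags = [t for t in submission.get("tags", []) if t.strip()]
    desc = submission.get("description", "").split()
    return (submission.get("is_web_api") in (True, False)  # yes/no answered
            and len(tags) >= min_tags
            and len(desc) >= min_desc_words)

ok = is_approvable({"is_web_api": True,
                    "tags": ["weather", "rest"],
                    "description": "Returns current weather for a given city."})
spam = is_approvable({"is_web_api": None, "tags": [], "description": "asdf"})
```

Rejecting submissions that fail this check directly addresses the Phase 1 problems (spam, missing categories/descriptions/tags).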
18. Challenge 1: Amazon Mechanical Turk
Phase 2
• New set of 100 Web APIs
• No qualification needed
• Approx. time needed: 10 minutes (5 minutes less)
• Reward: $0.20 (+ $0.10)
• Improved description and explicit goals
• All questions are mandatory
• Multiple workers per Task (majority vote)
Evaluation: Automatic
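The automatic evaluation with multiple workers per task can be sketched as a majority vote over the collected answers; treating ties as "needs manual review" is an assumption, since the slides do not specify tie handling:

```python
# Sketch of Phase 2 evaluation: several workers answer the same question
# per API document, and the majority answer is accepted automatically.
from collections import Counter

def majority_vote(answers):
    """Return the most common answer, or None on a tie."""
    counts = Counter(answers).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None  # tie: leave for manual review (assumption)
    return counts[0][0]

result = majority_vote(["yes", "yes", "no"])  # clear majority
tie = majority_vote(["yes", "no"])            # no majority
```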
19. Challenge 1: Amazon Mechanical Turk
Extensions
• Additionally split Wizard into atomic tasks (e.g. Description)
• Iterative Tasks
• Using an existing description as seed
• Read descriptions; if you think a description is appropriate, click “Vote”
• If not appropriate, submit a better description and get a chance to win a $xx bonus
• Pre-selection of documents that need manual (seekda) approval
• Spam prevention
20. Challenge 1: Amazon Mechanical Turk - Benefits
• Evaluation of usability/design
• Large number of confirmed Web APIs
– Feed back to crawler/analysis framework
– Improving the initial index quality
• Large number of categorised/tagged Services/APIs
– Feed back to bootstrapping Service
– Improved search/navigation
• Detailed service descriptions for many Services at once
– Improved user experience/satisfaction
– Attract/motivate new visitors to participate
21. Challenge 2: Mashups
• Overall Goal
– Create and add mashup(s) using services / Web APIs listed on the
seekda portal.
– Annotate used Services and APIs.
• Timeline
– Duration: 4 weeks
• Introduction of Task through:
– Step by step guidelines
– Set of rules
– Example walkthrough
• Reward
– Gadget (Samsung Galaxy S)
23. Challenge 2: Mashups – Winner Selection
• Challenge lasts 4 weeks
• Portal users vote for Mashups (1 week)
• Selection of winner is done by seekda engineers and a group of external
experts
• Criteria for evaluation:
• User voting
• Overall quality
• Originality and usefulness
• Technical details/implementation
• Annotations of used Services/APIs
• Presentation of top 3 submissions
24. Challenge 3: Long-Term Competition
• Provide annotations – become a top contributor
• Collect Points
• Changes and/or improvements to annotations
• New annotations
• Weighting according to annotation type
• Rank contributors
• Reputation is the main award
• Annotation quality will evolve
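The point scheme above can be sketched as a weighted tally with a ranking; the per-type weights are illustrative assumptions, since the slides only say that weighting depends on annotation type:

```python
# Sketch: annotations earn points weighted by type, and contributors
# are ranked by total points. Weights are hypothetical examples.
TYPE_WEIGHTS = {"tag": 1, "category": 2, "description": 5, "improvement": 3}

def rank_contributors(annotations):
    """annotations: list of (user, annotation_type) events."""
    scores = {}
    for user, ann_type in annotations:
        scores[user] = scores.get(user, 0) + TYPE_WEIGHTS.get(ann_type, 1)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

events = [("alice", "description"), ("alice", "tag"),
          ("bob", "tag"), ("bob", "tag"), ("bob", "category")]
leaderboard = rank_contributors(events)  # alice: 6 points, bob: 4 points
```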
25. Challenge 3: Annotations
• Allow users to donate money for good annotations
– Donated money will be awarded to the top annotators
• The more and better annotations provided:
– the higher the reputation
– the higher the financial incentive
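One way to link reputation to the financial incentive is to split the donated pool among the top annotators in proportion to their points. The proportional rule and the top-3 cutoff are assumptions; the slides do not fix a formula:

```python
# Sketch: distribute donated money (in cents, integer arithmetic) among
# the top-scoring annotators in proportion to their points.
def split_donations(pool_cents, scores, top_n=3):
    top = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]
    total = sum(points for _, points in top)
    return {user: pool_cents * points // total for user, points in top}

# $10.00 pool split over hypothetical contributor scores
payout = split_donations(1000, {"alice": 6, "bob": 4, "carol": 2, "dave": 1})
```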
26. Conclusion
• Devising motivation methods for annotating Web services is
challenging
• Different possibilities were/are being explored through
challenges
– Mechanical Turk
– Mashups Challenge
– Long-Term Competition
• Users were closely kept in the development loop through
OPD
– Ensures that implemented features are usable
– Keeps users engaged in a “community”-like way
Editor's Notes
OPD => cyclic approach along the portal & challenges
Phases 1 and 2 serve to understand the processes and practices that are common to the users and the current version of the tool. Therefore: users: interviews, workshops, etc.; tool: usability lab tests, expert walkthroughs.
Basically, the changes in the initial phase were: restructuring of the interface in general, according to interview results and design recommendations from USI; detailed changes based on the recommendations of users (OPD), which do not have a strong impact on the design itself but lead to a more expectation-conform appearance of the portal; introduction of Web APIs in addition to WSDL services; first prototype implementation of Web Services mashups; introduction of the OPD workshop. I would call it something like an initial, improved prototype, based on the first findings from interviews and walkthroughs, which basically serves as a base for the OPD workshop.
Caption: Phase 4: Prototyping and design analysis. Phase 4 basically consisted of three parts (see slide): recommendations in terms of usability etc.; the OPD workshop to wrap the whole development process and evaluate changes; challenges to motivate the users to participate.
Wizard: usability design (suitable for learning, controllability). AttrakDiff survey: emotional design (pragmatic, hedonic attributes of tools). Sociability: forum, awareness feature for other users' activities.
To involve the user and create a more attractive tool which fits their needs: PD. Move the workshop, including users, discussion and decisions, into a virtual environment. Users explore the prototype, write down things they like/dislike, and discuss them. Further, they vote on a selection of those features.
The formal structure of the procedure is the following: Christoph DEMO
Introducing challenges: further, the OPD workshop serves as a tool to introduce the challenges to users. Those challenges are used to motivate the users to participate. James will present the three different challenges planned by seekda.
Our scenario: atomic and very simple tasks; short description for quick understanding (e.g. screencast); follow-up tasks to review ('peer-reviewing' of tasks). Tasks: confirm Web API (yes/no) ... needs qualification?; provide tags, confirm bootstrapped tags; provide service category; provide/improve description.
Extensions and improvements we are trying to formulate/test
Christoph DEMO after done from this slide. Step 1: Registration. User registers for the challenge (and portal); the username is shown on the list of submissions. Step 2: Brainstorming. User browses and selects Web Services/APIs and comes up with an initial idea. Step 3: Implementation. User starts implementing the mashup, using the selected Services and APIs. Step 4: Finalizing. User clicks on "Finalize"; the mashup is added to our index and to the list of finalized submissions. Step 5: Add Services and APIs (optional). User adds all missing Services and APIs to our portal. Step 6: Annotate Services and APIs. User annotates all used Services/APIs and provides a description, meaningful tags, and a category.