
Crowdsourcing the notation of domain specific languages


In the context of Domain-Specific Modeling Language (DSML) development, the involvement of end-users is crucial to assure that the resulting language satisfies their needs.

In our paper, presented at SLE 2017 in Vancouver, Canada, on October 24th (within the SPLASH conference), we discuss how crowdsourcing tasks can be exploited to assist in domain-specific language definition processes.

Indeed, crowdsourcing has emerged as a novel paradigm where humans are employed to perform computational tasks. In language design, by relying on the crowd, it is possible to show an early version of the language to a wider spectrum of users, thus increasing the validation scope and eventually promoting its acceptance and adoption.

We propose a systematic (and automatic) method for creating crowdsourcing campaigns aimed at refining the graphical notation of DSMLs. The method defines a set of steps to identify, create and order the questions for the crowd. As a result, developers are provided with a set of notation choices that best fit end-users' needs. We also report on an experiment validating the approach.
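As a rough illustration of the kind of pipeline the method describes (all names here are our own hypothetical choices, not the authors' implementation), one can imagine walking the language metamodel and emitting one notation question per concept, so the whole structure is covered exactly once:

```python
from dataclasses import dataclass, field

@dataclass
class MetaClass:
    """A concept in the language metamodel (hypothetical representation)."""
    name: str
    subclasses: list = field(default_factory=list)

@dataclass
class Question:
    text: str
    symbol_alternatives: list

def generate_questions(metaclass, symbols):
    """Walk the metamodel hierarchy and emit one question per concept,
    pairing it with the candidate symbols proposed for that concept."""
    qs = [Question(f"Which symbol best represents '{metaclass.name}'?",
                   symbols.get(metaclass.name, []))]
    for sub in metaclass.subclasses:
        qs.extend(generate_questions(sub, symbols))
    return qs
```

The resulting question list can then be grouped into crowd tasks and ordered, as the method prescribes.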

In the paper we experiment with the approach on a practical use case: crowdsourcing the notation of the BPMN language.

The paper is titled: "Better Call the Crowd: Using Crowdsourcing to Shape the Notation of Domain-Specific Languages" and was co-authored by Marco Brambilla, Jordi Cabot, Javier Luis Cánovas Izquierdo, and Andrea Mauri.

The full paper can be found here: https://dl.acm.org/citation.cfm?doid=3136014.3136033 


Crowdsourcing the notation of domain specific languages

  1. Brambilla, Cabot, Canovas, Mauri. Better Call the Crowd: Using Crowdsourcing to Shape the Notation of Domain-Specific Languages. Marco Brambilla, Jordi Cabot, Javier Luis Cánovas Izquierdo, Andrea Mauri. Contact: marco.brambilla@polimi.it (@marcobrambi)
  2. Prologue: Context and Motivation
  3. Context - 1. Domain-Specific Languages (DSLs) should be all about • Domain • Specificity. And yet, they are frequently challenged in terms of acceptance and adoption.
  4. Context - 2. Software and modeling people: we tend to… • Rely on tools • Improve on methods • Focus on performance and quality
  5. Context - 3. Software and modeling people: we tend to… • Be anti-social • Forget we are actually humans • Reflect this in our research and solutions: not built for humans
  9. Problem Statement. Domain-Specific Languages (DSLs) should be all about • Domain • Specificity. And yet, they are frequently challenged in terms of acceptance and adoption • We stick to MDE, testing, and SwEng tools only.
  10. Proposed Solution. Rely on COGNIFICATION to improve the process and quality of MDE practices¹. Especially: rely on the human side. ¹ Jordi Cabot, Robert Clarisó, Marco Brambilla, Sébastien Gérard. Cognifying Model-Driven Software Engineering. GrandMDE (Grand Challenges in Modeling) at STAF 2017.
  11. Episode 1: Crowdsourcing and Crowd Management
  12. Call the Crowd. Crowdsourcing: the process of building a computation or information-collection job by using computers as organizers, organizing the work as several tasks, dealing with dependencies on data and tasks, and assigning tasks to humans.
  13. 9-1-1, what's your emergency? (Diagram: MDSE methods provide an initial hypothesis; human feedback is collected through a crowd or social platform.)
  14. Crowd Management: Design Process. A simple task design and deployment process, based on specific data structures: Task Specification → Task Planning → Task Execution & Control. • Task Specification: task operations, objects, and performers → Dimension Tables • Task Planning: work distribution → Execution Table for task monitoring • Control Specification: task control policies → Control Mart
  15. Task Design. Which are the input objects of the crowd interaction? Which operations should the crowd perform? How should the task be split into micro-tasks assigned to each person? How should a specific object be assigned to each person? How should the results of the micro-tasks be aggregated? Which execution interface should be used?
  16. Crowd task management problems. Task splitting: the input data collection is too complex relative to the cognitive capabilities of users. Task structuring: the query is too complex or too critical to be executed in one shot. Task routing: a query can be distributed according to the values of some attribute of the collection.
  17. Splitting Strategy. Given N objects in the task: Which objects should appear in each micro-task? How many objects in each micro-task? How often should an object appear across micro-tasks? Which objects cannot appear together? Should objects always be presented in the same order?
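A minimal sketch of one way to answer these splitting questions (our illustration, not the paper's algorithm): each object appears a fixed number of times, and the object list is rotated on each pass so groupings and presentation order vary across micro-tasks:

```python
def split_microtasks(objects, per_task=3, repetitions=2):
    """Split N objects into micro-tasks of at most `per_task` objects.
    Each object appears exactly `repetitions` times; the list is rotated
    on each pass so the same objects are not always shown together or
    in the same order."""
    pool = []
    for r in range(repetitions):
        pool.extend(objects[r:] + objects[:r])  # rotate to vary grouping
    return [pool[i:i + per_task] for i in range(0, len(pool), per_task)]
```

For six objects with two repetitions and three objects per micro-task, this yields four micro-tasks with every object shown exactly twice, in two different groupings.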
  18. Crowd Management is hard. (Diagram: metamodel, structural model, and workplan model for an example "classify politicians by political party" task, with model transformations linking task types, object types, attributes, operation parameters, performers, and micro-task instances.)
  19. Crowd Control Mart. (Diagram: a control-mart schema centered on a micro-task/object execution table, linked to object, performer, and task dimensions and their control tables, tracking status, timestamps, answers, and performer evaluation.)
  20. Reactive System. Guaranteed termination, extensibility. Rule types: control production rules, result production rules, and execution modifier rules, operating over the execution, object, performer, and task tables and their control counterparts.
  21. Episode 2: Crowdsourcing DSL Notations
  22. A (Simplistic) Traditional Language Design Process. Different phases; feedback completely separated.
  23. Levels of Crowd Involvement. Incremental role.
  24. Our Current Challenge: Level (b). Abstract syntax → candidate concrete syntaxes → language in use.
  25. Process in Use
  26. Input: (1) the abstract syntax of the language (metamodel); (2) a set of symbols representing the candidate concrete syntax; (3) the mapping between them; (4) optional model examples.
  27. Task Definition Principles. Set tasks and questions that guarantee coherency of the language. Cover the whole language structure. Don't ask the same question many times. Guarantee support for agreement.
  28. Task Definition. Tasks with many correlated questions. Majority voting on tasks. Expertise and demographics considered. Pattern-based approach.
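The majority voting mentioned here could look as follows (a sketch under our own assumptions; the actual policy in the paper also takes expertise and demographics into account):

```python
from collections import Counter

def majority_vote(answers, min_answers=3):
    """Return the notation choice with the most votes, or None when
    there are too few answers or the top two options are tied."""
    if len(answers) < min_answers:
        return None
    ranked = Counter(answers).most_common(2)
    if len(ranked) > 1 and ranked[0][1] == ranked[1][1]:
        return None  # tie: no clear winner, collect more answers
    return ranked[0][0]
```

Returning None rather than guessing on ties or sparse data is what lets the campaign keep routing the same question to more workers.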
  29. Pattern-based Task Design. Patterns in the metamodel are used to create tasks and questions. Tasks consist of: • Question(s) • Alternative notations • Interactive examples
  30. Generation of Tasks: Patterns
  31. Episode 3: Use Case and Experiment
  32. BPMN Standard. Widely adopted. Widely understood (20% of the notation).
  33. BPMN Standard. Widely adopted. Widely understood (20% of the notation). Widely misinterpreted and underused (80% of the notation).
  34. INPUT 1: A Metamodel (Excerpt)
  35. INPUT 2: Challenging the Notation. Can we do better than the standard?
  36. Generation of Tasks
  37. Generation of Tasks
  38. Crowd User Interface
  39. Experimental Setting. 85 participants from 10 countries: undergraduate/graduate computer science students, modeling experts, and IT professionals, assessed against modeling, language specification, and BPMN experience. Answer aggregation strategies compared (effectiveness and affordability): • static majority policy • targeted agreement
  40. Results: Tasks and Events
  42. Targeted Agreement. Minimum number of answers = 10; agreement level = 60%; maximum tasks = 50.
  43. Targeted Agreement. Minimum number of answers = 10; agreement level = 60%; maximum tasks = 50.
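With the parameters on this slide, the targeted-agreement policy could be sketched like this (our reading of the slide, not the authors' code): keep collecting answers for a question until at least 10 are in and one option reaches 60% agreement, spawning further tasks only within the overall budget of 50:

```python
from collections import Counter

def targeted_agreement(answers, min_answers=10, agreement=0.60):
    """Return the winning option once at least `min_answers` answers are
    in AND the top option holds at least `agreement` of the votes;
    otherwise return None, meaning: schedule more tasks (subject to the
    campaign's maximum-task budget)."""
    if len(answers) < min_answers:
        return None
    top, votes = Counter(answers).most_common(1)[0]
    return top if votes / len(answers) >= agreement else None
```

Unlike a static majority over a fixed number of answers, this policy spends extra tasks only on the questions where the crowd has not yet converged.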
  44. And the winner is…
  45. And the winner is…
  46. Is the Solution Good? Ask the crowd! On AMT, compare the 1st, 2nd, and 3rd alternative notations.
  47. AMT Results. Question: shown two models, which one do you prefer? • $0.05 per answer • 204 answers. Results: the 1st-best notation was preferred 60% of the time over the 2nd best, and 69% of the time over the 3rd best; the 2nd best was preferred 67% of the time over the 3rd best.
  48. Epilogue: Past and Future
  49. Conclusions. The past: the best notation was actually identified; some problems remain in the coherency of the notation. Eat your own dog food: we did MDE for MDE. Ugly UI → crowdsource it? The future: a combination of crowd + AI for MDE?
  50. Using Crowdsourcing to Shape the Notation of Domain-Specific Languages. Marco Brambilla, Jordi Cabot, Javier Luis Cánovas Izquierdo, Andrea Mauri. Contact: marco.brambilla@polimi.it (@marcobrambi). Thanks for Calling!
