This paper was presented by Jean Brice Tetka of Transparency International at The Impacts of Civic Technology Conference (TICTeC2015), organised by mySociety and held in London on 25th March 2015. It was the world's first conference dedicated to discussing the impacts of civic tech.

CIVIC ENGAGEMENT: THE PARADOX OF CROWDSOURCING LIES
Avoiding untruths in the wisdom of the crowd
This paper was written for "The Impacts of Civic Technology Conference 2015" by Jean Brice Tetka,
jtetka@transparency.org
INTRODUCTION
Open data and crowdsourcing are two of the pillars now integrated into many development projects around the world. While the open data approach targets governments and institutions, crowdsourcing projects target citizens. Crowdsourcing is an online, distributed problem-solving and production model used largely by online businesses since 2000 (Brabham, 2008a; Howe, 2006a, 2008). The success of the crowdsourcing model depends on the assumption that online communities have ‘‘collective intelligence’’ (Levy, 1995/1997) or ‘‘crowd wisdom’’ (Surowiecki, 2004) [1]. Crowd-reporting is now embedded in the processes of NGOs, companies, and governments to ensure that “the final user is satisfied with the product”.
After a few years of experimentation, it seems that crowdsourcing is not an entirely reliable source of
information, as the public is tempted to give “wrong” answers to questions depending upon their
context, their needs or their understanding of the problem. If we put aside the marketing and hype around crowdsourcing and focus on reality, the questions to be answered are: how can we interpret the information we receive from the crowd, and how can crowdsourcing be an effective tool of change?
I. AN INTRODUCTION TO LYING
Lying can be simply defined as “Not telling the truth” [2], but a lie is not always the opposite of the truth. In a crowdsourcing project, when we collect data from the crowd we can see different types of “lies” from the public; a sketch of how these categories might be encoded for analysis follows the list.
Error - a lie by mistake. The person believes they are being truthful, but what they are saying is not
true.
Omission - leaving out relevant information. This is easier and less risky, as it doesn’t involve “inventing” any stories; it is passive deception, and less guilt is involved.
Restructuring - distorting the context. Saying something in sarcasm, changing the characters, or altering the scene.
Denial - refusing to acknowledge a truth. The extent of denial can be quite large—they may be lying
only to you just this one time or they may be lying to themselves.
Minimisation - reducing the effects of a mistake, a fault, or a judgment call.
Exaggeration - representing something as greater, better, more experienced, or more successful than it is.
Fabrication - deliberately inventing a false story. [3]
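Taken together, these categories form a small taxonomy, and one practical way to use it is as a tagging scheme applied to crowd reports during qualitative review. The sketch below is illustrative only; the LieType and CrowdReport names and the example report are assumptions for this paper, not artefacts of the projects discussed.

```python
from dataclasses import dataclass, field
from enum import Enum

class LieType(Enum):
    """The seven types of 'lie' described above."""
    ERROR = "error"                  # a lie by mistake
    OMISSION = "omission"            # relevant information left out
    RESTRUCTURING = "restructuring"  # context distorted
    DENIAL = "denial"                # a truth not acknowledged
    MINIMISATION = "minimisation"    # effects played down
    EXAGGERATION = "exaggeration"    # effects played up
    FABRICATION = "fabrication"      # a story invented outright

@dataclass
class CrowdReport:
    """A single crowdsourced report, annotated during qualitative review."""
    text: str
    source: str
    suspected_distortions: list = field(default_factory=list)

# An analyst flags one (invented) report as a likely exaggeration during review.
report = CrowdReport(text="Hundreds were turned away from the polling station",
                     source="SMS observer (illustrative)")
report.suspected_distortions.append(LieType.EXAGGERATION)
```

Tagging of this kind does not establish intent; it simply records which distortion an analyst suspects, so that reports can later be weighed against their context.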
It is understandable to think that humans always have the best intentions and that they do not “lie”
knowingly, especially on issues that affect their lives. However, given the difficulty of establishing an intent to lie or to tell the truth, information received during, say, an election, a crisis, or a humanitarian emergency cannot be properly analysed without being linked to the context in which its author is located and to the author's engagement with the initiative.
Consequently, it is important that we first try to determine the context in which this information was
generated and define its level of “truth” before starting a qualitative data analysis.
II. THE CONTEXT OF CROWDSOURCING DATA
When we try to collect information from the crowd, we normally expect concrete results. This is
completely logical because our actions will be based on those results. Take the case of asking the public to report fraud or violence in an election: do we really know what fraud means to them? Do we have the same understanding of violence?
“Working with local partners works best” is one of the main messages from a first crop of
comprehensive reviews of conventional social accountability initiatives that seek to engage citizens in
reporting and monitoring functions (IDS 2011) [4]. However, it does not automatically mean that the
citizens who reported told the “truth”. Let me give two examples to illustrate the challenge. The first
involves a project called “Christmas for Street Children”, which aimed to give them gifts at the festive
period. The second concerns monitoring of the 2013 election in Cameroon.
Christmas For Street Children
In the “Christmas for street children” project, the first step was to identify what the children really needed as priority gifts. The key to this was to understand what had brought them onto the street. A group of around
30 street children was identified and we undertook some interviews to prioritise their needs and listen
to their stories.
Most of the stories were extremely sad, and the top three things that they requested from us were:
- Money: They went on to the street because their parents were poor or they were orphans or
they needed money to return home
- A secure place to sleep: On the street, young children are influenced by older children and
forced into petty crime and thievery
- Medication: They are always sick and need medication (or money) for their treatment
Although these requirements were all highly important, we later discovered that they were not the key
priorities. After cross-checking several sources of information, it turned out that the main priority
was soap! While they might be able to survive the cold, violence and the dangers of the street, they
cannot survive the awful skin diseases that are very difficult to treat.
They interpreted our question “what do you most urgently need?” as an invitation to express all of
their needs. The children gave us the information that they thought we could respond to, or that they would like us to respond to, not necessarily what they urgently needed. Eventually, we identified the right
information by spending some days with them on the street and talking to people who interact with
information by spending some days with them on the street and talking to people who interact with
them regularly.
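The cross-checking itself can be thought of as a simple corroboration count: a need that recurs across independent sources (interviews, direct observation, people in regular contact with the children) carries more weight than one voiced in a single channel. The sketch below only illustrates that idea; the source names and lists are invented for the example and are not data from the project.

```python
from collections import Counter

# Needs voiced per independent source; the contents are illustrative only.
sources = {
    "child interviews":   ["money", "secure place to sleep", "medication"],
    "street observation": ["soap", "medication", "secure place to sleep"],
    "regular contacts":   ["soap", "food", "medication"],
}

# Count in how many independent sources each need appears.
corroboration = Counter(need for needs in sources.values() for need in set(needs))
for need, n in corroboration.most_common():
    print(f"{need}: corroborated by {n} of {len(sources)} sources")
```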
The risk of this kind of misinterpretation is multiplied with increased use of technology,
where there is even less opportunity for engagement, interaction, discussing answers and
contextualisation of the information. While it is clear that we need to understand the context of the
target group to better design our interventions, it is not necessarily clear that they also understand
what we are doing and why we are doing it. Thus, there will be a big gap in mutual understanding and
expectations, with the target’s perception of the initiative strongly influencing their answers. In this
way, the answers are not “neutral” and the responses are not “correct”, but reflect the situation and
expectations as to what will potentially follow.
III. CIVIC ENGAGEMENT IN A CROWDSOURCING PROJECT
Let’s move to election monitoring, a type of civic engagement which is even more focussed on using
the crowd to bring about change; change which affects the whole of society, not just those who are
invited to express their “needs”.
Crowdsourcing has been adopted as a key method for election monitoring. Thousands of
crowdsourcing projects have been launched over the last decade, but only a handful of them have
been successful; by “successful”, we mean collecting large amounts of accurate data which can help
determine whether the election was “free and fair”. Crowdsourcing for civic projects needs to ensure that citizens are committed to participating and that they act in line with the project's aims.
What can happen if there is not adequate engagement of the crowd during an election monitoring
project? I will try to answer this through my second example – election monitoring in Cameroon in
2013 – which, unfortunately, is on the long list of failures.
Election Monitoring In Cameroon 2013
In a country where participation in the political process is too often viewed as a boring and
burdensome task for citizens [5], the challenge was not only to observe the election but also to create
enthusiasm among observers. To overcome these challenges we designed a system of simple,
unambiguous questions which would be sent to observers at timed intervals, thereby effectively
“neutralising” the possibility for different subjective interpretations and overcoming lack of
engagement or motivation. Their answers would be sent by SMS.
The questions sent to 1,373 verified recipients [5] were as follows (a sketch of the timed dispatch follows the list):
- “Did the president of the local commission open the voting office on time, in the presence of other members of the commission, and observers and voters already present?” and “did the office open at 8am?” These questions were sent between 8 and 9am.
- “Are the representatives of political parties and observers free to do their work?” and “were registered voters prevented from voting?” These questions were sent between 11am and 12pm.
- “Were irregularities or disturbances observed?” This was sent between 2 and 3pm.
- “Did the offices close at 6pm?”, “were the representatives from political parties and observers allowed into the voting office after the official closure of voting?”, and “are the representatives from political parties and observers allowed to verify the counting operations?” These were asked between 6 and 7pm.
- Finally, the questions “were there irregularities during the counting operation?” and “did the publication of results follow electoral law?” were sent between 9 and 10pm.
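One way such a timed questionnaire could be implemented is as a static schedule of dispatch windows, with each question sent to every verified recipient when its window opens. The following is a minimal sketch under assumed names; send_sms stands in for whatever SMS gateway the project actually used, and the question wording is abridged from the schedule above.

```python
from datetime import time

# Dispatch windows and their (abridged) questions, per the schedule above.
SCHEDULE = [
    (time(8, 0),  ["Did the president of the local commission open the voting office on time?",
                   "Did the office open at 8am?"]),
    (time(11, 0), ["Are party representatives and observers free to do their work?",
                   "Were registered voters prevented from voting?"]),
    (time(14, 0), ["Were irregularities or disturbances observed?"]),
    (time(18, 0), ["Did the offices close at 6pm?",
                   "Were party representatives and observers allowed in after the official closure?",
                   "Are they allowed to verify the counting operations?"]),
    (time(21, 0), ["Were there irregularities during the counting operation?",
                   "Did the publication of results follow electoral law?"]),
]

def send_sms(recipient: str, message: str) -> None:
    """Placeholder for an SMS gateway call; the real transport is not described here."""
    print(f"-> {recipient}: {message}")

def dispatch(window_start: time, recipients: list) -> None:
    """Send every question belonging to the window that opens at window_start."""
    for start, questions in SCHEDULE:
        if start == window_start:
            for recipient in recipients:
                for question in questions:
                    send_sms(recipient, question)

# Example: at 8am the first two questions go out to all verified observers.
# dispatch(time(8, 0), verified_recipients)
```

Fixed windows and closed questions were a deliberate design choice here: they were meant to remove room for subjective interpretation, at the cost of any richer, free-form reporting.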
The dataset created from the answers by the observers and third parties had 823 entries in total. This
was drastically lower than expected. We saw a slow and steady decrease in respondents texting in their observations until about lunchtime, followed by a dramatic fall. To the last question, “did the publication of results follow electoral law?”, we received no answers at all.
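To make such a drop-off visible, the stored answers can be bucketed by arrival hour and expressed as a share of the 1,373 verified recipients. A minimal sketch, assuming each stored answer carries an ISO-format received_at timestamp; the record layout is hypothetical, not the project's actual data model.

```python
from collections import Counter
from datetime import datetime

def response_rates(answers, total_recipients=1373):
    """Share of verified recipients answering in each hour of election day,
    exposing the decline in participation over the course of the day."""
    per_hour = Counter(datetime.fromisoformat(a["received_at"]).hour for a in answers)
    return {hour: count / total_recipients for hour, count in sorted(per_hour.items())}

# e.g. response_rates([{"received_at": "2013-09-30T08:14:00"}, ...])
```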
Based on those results, it was impossible to say on the basis of the crowdsourced data whether the election was free and fair. Follow-up interviews were held with 30 respondents in order to find out why the trained observers did not respond and meet expectations about their participation. On the surface, the problems common to all were that they had not received payment, that they did not want to use their own credit for SMS and, most importantly, that they did not like using SMS to communicate. This was entirely different from how we envisage the use of technology in elections, and it contradicted how we think of technology as liberating for crowdsourcing. Can we really say, then, that SMS was the wrong channel for collecting reports, or was it simply used as a pretext for not having observed properly?
CONCLUSION
To “lie” is a part of the human condition and we know that it happens frequently. As technology has
changed how we work together, it has also changed how we perceive each other. While we can give
the benefit of the doubt to the crowd, we need to identify the level of truth in what the crowd expresses, according to the context and to the crowd's understanding of our mission.
Coming from a peaceful environment into a harsh or humanitarian survival context and trying to
assess the needs of children, for example, is very difficult and can lead to false results, as you can
easily misunderstand the information you receive and potential recipients will provide information
that they think will lead to the greatest benefit. Just as this problem exists without the use of
technology – traditional methods of soliciting information – so it exists with technology. Technology
does not solve the problem. Indeed, relying on the information received through use of technology can
be even less valid, as the crucial elements of personal interaction and contextualisation of information can be reduced. Even if we succeed in building a clear understanding of each other, we still need to ensure that people are really committed to participating in the initiative, especially in civic projects, where you cannot force citizens to participate even when you can provide payment.
A participative approach is certainly the best one for ensuring that we capture all relevant information
that we need to drive change, but it is essential that we also collect data about the environment in which that information was gathered, to build a three-dimensional representation of the problem. Thus, when using data and technology, we need to consider how the target perceives us and what we can do to engage them in solving the issue. Further research, which we call “The role playing game” [6], is ongoing to understand how to represent an issue and its associated data within its context. If we want to make
accurate decisions – decisions which often affect people’s lives in important ways – it is essential that
these are based on accurate data, accurately interpreted. More of such research is urgently needed if
we are to ensure that we are able to crowdsource truth rather than lies, even if such “lies” are not
necessarily the opposite of the truth.
References
[1] https://dbrabham.files.wordpress.com/2012/09/brabham-2012-jacr-motivations-for-participation-bw.pdf
[2] http://www.oxforddictionaries.com/definition/english/lying
[3] http://quickbase.intuit.com/blog/2012/03/19/know-when-someone-is-lying-7-types-of-lies/
[4] Dieter Zinnbauer, “Crowd-sourcing corruption: some challenges, some possible futures”, paper for Internet, Politics, Policy 2014: Crowdsourcing for Politics and Policy.
[5] Joseph Pollack, CAMEROON_ICE-paper, 2014.
[6] http://beatricemartini.it/blog/ftmtech-transparency-game/
