On-farm testing at scale
Yosef Gebrehawaryat
Alliance of Bioversity International and CIAT
Crowdsourcing Training, 7-8 June 2021,
Addis Ababa
Why are we doing on-farm trials?
• Test varieties directly in target environments, under local
management practices and production conditions
• Test varieties for farmers’ and other end-users’ preferences
• Expose farmers to a range of varieties for diffusion
• Generate adequate variety recommendations for extension
But there are some problems with on-farm trials
• Fairly expensive: travel to sites, organize farmers, …
• Usually only a limited number of on-farm trials / limited scale, and
farmers in more remote rural areas are often excluded
• On-farm trials are still not very representative of the wide range of
target environments and diverse user groups
• Often relatively low data quality in spite of substantial efforts,
compared to on-station trials
We need more trials:
• To ensure that more target geographies are represented in the trials
• To compensate for lower data quality
But it needs to be manageable and not too expensive
Solution - direction
• Variety evaluation in the hands of farmers – cost reduction
• Farmers as motivated “citizen scientists” – invert incentives
• Rethink the statistics – should work for farmer observation
• Make it simple – little supervision and training
• Go digital – reduce errors, staff needs, ensure quick feedback
Wisdom of Crowds
(Galton, 1907, Nature)
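Galton's observation (the median of fair-goers' guesses of an ox's weight came within about 1% of the true 1198 lb) can be sketched with a quick simulation. This toy is not from the slides: the number of judges and their noise level are assumptions, chosen only to show that a crowd's median beats a typical individual guess.

```python
import random
import statistics

random.seed(1)

true_weight = 1198                 # dressed weight of Galton's ox, in lb
# 800 hypothetical judges, each guessing with independent noise
guesses = [random.gauss(true_weight, 75) for _ in range(800)]

crowd_estimate = statistics.median(guesses)
crowd_error = abs(crowd_estimate - true_weight)
# typical error of a single judge, for comparison
typical_error = statistics.median(abs(g - true_weight) for g in guesses)
```

With independent errors the median averages out individual noise, so `crowd_error` lands far below `typical_error`. The same logic motivates pooling many small farmer-managed trials rather than relying on a few closely supervised ones.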
Crowdsourcing
[Diagram: a big task is split into micro-tasks 1, 2, 3, … n, which are completed independently until the whole job is done]
Solution - ingredients
• Ranking as main approach for data collection: makes it easier
to assess the varieties and compare across sites
• Farms as Incomplete Blocks, instead of trying to replicate
everything on each farm (Atlin, 2002)
• Digital platform to streamline the process: for data collection
and analysis --> faster feedback to farmers
• Embrace variation in environment and crop management:
• not trying to control it, but trying to observe it
• not looking for an “average” or “general trend”, but for results
representative of the range of target environments
How does it work?
• Each farmer receives a random combination of three
varieties, out of a larger set, as an incomplete block
• Samples are balanced sequentially
[Diagram: ten farms, numbered 1-10, each planting an incomplete block of three varieties labelled A, B, C]
[Observation card: farmers mark, for each trait, the best and the worst variety, e.g. best/worst foliage and best/worst plant height]
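Because each package holds exactly three varieties, answering just "which was best?" and "which was worst?" per trait pins down the full ranking. A minimal sketch (the function name and labels are illustrative, not from ClimMob):

```python
def tricot_ranking(package, best, worst):
    """Derive a full ranking of a 3-variety tricot package from a
    best/worst answer: the unnamed variety must be the middle one."""
    middle = next(v for v in package if v not in (best, worst))
    return [best, middle, worst]

# A farmer says B had the best foliage and C the worst:
tricot_ranking(["A", "B", "C"], best="B", worst="C")  # → ['B', 'A', 'C']
```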
Each farmer’s best/worst answers give a ranking:
farmer 1: A > C > D
farmer 2: C > D > G
farmer 3: A > D > G
Combined with the Plackett-Luce model: A > C > D > G
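The per-farmer rankings can be combined into one overall order by estimating Plackett-Luce worth parameters. The sketch below uses basic MM updates (Hunter-style) with a fixed number of iterations; it is only an illustration, not ClimMob's implementation, and it leaves items that never win a comparison at worth zero.

```python
from collections import defaultdict

def plackett_luce(rankings, iters=20):
    """Estimate Plackett-Luce worths from full rankings (best first)
    with a simple MM loop; worths are normalized to sum to 1."""
    items = sorted({v for r in rankings for v in r})
    wins = defaultdict(int)                  # choice stages won by each item
    for r in rankings:
        for winner in r[:-1]:
            wins[winner] += 1
    w = {v: 1.0 for v in items}
    for _ in range(iters):
        denom = defaultdict(float)
        for r in rankings:
            for s in range(len(r) - 1):      # stage s chooses among r[s:]
                stage_sum = sum(w[v] for v in r[s:])
                for v in r[s:]:
                    denom[v] += 1.0 / stage_sum
        w = {v: (wins[v] / denom[v] if wins[v] else 0.0) for v in items}
        total = sum(w.values())
        w = {v: x / total for v, x in w.items()}
    return w

# The three farmers' rankings from the slide:
worth = plackett_luce([["A", "C", "D"], ["C", "D", "G"], ["A", "D", "G"]])
order = sorted(worth, key=worth.get, reverse=True)   # → ['A', 'C', 'D', 'G']
```

In practice the analysis runs inside ClimMob (the R PlackettLuce tooling), which also handles ties and covariates; the point here is only that partial rankings over overlapping incomplete blocks combine into one full ranking.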
Exercise
• Grab a pen and a piece of paper and write down:
• Who is the oldest?
• Who is the youngest?
Trump Zuma Erdogan
Exercise – Analysis
Trump: 72, Zuma: 76, Erdogan: 65
ClimMob platform
(common bean, Nicaragua)
Ranking - not a new solution
• Coe (2002) Analyzing rating and ranking data from participatory on-farm
trials, in Bellon and Reeves
• Simko and Piepho (2011) Combining phenotypic data from ordinal rating
scales in multiple plant experiments, Trends in Plant Science
• Halekoh and Kristensen (2008) Evaluation of treatment effects by ranking.
Journal of Agricultural Science
Ranking - advantages and disadvantages
+ Avoids drifting during the judgment process
+ Avoids different interpretations between judges of the scoring scale
+ Ranking is easier and faster to explain to participants than rating plus
judge calibration
– Ranking does not give an absolute zero or an absolute scale
Steinke et al. (2017) Agronomy for Sust. Devt.
10 steps
1. Define set of 8 to 12 promising varieties to evaluate, and multiply
2. Design tricot project, using free online software ClimMob
(www.climmob.net)
3. Recruit dedicated farmers who are interested in improving their farming
by getting to know new varieties
4. Prepare trial packages, with samples of 3 varieties in a randomized order,
as well as an observation card, and disseminate to participants
5. Participants plant received varieties separately in a mini-trial on their farm
6. Every participant is responsible for his/her trial, makes various easy
observations during growth and after harvest (e.g. highest/lowest bunch
weight; best/worst taste), and marks these on the observation card
7. Local facilitators collect data from participants
8. Implementers compile and analyze data from all trials, using ClimMob
9. Implementers feed back information to every participant: names of their 3
varieties, which variety is most suited for their farm, and where to get
more seed
10. Tricot is an iterative process: after every project cycle, researchers,
implementers and farmers together evaluate how the process may be
improved in the next cycle
Read more …
• Steinke, J., van Etten, J. and Mejía Zelan, P. 2017. The accuracy of farmer-generated data in an agricultural
citizen science methodology. Agronomy for Sustainable Development 37: 32.
• Steinke, J., and van Etten, J. 2017. Design and validation of “AgroDuos”, a robust and engaging method for
farmer-participatory priority setting in plant breeding. Journal of Crop Improvement, Online.
• Beza, E., J. Steinke, J. van Etten, P. Reidsma, K. Lammert, C. Fadda, S. Mittra. 2017. What are the prospects for
large-N citizen science in agriculture? Evidence from three continents on motivation and mobile telephone use
of resource-poor farmers participating in "tricot" crop research trials. PLoS ONE 12(5): e0175700
• van Etten J., Steinke J., van Wijk M.T. 2017. How can the Data Revolution contribute to climate action in
smallholder agriculture? Agriculture for Development 30, 7.
• van Etten, J., E. Beza, L. Calderer, K van Duijvendijk, C. Fadda, B. Fantahun, Y.G. Kidane, J. van de Gevel, A.
Gupta, D.K. Mengistu, D. Kiambi, P. Mathur, L. Mercado, S. Mittra, M. Mollel, J.C. Rosas, J. Steinke, J.G. Suchini,
K. Zimmerer. First experiences with a novel farmer citizen science approach: Crowdsourcing participatory
variety selection through on-farm triadic comparisons of technologies (tricot). Experimental Agriculture,
Online.
• Steinke, J., and J. van Etten. 2016. Farmer experimentation for climate adaptation with triadic comparisons of
technologies (tricot). A methodological guide. Rome: Bioversity International. (English and Spanish).
Implementation guide
Steinke, J., and J. van Etten. 2016. Farmer
experimentation for climate adaptation
with triadic comparisons of
technologies (tricot). A methodological
guide. Rome: Bioversity International.
(English and Spanish).
https://www.bioversityinternational.org/e-library/publications/detail/farmer-experimentation-for-climate-adaptation-with-triadic-comparisons-of-technologies-tricota-methodological-guide/
Step 1
Define set of promising varieties to evaluate, and multiply
Step 2 Design project
Step 3 Recruit farmers
Through cooperatives
Sampling with gender and equity considerations
Skill based?
Step 4 Prepare trial packages
Every farmer receives 3 out of 10-50 varieties
Farmers do not know their names at first (just “A”, “B”, “C”)
Sequentially balanced randomization
Example of randomization of 7 rice varieties
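One simple way to keep the assignment sequentially balanced (a sketch of the idea, not ClimMob's actual algorithm) is to always give the next farmer the varieties that have been handed out least so far, breaking ties at random:

```python
import random

def assign_packages(varieties, n_farmers, block_size=3, seed=42):
    """Sequentially balanced assignment: each new farmer receives the
    least-used varieties so far, shuffled into package order A/B/C."""
    rng = random.Random(seed)
    counts = {v: 0 for v in varieties}
    packages = []
    for _ in range(n_farmers):
        # least-assigned varieties first, with random tie-breaking
        pool = sorted(varieties, key=lambda v: (counts[v], rng.random()))
        block = pool[:block_size]
        rng.shuffle(block)             # random order within the package
        for v in block:
            counts[v] += 1
        packages.append(block)
    return packages, counts

# 7 rice varieties across 70 farmers: 70 × 3 / 7 = 30 packages each
packages, counts = assign_packages([f"V{i}" for i in range(1, 8)], 70)
```

The greedy least-used rule keeps the counts within one of each other at every step, so every variety accumulates roughly the same number of farms even if recruitment stops early.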
Step 5 Delivery to participants
Step 6 Farmers observe and record
Step 7 Field agents collect data
Multiple channels are possible:
- Telephone calls
- Farm visits
- Data collection through lead farmers
- Data collection during joint activities
Data entry through:
- Mobile phone app Open Data Kit (preferred) with GPS point
- Online application
- Spreadsheet (as little as possible)
Step 8 Compile and analyze data
Step 9 Discussion with farmers
Farmers learn names of
the 3 varieties
Full variety ranking
according to all farmers
Project partners in Ethiopia
Africa Research in Sustainable Intensification for the Next Generation
africa-rising.net
This presentation is licensed for use under the Creative Commons Attribution 4.0 International Licence.
Thank You


Editor's Notes

  • #20 Wisdom of Crowds! Although nobody knew the ages for sure, collectively we got them right
  • #41 Ethiopia