BSIP: Backscatter Software Intercomparison Project - Preliminary Evaluation of Multibeam Backscatter Consistency through Comparison of Intermediate Processing Results

Authors: M. Malik, G. Masetti, A.C.G. Schimel, M. Roche, M. Dolan, J. Le Deunf
The presentation was given at the US Hydro 2019 Conference.
Abstract:
Although backscatter mosaics of the seafloor are now routinely produced from multibeam sonar data, significant differences have been observed in the products generated by different software when processing the same dataset. This represents a major limitation to a number of possible uses of backscatter mosaics, including quantitative analysis, monitoring seafloor change over time, and combining results from multiple data sources. A recently published study from the Backscatter Working Group, established under the auspices of GEOHAB (http://geohab.org/) and consisting of more than 300 researchers representing academia, government, and industry (Lurton et al., 2015), also highlighted this issue. The study recommended that “to check the consistency of the processing results provided by various software suites, initiatives promoting comparative tests on common data sets should be encouraged […]”. With the aim of facilitating such a comparison, the Backscatter Software Intercomparison Project (BSIP) was launched in May 2018. Software developers were invited to participate actively in BSIP and discuss how the inconsistencies might be overcome or, at least, made more transparent. To date, the developers of four software packages (CARIS SIPS, MB Process, QPS FMGT, and SonarScope) have actively collaborated on this project, and other interested software vendors are encouraged to participate.

Since backscatter data processing is a complex and (as yet) non-standardized sequence of steps, the root causes of the observed differences in the end results derived using different software packages are difficult to pinpoint. It is thus necessary to obtain data at intermediate stages of the processing sequences. We provided software developers with several small datasets collected using different multibeam sonar models and asked them, at this initial stage of the project, to generate intermediate processing results focused on the output of the first stage of processing (i.e., the backscatter level as read by the software tools) as well as the fully processed results. Large differences between software outputs were observed. A major observation, even at this early stage of the project, was that, in the absence of accepted standards, different software packages have adopted different methods to generate the initial backscatter value per beam from the raw data (snippets), prior to starting the processing sequence. This initial difference is critical and hinders any comparison of the subsequent steps of backscatter processing. We conclude by presenting our plans for the next steps of the project, including working closely with commercial software vendors to find ways to overcome this limitation, as well as standardizing outputs and terminology.
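To make the idea of an intermediate-stage comparison concrete, the short sketch below shows one way per-sounding levels could be compared once two packages have exported them to a common table. This is a minimal illustration only: the CSV exchange format, the file names, and the column names (ping, beam, backscatter_db) are assumptions made here for clarity, not BSIP deliverables or vendor export formats.

```python
# Minimal sketch: compare the same intermediate level (e.g., BL0, "as read from
# datagram") exported by two packages as per-sounding tables.
# Assumed columns: ping, beam, backscatter_db. File names are hypothetical.
import pandas as pd

def load_level(path):
    """Load a per-sounding backscatter export, indexed by (ping, beam)."""
    df = pd.read_csv(path)
    return df.set_index(["ping", "beam"])["backscatter_db"]

def compare_levels(level_a, level_b):
    """Summarize the dB difference between two exports on common soundings."""
    diff = (level_a - level_b).dropna()  # align on (ping, beam), drop non-common soundings
    return {
        "n_common_soundings": int(diff.size),
        "mean_offset_db": float(diff.mean()),
        "std_db": float(diff.std()),
        "p05_db": float(diff.quantile(0.05)),
        "p95_db": float(diff.quantile(0.95)),
    }

# Example: compare the BL0 level from two packages (placeholder file names).
stats_bl0 = compare_levels(load_level("package_A_BL0.csv"),
                           load_level("package_B_BL0.csv"))
print(stats_bl0)
```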


  1. BSIP: Backscatter Software Intercomparison Project - Preliminary Evaluation of Multibeam Backscatter Consistency through Comparison of Intermediate Processing Results
     US Hydro Conference, March 2019, Biloxi, MS, USA
     Project Facilitators: Mashkoor Malik, NOAA, USA; Giuseppe Masetti, UNH CCOM/JHC, USA; Alexandre Schimel, NIWA, New Zealand; Marc Roche, ECONOMIE, Belgium; Margaret Dolan, NGU, Norway; Julian Le Deunf, SHOM, France
     Project Collaborators: SonarScope (IFREMER); FMGT (QPS); HIPS & SIPS (Teledyne CARIS); MB Process (Curtin University, CMST)
  2. BSIP → Rationale
     Significant differences in backscatter products generated by different software using the same dataset.
     Major limitation for: quantitative analysis; combining multiple sources; time-monitoring of seafloor changes; data quality validation.
  3. Example of end user frustration (mosaics comparison)
     The end result of a backscatter mosaic offers little insight into what went wrong and where.
     Lucieer et al. (2017); Roche et al. (2018)
  4. BSIP → Processing Steps
     BL0: level “as read from datagram”
     BL3: level after all corrections applied, before mosaicking
     BSIP requested intermediate levels (provided by software developers) in addition to the currently produced results.
  5. BSIP dataset
  6. EM 2040 Bathymetry: software results match!
  7. EM 2040 Backscatter: mosaics compared at BL0 and BL3 for CARIS, FMGT, and SonarScope (mosaic levels annotated at -11.5 dB, -15 dB, and -17 dB)
  8. EM 2040 Backscatter
  9. Large differences are not an isolated case! Also observed with EM 302, EM 710, and RESON 7125 datasets.
  10. Intermediate processing stages enable further insights (difference maps for CARIS, FMGT, and SonarScope; a sketch of these comparisons follows the transcript)
      BL3 - BL0 per package → the processing approach between BL0 and BL3 differs between packages
      (CARIS BL3 - CARIS BL0) - (FMGT BL3 - FMGT BL0), the difference in processing, compared with (CARIS BL0 - FMGT BL0), the difference in starting value → differences in starting value explain the difference in processed results for most of the soundings (pairs shown: CARIS/FMGT, FMGT/SonarScope, CARIS/SonarScope)
  11. Conclusions
      ● Intermediate processing stages provide insights into differences between software outputs
        ○ Differences in the level “as read in the datagrams” (BL0) were a surprise
      ● A variety of processing approaches are available
        ○ Improved tools are needed to understand the impact of one choice vs. another
      ● Next steps
        ○ Round 2 processing is in progress to provide other intermediate stages (corrections)
      ● We need your help!
        ○ Users: demand that results processed by different software agree with each other
        ○ Software developers: work together to implement agreed best practices for backscatter processing
        ○ BSWG: provide a platform to facilitate these discussions
  12. Questions?
      Alexandre C. G. Schimel (alexandre.schimel@niwa.co.nz)
      Mashkoor Malik (mashkoor.malik@noaa.gov)
      Marc Roche (Marc.Roche@economie.fgov.be)
      Giuseppe Masetti (gmasetti@ccom.unh.edu)
      Margaret Dolan (Margaret.Dolan@ngu.no)
      Julian Le Deunf (julian.le.deunf@shom.fr)
      Thanks to the software developers.
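The comparisons named on slide 10 can be reproduced on such per-sounding exports with a few lines of the same kind. The sketch below is again hypothetical: it reuses the assumed table format from the earlier example, the file names are placeholders, and none of this is an official BSIP or vendor tool.

```python
# Hypothetical sketch of the slide-10 differencing, assuming each package exports
# per-sounding BL0 and BL3 tables with ping, beam, and backscatter_db columns.
import pandas as pd

def load_level(path):
    """Load a per-sounding backscatter export, indexed by (ping, beam)."""
    return pd.read_csv(path).set_index(["ping", "beam"])["backscatter_db"]

# Illustrative file names for two packages, "A" and "B" (e.g., CARIS and FMGT).
a_bl0 = load_level("package_A_BL0.csv")
a_bl3 = load_level("package_A_BL3.csv")
b_bl0 = load_level("package_B_BL0.csv")
b_bl3 = load_level("package_B_BL3.csv")

# 1) BL3 - BL0 per package: the net effect of each package's processing chain.
proc_a = a_bl3 - a_bl0
proc_b = b_bl3 - b_bl0

# 2) (A BL3 - A BL0) - (B BL3 - B BL0): how differently the two chains process
#    the same soundings, independent of their starting values.
processing_difference = proc_a - proc_b

# 3) A BL0 - B BL0: the difference in starting value ("as read from datagram").
starting_difference = a_bl0 - b_bl0

# If the starting difference dominates the end-result difference (A BL3 - B BL3),
# then the BL0 discrepancy, not the corrections, explains most of the disagreement.
end_difference = a_bl3 - b_bl3
summary = pd.DataFrame({
    "start_diff_db": starting_difference,
    "processing_diff_db": processing_difference,
    "end_diff_db": end_difference,
}).dropna()
print(summary.describe())
```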
