Supporting PDF accessibility evaluation: Early results from the FixRep project

This presentation reports results from a pilot study exploring automated formal metadata extraction in accessibility evaluation. We demonstrate a prototype created during the FixRep project that aims to support the capture, storage and reuse of accessibility information where available, and to approach the problem of reconstructing required data from available sources.


Transcript

  • 1. Supporting PDF accessibility evaluation: early results from the FixRep project. Andrew Hewson & Emma Tonkin (e.tonkin@ukoln.ac.uk)
  • 2. Introduction
    • UKOLN:
    • Based at the University of Bath in the UK, UKOLN is:
    • "A centre of excellence in digital information management, providing advice and services to the library, information and cultural heritage communities."
    • FixRep:
    • An 18-month project examining existing techniques and implementations for automated formal metadata extraction, with the aim of enabling metadata triage
  • 3. What is formal metadata?
    • Formal metadata:
    • Includes information such as filetype, title, author and image captions
    • Is mostly intrinsic to the document and its citation.
    • Could it include information of relevance to accessibility?
  • 4. What is accessibility?
    • Web accessibility means that people with disabilities can use the Web. ( http://www.w3.org/WAI/intro/accessibility.php )
    • capable of being reached;
    • capable of being read with comprehension;
    • easily obtained;
    • easy to get along with or talk to; friendly;
    • ( http://wordnetweb.princeton.edu/perl/webwn )
  • 5. PDF format
    • Web-based uses of relevance to digital libraries include, for example:
    • forms
    • printable versions of resources
    • pre-prints of papers and articles.
    • A very common format found in institutional repositories.
  • 6. Document accessibility
    • Can we aspire to a perfectly accessible repository?
    • Careful editing / repository management takes time and is labour intensive for administrators and users.
    • Finding a balance between quantity and quality, i.e. maximising usability of repository content, is the realistic goal.
    • Not strict validation, but support for user level review / triage.
  • 7. Research questions
    • What span of content appears in a document repository that enables user deposit?
    • Does this variation in document format imply a reduction in accessibility, what sort of reduction, to whom, and to what extent?
    • Is it possible for us to automatically identify issues that may be of particular concern, or for us to identify good practice where it is used?
    • Separating non-optimal features from show-stopper problems.
  • 8. Methodology #1: Prototype
    • A prototype has been developed for analysis of PDFs. This extracts information about the document in a number of ways:
    • Header and formatting analysis
    • Information from the body of the document
    • Information from the originating filesystem
    • Based on Unix tools, the prototype has been developed in Perl using pdfinfo, pdftotext, and pdfimages, as well as a number of CPAN modules.
    • It exposes a REST service API
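The project's prototype is written in Perl, but the core extraction step (running pdfinfo and parsing its key/value output) can be sketched in a few lines. The sketch below is illustrative only: `parse_pdfinfo` is a hypothetical helper, and the sample text mimics the typical `Key: value` lines pdfinfo prints, not output from the pilot's actual documents.

```python
# Hypothetical sketch: turn pdfinfo-style "Key: value" output into a dict.
def parse_pdfinfo(output: str) -> dict:
    """Split each 'Key: value' line; keys like Title, Producer, PDF version."""
    info = {}
    for line in output.splitlines():
        if ":" not in line:
            continue  # skip lines that are not key/value pairs
        key, _, value = line.partition(":")
        info[key.strip()] = value.strip()
    return info

# Sample text imitating pdfinfo output (invented values).
sample = """Title:          Example paper
Producer:       pdfTeX-1.40
Creator:        LaTeX
Pages:          12
PDF version:    1.4"""

meta = parse_pdfinfo(sample)
print(meta["PDF version"])  # → 1.4
```

In the real pipeline the text would come from invoking pdfinfo on each cached PDF rather than from a hard-coded string.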
  • 9. Methodology #2: Pilot Case Study
    • OPUS Repository (University of Bath)
    • Spidered site to identify PDFs
    • PDFs cached offline
    • Analysed via batch process
    • Responses placed in MySQL database
    • Data analysis process completed manually via SQL queries.
    • Automating the analysis process is a goal for future iterations of the project.
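The batch step above (store one record per analysed PDF, then query manually via SQL) can be sketched as follows. The project used MySQL; `sqlite3` is used here only to keep the example self-contained, and the table name, columns, and records are all illustrative, not the project's schema or data.

```python
import sqlite3

# Illustrative schema: one row of extracted metadata per processed PDF.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE pdf_metadata (
    path TEXT, pdf_version TEXT, producer TEXT, status TEXT)""")

# Invented example records: one success, one failure category from the pilot.
records = [
    ("a.pdf", "1.4", "pdfTeX-1.40", "ok"),
    ("b.pdf", None, None, "no-metadata"),
]
conn.executemany("INSERT INTO pdf_metadata VALUES (?, ?, ?, ?)", records)

# Manual analysis via SQL queries, as in the pilot case study.
ok = conn.execute(
    "SELECT COUNT(*) FROM pdf_metadata WHERE status = 'ok'").fetchone()[0]
print(ok)  # → 1
```

Counting rows by `status` is exactly the kind of query behind the processed/failed proportions reported in the results.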
  • 10. Results
    • Proportion of documents successfully processed
    • 80% were successfully batch processed with the results stored in the database
    • The 20% that failed exhibited two categories of errors:
    • No metadata was available for extraction
    • Format of file unsupported by toolset
  • 11. Results
    • XML Tag use
    • Small number of tags used (26)
    • Usage was consistent (average 21, mode 21)
    • Some ‘traditional’ tags were absent in most cases (author, title, etc.)
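The consistency figures above (average 21, mode 21) are simple summary statistics over per-document tag counts. The sketch below shows the computation with invented counts, not the pilot data; Python's `statistics` module does the work.

```python
from statistics import mean, mode

# Invented per-document tag counts, illustrating the summary the slides
# report (an average and mode of 21 over the pilot corpus).
tags_per_doc = [21, 21, 19, 22, 21, 20]

print(round(mean(tags_per_doc), 1))  # average tag count per document
print(mode(tags_per_doc))            # → 21, the most common count
```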
  • 12. Results
    • PDF Versions
    • Most popular version seems to be 1.4 – however, this might be attributable to the ‘Creator’ software used to generate the PDFs in the sample: in particular, the addition of a ‘cover sheet’ before the PDFs were added to the OPUS repository.
  • 13. Results
    • ‘Producer’ and ‘Creator’
    • These two tags both show disproportionate favouritism for two applications (compared with an expected normal distribution)
    • It is likely, as with the favoured PDF version, that this is an artefact of the cover sheet addition to the PDFs.
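The skew in ‘Producer’ and ‘Creator’ values is visible from a simple frequency tally. The sketch below uses invented values, not the pilot data, to show how a couple of dominant applications stand out; `collections.Counter` is enough.

```python
from collections import Counter

# Invented 'Producer' values illustrating the kind of skew observed:
# most documents come from one or two applications.
producers = ["pdfTeX-1.40", "pdfTeX-1.40", "Acrobat Distiller",
             "pdfTeX-1.40", "Acrobat Distiller", "GPL Ghostscript"]

counts = Counter(producers)
top, n = counts.most_common(1)[0]
print(top, n)  # → pdfTeX-1.40 3
```

If a cover-sheet tool rewrote the PDFs, its name would dominate such a tally regardless of the software that originally produced each document.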
  • 14. Discussion
    • The ‘cover sheet’ issue
    • As mentioned, a cover sheet has been prepended to many of the PDFs examined.
    • This might not seem to be an issue; however, as can be seen here, it may confuse automated systems, rendering the metadata virtually useless
  • 15. Conclusions
    • Good news! More tagged PDFs around than expected.
    • Bad news! We may be ‘shooting ourselves in the foot’ with additions like after-the-fact cover sheets. This may remove original metadata that could have been utilised for machine learning.
    • This prototype tool has already proved very useful, and we plan to develop it further.