MCN 2013 (ffrench) From Documentation to Discovery: Preservation Photographic Imaging Leaps from the Illustrative to the Quantitative

During this panel presentation, information was shared about a collaborative project between the Yale University Art Gallery and the Yale Computer Science department. Staff established that significant imaging data, potentially crucial to the work of restoring damaged paintings, could be improved by leveraging the combined strengths of multiple modalities. We therefore aimed to undertake a collaborative exploratory project, with the assistance of postdoctoral researchers in the Computing and the Arts area of the Computer Science department at Yale, to design new software that would allow these modalities to be used together.

For further information contact the presenter.

  • My name is John ffrench and I am the Director of Visual Resources at the Yale University Art Gallery. I oversee the imaging department, which photographs the collection as well as events, exhibitions, etc., and I also oversee the Rights and Reproductions office, which handles the dissemination of images for external requests. In the project I am discussing today, I was more of an organizing participant, an intermediary if you will, between the conservation staff and the computer science group. It is important to know a few things before we begin: I am NOT a computer scientist, though I can claim several as good friends, and I am not a conservator, though my group does provide treatment photography for the conservation department. I also happen to be married to a conservator, so that helps!
    Recently, new two- and three-dimensional imaging modalities have been found useful in gaining insight into the restoration of damaged paintings. Non-invasive imaging is particularly useful in restoration because it provides extensive information about a work without physical contact. However, combining the results of different modalities is extremely difficult, and conservators generally use information from each mode in isolation. In this project, software was developed to overcome the barriers to combining data and to create an intuitive interface for conservators to examine works. The software allows the conservator to combine images in a common view and to identify the same region of the work in multiple images simultaneously. It further allows the conservator to combine data values to identify materials and types of damage. The methods implemented build on existing ideas from medical imaging, an area that also requires the combination of different modalities. The software is distributed in modules as open source. The open-source model has proven successful both in making software freely available to users and in allowing other developers to expand the capabilities of the software without having to "reinvent the wheel."
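The "common view" described above can be sketched abstractly: once two modalities (say, a color photo and an X-ray) have been registered, a single transform maps a region picked in one image to the same physical spot on the painting in the other. The following is a minimal illustrative sketch, not Hyper3D's actual code; the transform coefficients are invented for the example.

```python
# Illustrative sketch (not Hyper3D's actual code): after registration,
# one affine transform relates pixel coordinates in two modalities.
# The coefficients below are invented for illustration only.
def map_point(x, y, a, b, tx, c, d, ty):
    """Apply the affine transform (x, y) -> (a*x + b*y + tx, c*x + d*y + ty)."""
    return a * x + b * y + tx, c * x + d * y + ty

# Hypothetical photo-to-X-ray transform: half scale, small offset.
photo_to_xray = (0.5, 0.0, 10.0, 0.0, 0.5, 20.0)

# A region picked at (100, 40) in the photo lands here in the X-ray:
print(map_point(100, 40, *photo_to_xray))  # -> (60.0, 40.0)
```

In practice the transform would be estimated from corresponding points chosen in both images, but the lookup step, mapping one coordinate system into another, is this simple once registration is done.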
  • In an interdisciplinary collaboration, art and computer imaging experts from the Yale University Art Gallery and the Department of Computer Science began a project in 2011 to examine selected Early Italian panel paintings, combining a variety of imaging techniques including digital photography, 3D scanning, tomography, and a novel form of photography called Polynomial Texture Mapping (PTM). At that time no software existed that allowed the various imaging modalities to share common coordinate systems; for example, it was not possible to overlay the PTM data on the correct corresponding section of a 3D model. Irma Passeri, a conservator at the Art Gallery's laboratory, working with Holly Rushmeier, Professor of Computer Science, established that significant imaging data, potentially crucial to the work of restoring damaged paintings, could be improved by leveraging the combined strengths of multiple modalities. Professor Rushmeier therefore aimed to undertake an exploratory project in collaboration with Ms. Passeri, with the assistance of postdoctoral researchers in the Computing and the Arts area of the Computer Science department at Yale, to design new software that would allow these modalities to be used together. When fully refined and tested, the software application was made available as an open-source product.
  • In 2011, while there were means to display various types of conservation images, there was no way for a conservator to do comparative analysis of different file types without using several programs. There were certainly projects such as the Ghent Altarpiece project, but these were more a means of public education than research tools. Yale had also developed West Campus, an off-site arts and sciences area that has, or will have, shared and collaborative imaging labs as well as conservation labs. The participants in those spaces were looking for new ways to create cross-campus collaboration, and a project like this was a good test bed.
  • In short, the project aimed to provide a means for varying images to be overlaid with one another in a computer environment, to better enable conservators and curators to study works of art. Image sources included traditional photography (historic and modern), multispectral imaging, UV, IR, X-ray, CAT scans, PTM, and 3D laser scans, to name a few. This was seen as an exciting collaboration between the Computer Science department and one of the cultural institutions on campus, and as the start of many more campus collaborations. An initial one-year grant of $80,000 was awarded, covering the acquisition of equipment for the capture and research of objects (a NextEngine 3D scanner, materials to build a hyperspectral camera, and computer systems to store data and build out the software interface), plus funds to cover 50% of the salary of a programmer/imaging scientist. The Yale University Art Gallery provided the object(s), studio space, and 20% of the staff time of a conservator and an imaging specialist. Capture took place one day a week for six months; on the other days the data was compiled and the wire-framing and coding of the software began. After the core data was captured, the remaining six months were spent creating the program and writing the user-interface documentation, as well as a paper for a computer science conference held in the UK (VAST 2012). Little time was spent on usability studies.
  • I will attempt to run the software at the end of this talk, but for now I will step you through a few of the features of the program. We all know how live-demo scenarios often go, and this program is processor intensive. The program is Mac/PC based, though admittedly it performs best on a PC.
  • There has been limited demonstration of the product to the audience it was initially destined for. At last year's MCN it was shown to a few attendees (hence the interest and the request to present our findings this year). It was going to be presented at IS&T, but a conflict of interest required us to pull out at the last minute.
  • The source code and supporting documentation are posted on SourceForge, and a collaborative paper on the project was presented at the recent VAST conference. Demonstration of the product to its intended audience has been limited; it was going to be presented at IS&T, but a conflict of interest in reporting the same findings announced at VAST required us to pull out at the last minute.
  • Currently the program is being further developed with new computer science programmers, and the focus is on a new imaging project to study medieval manuscripts and the pigments used in them. Through that project, additional tools and features are being added. These new tools, while potentially useful to the initial participants, are aimed more at the needs of the second-phase supporters of the project than at further addressing the core needs of the original group. Unfortunately, the initial partners are not using the program as intended; the new direction of the program is more in support of a previously unknown need.
  • WHAT NEXT? Projects take time and money to complete, but we don't always build usability testing or promotion of products into project time. It is an important step to factor in. The next step is getting the software into the hands of more test users, ideally conservators who have the need but can also provide valuable feedback and suggest further development. While we are starting to see some conservators and imaging scientists express interest in the program, sharing it through conservation circles such as AIC would be the next logical step, and/or finding partners outside of Yale to collaborate on the development of such programs. Ideally, working more closely with another group interested in, or working toward, similar ends would be logical. In recent years Yale has been keen to openly share information and resources and is looking for wider collaboration, but we as a community need to find better ways to raise awareness of such programs.

Presentation Transcript

  • Novel Software to Improve Three Dimensional Imaging of Works of Art: Developing Open-Source Software for Art Conservators. John ffrench, Director of Visual Resources, Yale University Art Gallery. Museum Computer Network, Montreal, Canada. Friday, November 22nd, 2013, 3:30pm–5:00pm.
  • Closer to Van Eyck: Rediscovering the Ghent Altarpiece http://closertovaneyck.kikirpa.be
  • 3D Models: Hyper3D can read the following three-dimensional file formats: Wavefront OBJ (.obj), Polygon File Format (.ply), Virtual Reality Modeling Language (.wrl). 2D Files: High Dynamic Range (.exr), Portable Network Graphics (.png), Bitmap (.bmp), Joint Photographic Experts Group (.jpg), DICOM CT data (.dcm).
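The format list above amounts to routing each opened file, by extension, to either a 2D or a 3D reader. A minimal sketch of that dispatch, with hypothetical names; this is illustrative, not the actual Hyper3D loader code.

```python
from pathlib import Path

# Illustrative sketch (not the actual Hyper3D loader): route a file by
# extension to the kind of reader it needs, mirroring the format list.
FORMATS_3D = {".obj", ".ply", ".wrl"}
FORMATS_2D = {".exr", ".png", ".bmp", ".jpg", ".dcm"}

def classify(path):
    """Return which kind of reader a file needs, based on its extension."""
    ext = Path(path).suffix.lower()
    if ext in FORMATS_3D:
        return "3d"
    if ext in FORMATS_2D:
        return "2d"
    raise ValueError(f"unsupported format: {ext}")

print(classify("panel_scan.PLY"))   # -> 3d
print(classify("xray_slice.dcm"))   # -> 2d
```

Lower-casing the suffix keeps the dispatch case-insensitive, which matters for files arriving from different capture devices and operating systems.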
  • Screenshot of the software with the two different example objects. Four types of image data are loaded (from the top left): a 3D scan model, a CT 2D stack visualization, the color visualization of a multispectral image with eight spectral channels (the spectral plot widget at the bottom right shows the spectral reading of the red region), and a volume rendering of the polychrome panel. The right-hand column shows the light controls for the 3D model and volume rendering visualization (key/fill light), the CT stack data navigator, and the volume rendering options. At the bottom of the column, the multispectral spectrum is plotted for the surface point under the mouse pointer.
  • (a) The basic toolbar interface for convenient access to functions. (b) Widgets for controlling the intensity of the key/fill virtual lights. (c) CT medical imaging data navigation options for stack and volume rendering.
  • Medical Imaging Hyper3D is capable of opening medical images that are compliant with the Digital Imaging and Communications in Medicine (DICOM) standard.
  • Current Status of the Software (Left, Middle, Right Columns: Hyperspectral 2D Imaging, 3D Scanning, Controls and Data Set Details)
  • http://sourceforge.net/projects/hyper3d/ Min H. Kim, Holly Rushmeier, John ffrench, Irma Passeri (2012), "Developing Open-Source Software for Art Conservators," VAST 2012, Brighton, UK, 19–21 November 2012, pp. 97–104 (received a Best Paper Award).
  • Spectral Annotation: a new tool within Hyper3D to visualize and export annotations about multispectral data.
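The slide above describes exporting annotations tied to multispectral readings. As a rough sketch of what such an export could look like, assume an annotated pixel in an eight-channel cube whose spectrum is written out as one CSV row. The data layout, names, and tiny synthetic cube here are invented for illustration; this is not Hyper3D's actual export code.

```python
import csv
import io

# Hypothetical sketch of a spectral annotation export (not Hyper3D's
# actual code): record the 8-channel spectrum read under an annotated
# pixel as one CSV row. The cube and values below are synthetic.
H, W, C = 4, 4, 8                                   # height x width x channels
cube = [[[0.0] * C for _ in range(W)] for _ in range(H)]
cube[2][1] = [round(0.1 * (i + 1), 1) for i in range(C)]  # fake spectrum

def export_annotation(cube, row, col, label, out):
    """Write one annotation row: label, pixel coords, per-channel values."""
    writer = csv.writer(out)
    writer.writerow([label, row, col, *cube[row][col]])

buf = io.StringIO()
export_annotation(cube, 2, 1, "red pigment", buf)
print(buf.getvalue().strip())  # -> red pigment,2,1,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8
```

A flat text format like this is easy for conservators to open in a spreadsheet and compare against reference spectra for known pigments.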
  • Acknowledgements: We are grateful for the guidance and encouragement of this project's principal investigator, Prof. Holly Rushmeier. We would also like to thank the Seaver Institute for their generous support of the software's development. Initial design and coding: Min H. Kim (Yale University post-doc; KAIST, Computer Science Dept., South Korea). Further software development: David Tidmarsh, Ruggero Pintus. Contact: John.ffrench@yale.edu. John ffrench, Director of Visual Resources, Yale University Art Gallery. Museum Computer Network, Montreal, Canada, Friday, November 22nd, 2013, 3:30pm–5:00pm.