CMSC 197 Journal Reporting
A high-level 3D visualization API for Java and ImageJ

Published in: Technology, Education


  1. A high-level 3D visualization API for Java and ImageJ
  2. BACKGROUND. Current imaging methods such as Magnetic Resonance Imaging (MRI), confocal microscopy, Electron Microscopy (EM) or Selective Plane Illumination Microscopy (SPIM) yield three-dimensional (3D) data sets in need of appropriate computational methods for their analysis. Reconstruction, segmentation and registration are best approached from the 3D representation of the data set.
  3. BACKGROUND. Life sciences are experiencing an increasing demand for scientific image processing. Images are the primary data of developmental and cell biology, and their number is exploding with the availability of high-throughput and high-resolution technologies. The acquisition of large three-dimensional (3D) data sets, often as time series (4D), has become the new standard.
  4. BACKGROUND. The first step in the analysis of biological image data is its visual inspection. In addition to the general requirement for visualization, the unique characteristics of each data set may demand specialized analysis. The development of novel analytical tools is greatly facilitated by the existence of well-documented software libraries. These libraries must provide: (1) means to load and save any of the large diversity of image file formats; (2) implementations of computer vision algorithms; and (3) graphical user interfaces for data access by a human operator.
  5. BACKGROUND. We have identified a lack of accessible 3D/4D visualization software libraries for biological image processing. Numerous image processing packages exist, either commercial or open source. While end-users benefit from well-documented, special-purpose commercial applications, the development of custom analytical tools is better handled by open source packages.
  6. BACKGROUND. We have created a software library for 3D/4D visualization, with functions for surface extraction, volume rendering and interactive volume editing. Our library removes the complexity of creating and interacting with image volumes and meshes in a 3D environment. We have designed our library to enrich the core functionality of ImageJ, an open source image processing application. (Figure: 3D ultrasound.)
  7. BACKGROUND. Via ImageJ, our library has access to hundreds of biological image file formats. Over the years, the scientific community has contributed a very large number of ImageJ extensions, known as plugins, which provide readily accessible implementations of numerous computer vision algorithms. With our library, we empower the ImageJ scientific community to rapidly implement custom analytical tools for 3D/4D data sets, with a minimal investment of time and resources in handling the complex details of a hardware-accelerated 3D environment.
  8. IMPLEMENTATION. Our software library implements the concept of a three-dimensional scene ("3D scene") for the interactive visualization of biomedical image data. The 3D scene is a virtual 3D space in which any number of entities are hosted. These entities are volume renderings, isosurfaces and orthoslice sets. (Figure: three display modes supported by the framework. An example image volume containing an adult Drosophila brain, shown (a) as a volume rendering, (b) as orthoslices and (c) as an isosurface of its external contour.)
  9. IMPLEMENTATION. Volume renderings are representations of voxel data viewed from arbitrary angles, with transparency determined by the intensity values. Isosurfaces are meshes of triangles described by a list of vertices. Orthoslices are orthogonal planes that cut through an image volume. Each of these entities, or objects, has a number of properties, such as transparency, color and a unique name. The 3D scene that hosts all objects has its own properties: a zooming level, an origin of viewer coordinates, and a scene background. (Figure: examples of an isosurface and orthoslices.)
  10. IMPLEMENTATION. For efficient rendering in computer graphics, we chose Java 3D: a low-level, hardware-accelerated software library. Java 3D has the further advantage of being implemented for Java, thus enabling Java applications like ImageJ to interoperate with the graphics card of a computer via either an OpenGL or DirectX low-level native layer.
  11. Introduction to Java 3D nomenclature
  12. JAVA 3D. Java 3D provides a collection of object templates, referred to as classes, each of which represents a node of the scene graph. The most relevant classes are:
     BranchGroup: a node capable of being the root of several subtrees.
     TransformGroup: a node that transforms the spatial representation of its enclosed subtree.
     Switch: a node that toggles the visibility of its subtree nodes on and off.
  13. JAVA 3D.
     Shape3D: a node representing a displayable object. The visualization of a Shape3D is specified by its Appearance and Geometry.
     Geometry: defines the geometry of the corresponding Shape3D, i.e. its vertex coordinates.
     Appearance: defines several attributes of a Shape3D regarding its visualization, such as color, transparency and texture.
  14. JAVA 3D – IJ3D. Our library is composed of about ten different modules. We review here the core module, named ij3d, which interacts with all other service modules. The core ij3d module contains two principal classes, namely Content and Image3DUniverse.
  15. JAVA 3D – IJ3D. The Content class is a high-level representation of a single element in the 3D scene, such as a volume rendering or an isosurface. The Image3DUniverse (1) represents the 3D scene itself; (2) provides simplified access for adding, editing, selecting and removing Content instances to and from the scene; and (3) controls the view transform that represents zooming and panning. Via its superclass DefaultAnimatableUniverse, the Image3DUniverse also provides methods for 3D animation and for recording movies. In addition to data elements, the 3D scene can also contain analytical elements such as annotations in the form of named landmark points. These are added either interactively or programmatically by accessing a Content instance.
  16. JAVA 3D – IJ3D. As mentioned above, all elements of the 3D scene are related in a graph structure. Our constructed Java 3D graph links image objects (as Content instances) by wrapping them in ContentNode objects. The latter extend the functionality of the basic Java 3D BranchGroup class to serve as high-level scene elements. The ContentNode class is abstract; the four classes VoltexGroup, MeshGroup, OrthoGroup and CustomNode respectively represent volume renderings, surface renderings, orthoslices and custom geometries.
  17. RESULTS – The 3D scene. The 3D scene is a virtual 3D space in which image volumes and meshes are displayed. Biological image volumes in the form of stacks of 2D images are shown within the 3D space in one of three ways: as a volume rendering, a mesh, or an orthoslice set. Volume rendering is a technique for displaying image volumes directly. An arbitrarily-oriented image volume is projected to the screen with a transfer function such that dark pixels are more transparent than bright pixels.
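The intensity-to-transparency mapping described above can be sketched as a simple transfer function. This is an illustrative linear ramp for 8-bit voxels, with an assumed class and method name, not the library's actual implementation:

```java
// Sketch of a volume-rendering transfer function for 8-bit voxel data:
// dark voxels become transparent, bright voxels opaque. The linear ramp
// and the names are illustrative assumptions, not the library's code.
class TransferFunction {
    // Map an 8-bit intensity (0-255) to an opacity in [0, 1].
    static float alphaFor(int intensity) {
        return intensity / 255f;
    }

    public static void main(String[] args) {
        System.out.println(alphaFor(0));    // fully transparent
        System.out.println(alphaFor(255));  // fully opaque
    }
}
```

During rendering, each projected voxel would be blended with the pixels behind it according to this opacity.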
  18. RESULTS – The 3D scene. Meshes are constructed by applying the marching cubes algorithm to image volumes to find a surface that encloses all pixels above a desired threshold value. Orthoslices represent three perpendicular and adjustable planes that cut through the volume.
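The first step of marching cubes, classifying each 2x2x2 cell of voxels against the threshold, can be sketched as follows. The triangle lookup tables that turn the case index into surface triangles are omitted, and the class and method names are illustrative:

```java
// Sketch of the marching cubes classification step: each 2x2x2 cell of
// voxels gets an 8-bit case index encoding which corners lie above the
// threshold. The index selects, from precomputed tables (omitted here),
// the triangles approximating the isosurface inside that cell.
class CubeIndex {
    static int caseIndex(int[] corners, int threshold) {
        int index = 0;
        for (int i = 0; i < 8; i++)
            if (corners[i] > threshold)
                index |= 1 << i;   // set bit i when corner i is "inside"
        return index;              // 0 or 255: the cell contains no surface
    }
}
```

Cells whose index is 0 or 255 lie entirely outside or inside the surface and produce no triangles, which is what makes the algorithm fast on mostly-empty volumes.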
  19. RESULTS – The 3D scene. The 3D scene is capable of simultaneously hosting multiple image volumes, meshes and orthoslice sets. Each represented image volume has several adjustable attributes, such as color, transparency and a local 3D transformation. (Remember these three from an earlier slide?)
  20. RESULTS – The Toolbar. ImageJ's toolbar offers a collection of region of interest (ROI) tools. Closed ROIs, like rectangles, ellipses and polylines, are used for interacting with image volumes. The point tool adds 3D landmarks, which are represented as small spheres.
  21. RESULTS – Volume Editing. Programmatically, our library provides access to the values of all voxels in an image volume. Changes to voxel values are propagated to the screen. Additionally, volume editing is possible interactively: the representation of an image stack in a 3D window enables 2D regions of interest to be projected across arbitrary axes of the volume. (Figure: animated simulation of dendritic growth. Four frames of a time sequence, depicting a simulation of dendritic growth in the Drosophila thorax. Gray background: a volume rendering of the thorax; dendrites are shown in red. Each time frame was generated by directly editing voxels in an image volume, which automatically updates the 3D scene and renders the frame.)
  22. RESULTS – Volume Editing. This enables what we refer to as "3D cropping", which consists of setting all voxels in the image volume that fall within the projected 2D region of interest to a particular color, typically black. We use 3D cropping to remove arbitrary parts of an image volume in order to inspect regions deep inside the volume.
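The 3D cropping operation described above can be sketched in a few lines. Here a rectangular ROI is projected along the z axis and the enclosed voxels are set to black; the volume[z][y][x] layout and the class name are assumptions for illustration:

```java
// Sketch of "3D cropping": a 2D rectangular ROI is projected along the
// z axis, and every voxel inside the projection is set to a fill value
// (typically 0, i.e. black). The volume[z][y][x] layout is an assumption.
class Crop3D {
    static void cropRect(int[][][] volume, int x0, int y0, int w, int h, int fill) {
        for (int[][] slice : volume)           // the ROI cuts every slice
            for (int y = y0; y < y0 + h; y++)
                for (int x = x0; x < x0 + w; x++)
                    slice[y][x] = fill;
    }
}
```

In the library itself the ROI may be any closed shape and may be projected along an arbitrary axis; the rectangle and fixed z axis here only keep the sketch short.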
  23. RESULTS – Annotation in 3D Space. The 3D scene can display landmark annotations for each image volume. These are added using the point tool. Existing landmarks are listed in a table that allows the manipulation of their properties, such as name and color. Each image volume hosted in the 3D scene may have an associated set of 3D landmarks of this type. A set of landmarks may be stored in a file for analysis, and reloaded in subsequent annotation sessions.
  24. RESULTS – Landmark-based 3D rigid registration of image volumes. Two sets of homonymous landmarks positioned over two corresponding image volumes can be used for estimating a rigid transformation model. Using this model, one image volume can be aligned onto the other. The "Transform" menu offers options for exporting the transformed image volume as an image stack suitable for further processing with ImageJ. (Figure: (a) two unaligned Drosophila brains; the reference brain is shown in green, the model brain (the one to be registered) in magenta. (b) Landmark selection in the model brain: the display mode is automatically changed to orthoslices; the user can scroll through the image planes and select landmarks, which are displayed in a separate dialog window. Landmarks are shown in yellow in the figure; they can be moved, highlighted and renamed. (c) Subsequent landmark selection in the template brain. (d) The landmarks are used to infer a rigid transformation which aligns the model image to the landmark image.)
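As a sketch of how landmark pairs drive registration: a full rigid fit also solves for a rotation (commonly via an SVD of the cross-covariance of the centered landmarks), but the translation component alone can be estimated from the landmark centroids. The coordinate representation and class name below are assumptions, not the library's code:

```java
// Landmark-based alignment, translation component only. A full rigid
// registration would additionally estimate a rotation (e.g. via SVD of
// the cross-covariance of the centered landmarks); this sketch shows
// just the centroid-based translation. Points are assumed double[3].
class LandmarkAlign {
    static double[] centroid(double[][] pts) {
        double[] c = new double[3];
        for (double[] p : pts)
            for (int i = 0; i < 3; i++)
                c[i] += p[i] / pts.length;
        return c;
    }

    // Translation that moves the model landmarks onto the reference ones.
    static double[] translation(double[][] model, double[][] reference) {
        double[] cm = centroid(model), cr = centroid(reference);
        return new double[]{cr[0] - cm[0], cr[1] - cm[1], cr[2] - cm[2]};
    }
}
```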
  25. RESULTS – Animation and recording. The 3D viewer offers an option to record a 360-degree rotation of any 3D scene. Additionally, a recording mode is available: when activated, every manual rotation, translation and zooming of the display or any of its elements is recorded; when stopped, the recording is displayed as an ImageJ stack. Recordings may be output as videos via ImageJ.
  26. RESULTS – Custom content. Beyond the three basic image volume display types (volume rendering, mesh and orthoslice set), the 3D scene accepts custom-crafted meshes. These meshes are typically generated programmatically, for example by automatic segmentation of image stacks.
  27. RESULTS – 4D Visualization. Time-lapse recordings of 3D data sets can be loaded and visualized in the 3D scene. Standard command buttons for play, pause, fast-forward, etc. control the time point displayed in the viewer. Interactive zooming, rotation and panning remain enabled as the time sequence progresses. When paused, the visualization of the current time point may be annotated, interacted with and measured as with any other 3D scene.
  28. RESULTS – Example applications. (Figure: (a) an MRI image of a human head, demonstrating the volume editing capabilities: a 2D ROI was projected onto the data, and the intersected volume was filled with black; afterwards the head was rotated to show the effect. (b) Visualization of the output of a segmentation algorithm. The segmented image is a confocal stack of an adult Drosophila brain; shown here is the right optic lobe. The segmentation surface (in red) resembles the boundaries of the medulla and lobula. (c) Snapshot of the Simple Neurite Tracer application, featuring the central compartments of the adult Drosophila brain. The intensity image is displayed as a volume rendering (gray); the protocerebral bridge and the fan-shaped body are shown as surface renderings (cyan and yellow); the traced neural tracts are displayed as custom meshes (magenta and green). (d) Visualization of a multiple-view registration computed from correspondences of fluorescent beads. Multiple views were obtained by SPIM imaging of a Drosophila embryo; beads are rendered as point meshes.)
  29. RESULTS – Usage as a GUI application. Our 3D visualization library includes a fully-functional plugin for ImageJ named "3D Viewer". The plugin is listed automatically in ImageJ's plugin menus. When executed, the plugin creates a new 3D scene and automatically offers a dialog for displaying any open image stack as an image volume. The dialog provides the means to alter the attributes of the image volume, such as its representation type (volume rendering, isosurface (mesh) or orthoslices) and its color and transparency settings. The menu of the 3D scene window offers options for inserting further image volumes and for editing, annotating and transforming them.
  30. Usage as a programming library
  31. API – Usage as a programming library. Our framework exposes a public API to allow applications to integrate its features. A basic example demonstrates the use case of visualizing an image volume and a mesh in 3D (see below). The example illustrates the development of an image segmentation algorithm, which extracts the boundary of the structures of interest as surfaces and represents them as a mesh composed of triangles. First, the image volume is rendered as orthoslices; then the surface is displayed.
  32. API – Usage as a programming library. The first step is to instantiate an object of the class Image3DUniverse. Then we call its show() method, which creates a window for interacting with the 3D scene. The scene graph is set up automatically.

     Image3DUniverse univ = new Image3DUniverse(640, 480);
     univ.show();

     Next, the image volume is loaded. We display it as orthoslices in the 3D scene by calling the addOrthoslice() method:

     ImagePlus imp = IJ.openImage("flybrain.tif");
     Content c = univ.addOrthoslice(imp);
  33. API – Usage as a programming library. Alternatively, instead of addOrthoslice(), addVoltex() or addMesh() could be used to display the image as a volume rendering or as an isosurface, respectively. If we assume that there exists an external method createVertices() that creates a list of points describing the vertices of the surface, and that three consecutive vertices define a triangle, the following source code shows how to create a custom triangle mesh and add it to the scene:

     List<Point3f> vertices = createVertices();
     CustomMesh cm = new CustomTriangleMesh(vertices);
     univ.addCustomMesh(cm, "triangle mesh");
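The three-consecutive-vertices convention can be exercised without the library. In this dependency-free sketch, createVertices() is a hypothetical stand-in for the external method mentioned above, and float[] replaces Point3f so no vecmath jar is needed:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the "three consecutive vertices define one triangle" convention.
// createVertices() is a hypothetical stand-in for the external method
// mentioned in the text; float[] replaces Point3f to avoid the vecmath
// dependency.
class TriangleList {
    static List<float[]> createVertices() {
        List<float[]> v = new ArrayList<>();
        // one triangle in the z = 0 plane
        v.add(new float[]{0, 0, 0});
        v.add(new float[]{1, 0, 0});
        v.add(new float[]{0, 1, 0});
        return v;
    }

    static int triangleCount(List<float[]> vertices) {
        if (vertices.size() % 3 != 0)
            throw new IllegalArgumentException("vertex count must be a multiple of 3");
        return vertices.size() / 3;
    }
}
```

A vertex list whose length is not a multiple of 3 cannot describe a triangle mesh under this convention, which is why the count is validated before use.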
  34. API – Sample source code (1 of 2)
  35. API – Sample source code (2 of 2)
  36. Discussion and Conclusion
  37. Discussion. Numerous ImageJ-based applications currently use our 3D visualization library. The Simple Neurite Tracer is an ImageJ plugin for semi-automated tracing of neurons in 3D image data. This plugin demonstrates how an analytical tool for measuring complex 3D structures can be augmented with 3D visualization capabilities to display those objects. An algorithm has been developed for registering images of a 3D sample, where each image volume represents a different angle of view obtained by Single Plane Illumination Microscopy. Our library enabled the algorithm developers to generate the required visualizations with very little effort.
  38. Discussion. TrakEM2 is an ImageJ plugin for visualization, analysis, segmentation, reconstruction and registration of very large 3D image data sets obtained by serial section electron microscopy. The high-performance requirements of TrakEM2 drove the implementation of parallel processing strategies for isosurface extraction and mesh composition in the 3D scene. The interaction of our library with other software packages, each with specific requirements, promotes the development of new features and improves performance. These improvements then propagate back and enhance other ImageJ applications.
  39. Conclusion. In this paper, we introduced a new high-level 3D visualization framework for ImageJ. The framework provides an interactive 3D scene for image volume visualization, annotation, segmentation and transformation. For programmers, it offers the means to trivially augment the capabilities of their custom applications with hardware-accelerated 3D visualization. The framework has been very well received by the ImageJ user and developer community, and is currently in use by numerous ImageJ-based applications.