Neuron Analysis Workshop: Neuron Tracing from Tissue Specimens at the Microscope (MBF Bioscience)
This presentation is from a workshop about neuron reconstruction. It reviews the process of tracing neurons directly from tissue specimens at the microscope to study neuron morphology.
WormLab is software that automatically detects and tracks worms in video sequences. It uses thresholding, geometric modeling, and multiple hypothesis tracking to detect worms despite background clutter and complex shapes. It measures metrics like length, speed, and bending and can track behaviors like reversals and omega bends. WormLab supports various video formats and export of tracking data and video overlays.
This document discusses WormLab software for automatically detecting and tracking C. elegans worms. It describes challenges with manual and existing automatic tracking methods. WormLab uses thresholding, geometric modeling and multiple hypothesis tracking to detect worms in complex backgrounds and conformations. It quantifies metrics like speed, direction and bending. WormLab exports data and can control cameras to record videos for analysis.
3D imaging uses rotating X-ray beams to generate multiplanar and 3D surface rendered images, providing higher sensitivity than 2D imaging. 3D imaging allows isolated visualization of anatomical structures without overlap and provides anatomically accurate images that can be manipulated from various angles. Image reconstruction in CBCT involves acquiring projection images from multiple angles, preprocessing the data, filtering it using mathematical algorithms, and backprojecting the data to reconstruct axial slice images. Artifacts like beam hardening can be reduced using advanced reconstruction algorithms that correct for the hardening effect during iterations.
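The filter-and-backproject pipeline described above can be illustrated with a toy 2-D parallel-beam example. Real CBCT uses cone-beam geometry and volumetric reconstruction, so everything here, including the nearest-neighbor forward projector standing in for acquired projection images, is a simplifying assumption:

```python
import numpy as np

def radon(img, angles):
    # Forward projection: for each angle, sample the image on a rotated
    # grid (nearest neighbor) and sum along rays. A crude stand-in for
    # the projection images a scanner acquires from multiple angles.
    n = img.shape[0]
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    xs, ys = xs - c, ys - c
    sino = np.zeros((len(angles), n))
    for i, a in enumerate(angles):
        px = np.cos(a) * xs - np.sin(a) * ys   # rotate the sampling grid by +a
        py = np.sin(a) * xs + np.cos(a) * ys
        xi = np.round(px + c).astype(int)
        yi = np.round(py + c).astype(int)
        valid = (xi >= 0) & (xi < n) & (yi >= 0) & (yi < n)
        vals = np.where(valid, img[np.clip(yi, 0, n - 1), np.clip(xi, 0, n - 1)], 0.0)
        sino[i] = vals.sum(axis=0)             # sum along the ray direction
    return sino

def fbp(sino, angles):
    # Filter each projection with a ramp filter in the Fourier domain,
    # then backproject (smear) it across the image plane at its angle.
    n = sino.shape[1]
    c = (n - 1) / 2.0
    ramp = np.abs(np.fft.fftfreq(n))
    filtered = np.real(np.fft.ifft(np.fft.fft(sino, axis=1) * ramp, axis=1))
    ys, xs = np.mgrid[0:n, 0:n]
    xs, ys = xs - c, ys - c
    recon = np.zeros((n, n))
    for i, a in enumerate(angles):
        t = np.cos(a) * xs + np.sin(a) * ys    # detector coordinate of each pixel
        ti = np.round(t + c).astype(int)
        valid = (ti >= 0) & (ti < n)
        recon += np.where(valid, filtered[i][np.clip(ti, 0, n - 1)], 0.0)
    return recon * np.pi / len(angles)

# A phantom with one bright square; the reconstruction should peak there.
n = 64
phantom = np.zeros((n, n))
phantom[20:24, 40:44] = 1.0
angles = np.linspace(0, np.pi, 90, endpoint=False)
recon = fbp(radon(phantom, angles), angles)
```

Iterative beam-hardening corrections mentioned above would wrap a loop around a pipeline like this, re-projecting the current estimate and correcting the data between passes.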
The Indian Dental Academy is a leader in continuing dental education, training dentists in all aspects of dentistry and offering a wide range of certified dental courses in different formats. For more details, please visit
www.indiandentalacademy.com
Interactive Full-Body Motion Capture Using Infrared Sensor Network (ijcga)
The document describes a new technique for interactive full-body motion capture using multiple infrared sensors. It processes data from each sensor independently and then combines the results to enhance flexibility and accuracy. The method aims to maintain real-time performance while improving on issues like limited actor orientation, inaccurate joint tracking, and conflicting data from individual sensors.
Proton computed tomography (pCT) is a new imaging technique that uses protons instead of photons to create images with improved accuracy over standard x-ray CT. pCT reduces radiation dose and range errors compared to x-ray CT, which is especially important for imaging near vital organs. The presented detector for pCT was built through collaboration and tracks proton trajectories using a fiber tracker and calculates residual energy using a scintillator stack to estimate proton path lengths and densities for improved treatment planning and delivery of proton therapy. Testing of the detector is ongoing to quantify the benefits of pCT compared to standard x-ray CT.
This document discusses spectral imaging techniques. It begins by describing the spectral data cube and how it can be obtained through spatial scanning with a 2D sensor or spectral scanning. It then covers various multiplexing techniques like image slicers that allow obtaining the spectral data cube instantaneously. Diffractive and computational imaging spectrometers are presented as ways to achieve snapshot spectral imaging. Applications discussed include white balancing, tracking, analyzing paintings, and satellite-based remote sensing.
Interactive Full-Body Motion Capture Using Infrared Sensor Network (ijcga)
Traditional motion capture (mocap) has been well-studied in visual science for the last decades. However, the field is mostly concerned with capturing precise animation for specific applications after intensive post-processing, such as studying biomechanics or rigging models in movies. These data sets are normally captured in complex laboratory environments with sophisticated equipment, making motion capture a field that is mostly exclusive to professional animators. In addition, obtrusive sensors must be attached to actors and calibrated within the capturing system, resulting in limited and unnatural motion.
In recent years, the rise of computer vision and interactive entertainment has opened the gate for a different type of motion capture, one that is optically markerless and mechanically sensorless. Furthermore, a wide array of low-cost devices have been released that are easy to use for less mission-critical applications.
This paper describes a new technique that processes data from multiple infrared sensors to enhance the flexibility and accuracy of markerless mocap using commodity devices such as the Kinect. The method analyzes each individual sensor's data, then decomposes and rebuilds it into a uniform skeleton across all sensors. Criteria are then assigned to define the confidence level of the signal captured from each sensor. Each sensor operates in its own process and communicates through MPI. Our method emphasizes minimal calculation overhead for better real-time performance while maintaining good scalability.
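The fusion step, combining per-sensor skeletons weighted by confidence, might be sketched as follows. The rigid transforms, joint count, and weighted-average rule are illustrative assumptions rather than the paper's exact criteria, and the per-sensor MPI processes are collapsed into a single process here:

```python
import numpy as np

def to_common_frame(joints, R, t):
    # Rigid transform from one sensor's frame into the shared world frame.
    # joints: (J, 3) array of joint positions as row vectors.
    return joints @ R.T + t

def fuse_skeletons(skeletons, confidences):
    # Confidence-weighted average of each joint across sensors.
    # skeletons: list of (J, 3) arrays, confidences: list of (J,) arrays.
    S = np.stack(skeletons)                       # (N, J, 3)
    W = np.stack(confidences)                     # (N, J)
    W = W / np.clip(W.sum(axis=0, keepdims=True), 1e-9, None)
    return (S * W[..., None]).sum(axis=0)

# Two sensors observing the same 2-joint skeleton; sensor B is rotated 90
# degrees about the vertical axis and offset by 1 m (hypothetical layout).
R_a, t_a = np.eye(3), np.zeros(3)
theta = np.pi / 2
R_b = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0, 0.0, 1.0]])
t_b = np.array([1.0, 0.0, 0.0])
truth = np.array([[0.0, 0.5, 1.0], [0.2, 0.4, 1.5]])
obs_a = truth.copy()                 # sensor A coincides with the world frame
obs_b = (truth - t_b) @ R_b          # what sensor B would measure locally
fused = fuse_skeletons(
    [to_common_frame(obs_a, R_a, t_a), to_common_frame(obs_b, R_b, t_b)],
    [np.array([0.9, 0.1]), np.array([0.9, 0.9])])
```

With noiseless observations the fused skeleton recovers the ground truth exactly; with real sensors the confidence weights decide how much a poorly-tracked joint from one Kinect is allowed to pull the result.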
The document discusses light field imaging principles and applications. It covers how light field cameras capture information about the direction of light rays in a scene to allow refocusing and changing perspectives in images. Applications discussed include virtual and augmented reality displays, as light field techniques can help reduce issues like vergence-accommodation conflict. It also describes research areas like improving light field storage and representation, capturing light fields with camera arrays, using microlens arrays in plenoptic cameras, and developing light field processing and rendering methods.
This document describes a new technique called Photonic Nanojet Interferometry (PJI) that uses microspheres to achieve super-resolution in 3D label-free imaging. PJI was able to laterally resolve sub-100nm features on a Blu-ray disc by taking advantage of photonic nanojets from microspheres. PJI measurements of a Blu-ray disc agreed with measurements from atomic force microscopy and scanning electron microscopy, validating that PJI can achieve super-resolution below the diffraction limit.
This document provides an overview of specialized radiographic techniques used in dentistry, including computed tomography (CT), cone beam computed tomography, magnetic resonance imaging (MRI), nuclear medicine imaging, and ultrasonography. It describes the basic principles and applications of each technique, highlighting their benefits and limitations. The key points are that CT provides cross-sectional images to evaluate pathology while reducing superimposition, CBCT allows faster 3D imaging at lower radiation dose than traditional CT, MRI uses magnetic fields to image soft tissues without ionizing radiation, nuclear medicine involves injecting radiotracers to examine tissue function, and ultrasound uses sound waves to image structures in real-time without radiation.
Digital imaging involves converting analog x-ray signals into digital images. This document discusses various digital imaging receptors and techniques. CCD and CMOS detectors convert x-ray exposure into electric signals. DSR produces images of changes by subtracting baseline images from follow-up images. PSP plates use stimulated luminescence to form digital images. CBCT and CT use x-rays to create 3D volumetric images but CBCT has lower radiation dose. MRI uses strong magnetic fields and radio waves to form images based on the magnetic properties of hydrogen atoms and does not use radiation. Each technique has advantages and limitations for various dental and medical applications.
This document provides an overview of cone-beam computed tomography (CBCT) imaging. It discusses the principles of CBCT imaging, including how CBCT uses a rotating x-ray source and detector to obtain multiple 2D images that are reconstructed into a 3D volume. It describes the components of image production, clinical considerations for CBCT scans, common artifacts, and applications of CBCT imaging such as implant planning and assessment of pathology. In conclusion, CBCT is presented as an effective diagnostic imaging technique that produces high resolution 3D images of the maxillofacial region at lower radiation doses and costs compared to medical CT.
This study introduces a computational technique called deconvolution to improve images captured using a widefield fluorescence microscope. Skin and endothelial cell samples stained with fluorescent dyes were imaged with a widefield microscope. Deconvolution algorithms were then applied to remove blur from the images by accounting for factors like the microscope point spread function. The deconvolved images showed enhanced contrast and resolution compared to the raw images, demonstrating deconvolution can help overcome some limitations of widefield microscopy by reducing out-of-focus light. However, confocal microscopy is still needed for thicker samples above 20-30 micrometers.
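Richardson-Lucy is one widely used deconvolution algorithm; the study does not say which algorithm it applied, so take this as a generic sketch of inverting a known point spread function:

```python
import numpy as np

def gaussian_psf(size, sigma):
    # Isotropic Gaussian point spread function, normalized to unit sum.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def psf_to_otf(psf, shape):
    # Embed the PSF in a full-size array with its center at pixel (0, 0),
    # wrapping negative offsets, so FFT convolution introduces no shift.
    out = np.zeros(shape)
    sy, sx = psf.shape
    cy, cx = sy // 2, sx // 2
    for y in range(sy):
        for x in range(sx):
            out[(y - cy) % shape[0], (x - cx) % shape[1]] = psf[y, x]
    return np.fft.fft2(out)

def richardson_lucy(blurred, psf, iters=50):
    # Classic multiplicative update: est *= ((blurred / (est * psf)) * psf_flipped),
    # where * denotes convolution (done here in the Fourier domain).
    otf = psf_to_otf(psf, blurred.shape)
    est = np.full(blurred.shape, blurred.mean())
    for _ in range(iters):
        conv = np.real(np.fft.ifft2(np.fft.fft2(est) * otf))
        ratio = blurred / np.clip(conv, 1e-12, None)
        est = est * np.real(np.fft.ifft2(np.fft.fft2(ratio) * np.conj(otf)))
    return est

# A point source blurred by the PSF; deconvolution should re-concentrate it.
truth = np.zeros((64, 64))
truth[32, 20] = 1.0
psf = gaussian_psf(9, 2.0)
blurred = np.real(np.fft.ifft2(np.fft.fft2(truth) * psf_to_otf(psf, truth.shape)))
restored = richardson_lucy(blurred, psf)
```

The restored point is both sharper and brighter than the blurred one, which is exactly the contrast and resolution gain the study reports; on real widefield stacks the PSF must be measured or modeled rather than assumed Gaussian.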
This white paper discusses optimizing image quality and dose when transitioning from computed radiography (CR) to digital radiography (DR). Key factors discussed include:
1) DR detector panel technologies like cesium iodide needle phosphors can improve image quality and reduce radiation dose compared to CR powder phosphors.
2) Pixel size, fill factor, and readout electronics influence image noise, with smaller pixel sizes and higher fill factors improving quality.
3) Image processing software, grid selection and alignment, collimation, and exposure monitoring are important to maximize the benefits of DR.
4) Case studies show transitioning to DR can increase productivity and patient throughput while achieving dose reduction goals.
Introduction to Medical Imaging, Basics of Medical Imaging, Fundamentals of Digital Image Processing, First chapter of Digital Image Processing Book by Rafael C. Gonzalez.
This document summarizes a study that estimated and removed scatter and anti-scatter grid line artifacts from images of anthropomorphic head phantoms taken with a high resolution detector and stationary anti-scatter grid. Scatter profiles were estimated by imaging the phantoms with lead markers and interpolating grayscale values under the markers. Iteratively modifying the scatter profiles minimized structured noise across the field of view, almost totally eliminating grid line artifacts. Images before and after correction showed improved contrast and contrast-to-noise ratio, demonstrating that computational tools can correct for grid artifacts even in dynamic imaging sequences.
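The marker-based scatter estimation can be illustrated in one dimension: values measured under lead markers are taken as scatter-only samples and interpolated across the field, then subtracted. The simulated profile and the assumption that markers read pure scatter are both idealizations of the study's iterative procedure:

```python
import numpy as np

# Simulated 1-D detector profile: primary signal plus a smooth scatter field.
x = np.linspace(0.0, 1.0, 200)
primary = 1.0 + 0.5 * (x > 0.5)             # a simple contrast step
scatter = 0.8 + 0.3 * np.sin(np.pi * x)     # slowly varying scatter
measured = primary + scatter

# Lead markers block the primary beam, so the measured value at a marker
# is (approximately) scatter alone. Marker spacing is an arbitrary choice.
marker_idx = np.linspace(0, 199, 11).astype(int)
scatter_samples = scatter[marker_idx]       # idealized scatter-only readings

# Interpolate between markers to estimate scatter everywhere, then subtract.
scatter_est = np.interp(np.arange(200), marker_idx, scatter_samples)
corrected = measured - scatter_est
```

Because scatter varies slowly, sparse samples plus interpolation recover it well; the corrected profile restores the underlying contrast step, mirroring the contrast-to-noise improvement reported in the study.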
1) An experiment compared the effectiveness of an anti-scatter grid when used with a high-resolution CMOS detector (Dexela 1207 with 75 micron pixels) versus a flat panel detector (FPD, Paxscan 2020 with 194 micron pixels).
2) When the grid was used, contrast improved for both detectors but the contrast-to-noise ratio (CNR) did not increase as much for the Dexela due to a substantial increase in total noise compared to the FPD.
3) The increased noise for the Dexela was caused by higher fixed-pattern noise from the grid lines, as the quantum noise increase from radiation attenuation should have been similar for both detectors.
This document discusses stereoscopy, which refers to the illusion of depth perception achieved through binocular vision of two slightly different images. It provides an overview of stereoscopy techniques including anaglyph and polarized methods. Applications of stereoscopy in medicine are described, particularly in ophthalmology, mammography, and vascular imaging. The document also discusses techniques and applications of stereoscopy in dentistry, including for implant planning, orthodontics, and localization of teeth or root fragments. Both advantages and limitations of stereoscopic viewing technologies are presented.
This document outlines the course contents for a class on digital image processing and machine vision. The 10-week course covers topics such as image acquisition, enhancement, segmentation, feature extraction, and advanced research areas. It includes 1-3 lectures per topic. Reference materials listed are books and journals in the field. The introduction defines an image and digital image processing. It provides examples of image processing applications in medical diagnosis, industrial uses, security, biometrics, and more. Key components of machine vision systems and a comparison of human and machine vision are also summarized.
The document discusses using coded masks and modulation techniques to capture light field information and enable digital refocusing and 6D displays with a single 2D sensor. It proposes placing a coded mask in front of the sensor to heterodyne the light field and extract its 4D information. Several applications are mentioned, including coded illumination for motion capture, a 6D display using spatial and illumination variation, and a light field camera that can digitally refocus using a single photograph.
IJCER (www.ijceronline.com) International Journal of Computational Engineerin... (ijceronline)
This document summarizes a research paper that proposes a computer-aided diagnosis (CAD) system for detecting lung cancer nodules from chest CT images using support vector machines (SVM). The CAD system involves 5 main phases: 1) image pre-processing using total variation denoising, 2) lung region segmentation using optimal thresholding and morphological operations, 3) feature extraction of lung nodules using gray level co-occurrence matrix (GLCM) texture analysis, 4) SVM classification of nodules as benign or malignant, 5) output of classification results. The goal is to develop an automated CAD system to assist radiologists in early detection of lung cancer from CT images.
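Phase 3, GLCM texture analysis, is the most self-contained step and can be sketched with a minimal horizontal-neighbor co-occurrence matrix. The quantization level, displacement, and feature set are illustrative choices, and the SVM classification stage is omitted to keep the sketch dependency-free:

```python
import numpy as np

def glcm_features(img, levels=8):
    # Quantize the image, count horizontal neighbor pairs into a gray level
    # co-occurrence matrix, then derive three standard GLCM texture features.
    q = np.clip((img * levels).astype(int), 0, levels - 1)   # img in [0, 1)
    a, b = q[:, :-1].ravel(), q[:, 1:].ravel()               # neighbor pairs
    P = np.zeros((levels, levels))
    np.add.at(P, (a, b), 1)
    P /= P.sum()                                             # joint probabilities
    i, j = np.indices(P.shape)
    contrast = np.sum(P * (i - j) ** 2)       # large when neighbors differ
    energy = np.sum(P ** 2)                   # large for uniform texture
    homogeneity = np.sum(P / (1.0 + np.abs(i - j)))
    return contrast, energy, homogeneity

# A smooth gradient patch versus a noisy patch: texture features separate them.
smooth = np.tile(np.linspace(0.0, 0.99, 32), (32, 1))
rng = np.random.default_rng(0)
noisy = rng.random((32, 32)) * 0.99
c_s, e_s, h_s = glcm_features(smooth)
c_n, e_n, h_n = glcm_features(noisy)
```

In the paper's pipeline, feature vectors like these, computed over segmented nodule regions, become the inputs that the SVM separates into benign and malignant classes.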
This document provides information about confocal microscopy. It discusses:
- How confocal microscopy works by excluding light from out-of-focus planes to generate high-contrast images with better resolution than conventional microscopes.
- The history of confocal microscopy, which was pioneered by Marvin Minsky in 1955 using pinholes and point-by-point illumination.
- Key aspects of confocal microscopy like using fluorophores, laser excitation, and building 3D images by combining thin optical sections.
The document summarizes a project in which Baxter, an industrial robot made by Rethink Robotics, plays pool using its vision sensors and inverse kinematics. The key steps were finding the orientation needed to strike the cue ball into the pocket using either a 3D sensor or Baxter's head camera, moving Baxter's arm to that orientation, and then using visual servoing with Baxter's hand camera to align the end effector with the center of the ball while maintaining orientation. Once aligned, Baxter would move its end effector linearly to strike the ball. Limitations included a workspace restricted by Baxter's fixed position and insufficient force on the ball. Future work proposed making Baxter mobile and using linear actuators to increase striking force.
We have built a camera that can look around corners and beyond the line of sight. The camera uses light that travels from the object to the camera indirectly, by reflecting off walls or other obstacles, to reconstruct a 3D shape.
This document provides an introduction to microscopes. It discusses the history of microscopes beginning with Anton van Leeuwenhoek in the 16th century being the first to observe microorganisms. It then describes the basic parts of a classical/light microscope including the ocular lens, stage, objectives, condenser, and illuminator. It also discusses magnification, resolution, working distance, and different types of microscopy including bright field, dark field, phase contrast, and fluorescence microscopes. The document explains how light interacts with lenses and specimens to produce microscope images.
The document discusses light field and coded aperture cameras. It describes the Stanford plenoptic camera which uses a microlens array to sample individual rays of light, capturing 14 pixels per lens. An alternative approach is a mask-based light field camera that uses a narrowband cosine mask to sample a coded combination of rays. This heterodyne approach captures half the brightness but avoids wasting pixels and issues with lens array alignment. The document outlines how such cameras can digitally refocus images and increase depth of field. It also discusses using the Fourier transform to compute a 4D light field from 2D photos captured with a mask.
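Digital refocusing ultimately reduces to shift-and-add over the captured views: realign each sub-aperture image by an amount proportional to its aperture offset, then average. A toy single-depth scene with integer-pixel disparities (both simplifying assumptions; real refocusing interpolates fractional shifts) shows the effect:

```python
import numpy as np

def make_views(base, n=5, disparity=2):
    # Synthetic sub-aperture views: view u sees the scene shifted by
    # u * disparity pixels, as if the whole scene sat at one depth.
    return {u: np.roll(base, u * disparity, axis=1)
            for u in range(-(n // 2), n // 2 + 1)}

def refocus(views, alpha):
    # Shift-and-add refocusing: undo a disparity of `alpha` pixels per unit
    # aperture offset, then average the realigned views.
    acc = np.zeros_like(next(iter(views.values())), dtype=float)
    for u, v in views.items():
        acc += np.roll(v, -u * alpha, axis=1)
    return acc / len(views)

base = np.zeros((16, 64))
base[:, 30:34] = 1.0                  # a bright vertical bar
views = make_views(base, n=5, disparity=2)
sharp = refocus(views, alpha=2)       # alpha matches the true disparity
blurry = refocus(views, alpha=0)      # misfocused: shifted copies smear the bar
```

Choosing `alpha` selects the focal plane after capture; the Fourier-domain route mentioned above computes the same family of refocused images as slices of the 4D light field spectrum.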
Creating 3D neuron reconstructions from image stacks and virtual slides (MBF Bioscience)
This presentation was given at a workshop that focused on reconstructing neurons from image stacks to study neuron morphology. It covers strategies for capturing image stacks optimized for neuron reconstruction with Neurolucida 360, a new software product that makes it much easier to trace neurons from image stacks.
The document provides an introduction to computer vision. It discusses key topics including:
- What computer vision is and why it is useful. It uses mathematical and computational tools to extract information from images and improve human vision.
- Some basic concepts in computer vision including digital images, sampling, noise removal, segmentation, and feature extraction techniques.
- Where computer vision is used such as healthcare, autonomous vehicles, augmented/virtual reality, industry, social media, security, agriculture, and fashion.
- A brief history of computer vision including classical approaches and the revolution enabled by advances in artificial intelligence and deep learning.
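As a concrete taste of the segmentation techniques mentioned above, here is Otsu's threshold, a classic histogram-based method (chosen purely for illustration; the document itself does not single out any particular algorithm):

```python
import numpy as np

def otsu_threshold(img, bins=256):
    # Pick the threshold that maximizes between-class variance of the
    # foreground/background split of the intensity histogram.
    hist, edges = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    omega = np.cumsum(p)                      # class-0 probability up to bin k
    mu = np.cumsum(p * np.arange(bins))       # class-0 cumulative mean (bin units)
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    k = np.argmax(np.nan_to_num(sigma_b))     # best split bin
    return edges[k + 1]

# Bimodal test image: dark background, bright square, mild Gaussian noise.
rng = np.random.default_rng(1)
img = np.full((32, 32), 0.2)
img[8:24, 8:24] = 0.8
img = np.clip(img + rng.normal(0.0, 0.02, img.shape), 0.0, 0.999)
t = otsu_threshold(img)
mask = img > t                                # segmented foreground
```

For a clean bimodal histogram the threshold lands between the two modes and the mask recovers the bright square, which is why Otsu remains a standard first baseline before the learned methods the deep-learning era introduced.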
The document discusses light field and coded aperture cameras. It describes the Stanford plenoptic camera which uses a microlens array to sample individual rays of light, capturing 14 pixels per lens. An alternative approach is a mask-based light field camera that uses a narrowband cosine mask to sample a coded combination of rays. This heterodyne approach captures half the brightness but avoids wasting pixels and issues with lens array alignment. The document outlines how such cameras can digitally refocus images and increase depth of field. It also discusses using the Fourier transform to compute a 4D light field from 2D photos captured with a mask.
Creating 3D neuron reconstructions from image stacks and virtual slidesMBF Bioscience
This presentation was given at a workshop that focused on reconstructing neurons from image stacks to study neuron morphology. It covers strategies for capturing image stacks optimized for neuron reconstruction with Neurolucida 360, a new software product that makes it much easier to trace neurons from image stacks.
The document provides an introduction to computer vision. It discusses key topics including:
- What computer vision is and why it is useful. It uses mathematical and computational tools to extract information from images and improve human vision.
- Some basic concepts in computer vision including digital images, sampling, noise removal, segmentation, and feature extraction techniques.
- Where computer vision is used such as healthcare, autonomous vehicles, augmented/virtual reality, industry, social media, security, agriculture, and fashion.
- A brief history of computer vision including classical approaches and the revolution enabled by advances in artificial intelligence and deep learning.
INTERACTIVE ANALYTICAL TOOL FOR CORNEAL CONFOCAL IMAGINGMadhavi Tippani
This document describes the development of an interactive analytical tool for quantitative analysis of corneal confocal imaging data. The tool was developed using MATLAB to provide a more user-friendly interface compared to an existing C++-based software. The tool allows the user to interactively select the region of interest for intensity curve calculation from corneal image stacks collected using confocal microscopy. Animal studies using rabbits and mice were conducted to test freeze injury and photorefractive keratectomy models and collect confocal image data for analysis using the new tool.
Cone beam computed tomography (CBCT) uses a cone-shaped x-ray beam projected through the area of interest and a 2D detector to acquire multiple 2D radiographic images at different angles. These images are then used to reconstruct 3D volumetric images. CBCT has applications in dentistry for implant planning, endodontics, orthodontics and TMJ imaging due to its ability to provide high contrast images of bony structures at a lower radiation dose compared to medical CT. Some limitations include artifacts from metallic restorations, lower soft tissue contrast and isotropic resolution compared to medical CT.
This document provides an overview and tutorial on various techniques for object recognition, including cascading classifiers, convolutional neural networks (CNNs), and support vector machines (SVMs). It discusses the hierarchical concept formation problem and how these techniques can help a robot learn about its environment autonomously. For each technique, it covers the underlying concepts, example implementations in OpenCV or other libraries, and plans to analyze results through confusion matrices. The document serves as an introduction for researchers or students interested in object recognition and machine learning algorithms.
This document describes a project to create a 3D visualization system for medical imaging data like CT and MRI scans. The project aims to easily visualize and interpret large amounts of imaging data for education and medical applications. The steps involve obtaining data sets, preprocessing the images, segmenting objects of interest, interpolating missing slices, and using ray casting to generate 3D surface visualizations with color. The document discusses methods like marching cubes and ray casting, presents example visualizations generated, and acknowledges resources used.
This document provides an overview of cone beam computed tomography (CBCT) including its history, components, principles, and applications in dentistry. Some key points:
- CBCT was first introduced in the 1990s and provides 3D imaging with lower radiation dose than medical CT. It works by generating a cone-shaped X-ray beam and using a detector to record attenuation data, which is then reconstructed into 3D images.
- Components include an X-ray generator, image sensor, and software for image reconstruction. Images are stored in DICOM format.
- Advantages include rapid scan time, interactive display modes, and ability to view structures in multiple planes. Disadvantages include potential artifacts and inability to view
Here are the key steps to convert a color image to a binary image in LabVIEW:
1. Read in the color image using the Read PNG or Read JPEG VI. This will return an image structure.
2. Use the Color To Gray VI to convert the color image to grayscale. This removes the color information and leaves only the luminance.
3. Apply a threshold to convert the grayscale image to binary. Use the Threshold VI and choose an appropriate threshold value (usually 128 for 8-bit images).
4. The output of the Threshold VI will be a binary image, where pixels above the threshold are white (255) and pixels below are black (0).
5. You can now process the binary
The document discusses the topics covered in the CSE 455: Computer Vision course including basics of images, color, texture, segmentation, interest operators, object recognition, tracking, content-based image retrieval, and 2D and 3D computer vision. It provides examples of medical imaging, 3D reconstruction, robotics, image databases, document analysis, video analysis, 3D scanning, and motion capture. The three stages of computer vision - low-level, mid-level, and high-level - are introduced along with goals of image analysis and basic digital image terminology.
Automatic System for Detection and Classification of Brain TumorsFatma Sayed Ibrahim
Automatic system for brain tumors detection based on DICOM MRI images
Surveying methodologies of from preprocessing to classifications
Implementing comparative study.
Proposed technique with highest accuracy and lest elapsed time.
HiPEAC 2019 Workshop - Real-Time Modelling Visual Scenes with Biological Insp...Tulipp. Eu
- Computer vision has improved with more data and processing power, but global scene understanding remains challenging.
- The document proposes a multidisciplinary approach combining CNNs and human visual cognition to better model scene understanding, with the goal of applications like autonomous vehicles.
- It describes experiments observing how humans and primates recognize scenes to inform modeling, incorporating global and local descriptors with relationships. This approach aims to advance scene understanding capabilities.
Virtopsy is a non-invasive virtual autopsy technique that uses medical imaging like 3D surface scanning, CT scanning, MRI, radiography and angiography to examine a corpse without dissection. It was developed as an alternative to traditional autopsy to avoid invasiveness. Virtopsy captures detailed internal images of tissues and bones within 10 minutes using these scanning techniques and reconstruction on computers. It allows for second opinions even after cremation since findings are in a digital format. The procedure involves placing a body in a scanning machine to generate thousands of cross-sectional images which are then reconstructed into a virtual 3D image of the interior. Virtopsy is being used to determine causes of death, identity, age and aid investigations
CBCT provides volumetric imaging with less radiation than medical CT. It involves an X-ray source and detector rotating around the patient to obtain multiple 2D projections, which are reconstructed into a 3D volume. This allows visualization of structures like bone and teeth from any angle. CBCT has numerous dental and maxillofacial applications like implant planning, orthodontics, and pathology assessment. While it provides more accuracy than 2D imaging, CBCT images can be affected by artifacts from scatter, motion, and metal objects. Overall, CBCT is a useful tool for evaluating anatomy in 3D.
CyMap is a simple and versatile technology for monitoring microscopic particles or cells with a large field of view using a point light source and camera. It is small, robust, cheap, and uses tailored software for automated detection and control. The system provides portable, flexible imaging capabilities with variable field of view and resolution. It has been developed for live cell locating and tracking within incubators and other transparent vessels. The creators are seeking practical applications and ways to improve the technology, such as adding 3D imaging capabilities using two light sources.
This document discusses radiography testing principles and techniques. It describes how radiography uses X-rays to detect internal defects by passing X-rays through a material and capturing the transmitted image on film. It discusses different film and filmless techniques like computed radiography and computed tomography. It also covers topics like image quality indicators and the wide applications of radiography testing in inspecting various materials and components.
OCT provides high-resolution, cross-sectional imaging of the retina and anterior segment of the eye in a non-invasive manner. It works on the principles of interferometry and low coherence reflectometry to obtain micrometer-level resolution images. Time domain OCT uses a moving reference mirror while Fourier domain OCT obtains entire scans simultaneously using a spectrometer. OCT is useful for diagnosing and monitoring various retinal diseases like macular edema, glaucoma, age-related macular degeneration and corneal pathologies by visualizing intraretinal layers and thickness maps. It has become the gold standard for evaluation and management of diseases affecting the retina.
This document describes a study that used convolutional neural networks (CNNs) for animal classification from images. The study proposed a novel method for animal face classification using CNN features. The CNN model was trained on images to classify animals into different classes. The model achieved over 90% accuracy on the test data. The authors concluded that CNNs are well-suited for image classification tasks like animal classification due to their ability to automatically extract relevant features from images. Future work could involve classifying other objects using this deep learning approach.
Stereology is a technique for obtaining unbiased, quantitative measurements of three-dimensional structures from two-dimensional tissue sections. It involves using random geometric probes, such as points, lines, and planes, to sample sections in a systematic random manner. This avoids biases associated with non-stereological methods. Key aspects of stereology covered are: using isotropic probes to achieve sampling isotropy; the optical fractionator method and formula for estimating total object numbers; and factors that determine the volume fraction sampled, including section thickness, section sampling fraction, and area sampling fraction. Validation of results involves assessing accuracy versus precision of the estimates.
Neural networks are computing systems inspired by biological neural networks in the brain. They are composed of interconnected artificial neurons that process information using a connectionist approach. Neural networks can be used for applications like pattern recognition, classification, prediction, and filtering. They have the ability to learn from and recognize patterns in data, allowing them to perform complex tasks. Some examples of neural network applications discussed include face recognition, handwritten digit recognition, fingerprint recognition, medical diagnosis, and more.
This document discusses advances in technical aspects of deep brain stimulation surgery. It covers developments in preoperative imaging including optimized MRI sequences, stereotactic frames such as the STarFix and NexFrame systems, stereotactic atlases, planning software, and intraoperative techniques like microelectrode recordings and the use of robotics. Precise targeting and lead placement have been improved through these technical innovations.
Similar to Neuron Reconstruction and Analysis Workshop (20)
Freshworks Rethinks NoSQL for Rapid Scaling & Cost-EfficiencyScyllaDB
Freshworks creates AI-boosted business software that helps employees work more efficiently and effectively. Managing data across multiple RDBMS and NoSQL databases was already a challenge at their current scale. To prepare for 10X growth, they knew it was time to rethink their database strategy. Learn how they architected a solution that would simplify scaling while keeping costs under control.
How information systems are built or acquired puts information, which is what they should be about, in a secondary place. Our language adapted accordingly, and we no longer talk about information systems but applications. Applications evolved in a way to break data into diverse fragments, tightly coupled with applications and expensive to integrate. The result is technical debt, which is re-paid by taking even bigger "loans", resulting in an ever-increasing technical debt. Software engineering and procurement practices work in sync with market forces to maintain this trend. This talk demonstrates how natural this situation is. The question is: can something be done to reverse the trend?
Fueling AI with Great Data with Airbyte WebinarZilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
What is an RPA CoE? Session 1 – CoE VisionDianaGray10
In the first session, we will review the organization's vision and how this has an impact on the COE Structure.
Topics covered:
• The role of a steering committee
• How do the organization’s priorities determine CoE Structure?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect Anika Systems
zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application...Alex Pruden
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure. In this work we present LatticeFold, the first lattice-based folding protocol based on the Module SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme. The difficulty, is ensuring that extracted witnesses are low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low norm no matter how many rounds of folding are used. Our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing post-quantum security.
Paper Link: https://eprint.iacr.org/2024/257
Digital Marketing Trends in 2024 | Guide for Staying AheadWask
https://www.wask.co/ebooks/digital-marketing-trends-in-2024
Feeling lost in the digital marketing whirlwind of 2024? Technology is changing, consumer habits are evolving, and staying ahead of the curve feels like a never-ending pursuit. This e-book is your compass. Dive into actionable insights to handle the complexities of modern marketing. From hyper-personalization to the power of user-generated content, learn how to build long-term relationships with your audience and unlock the secrets to success in the ever-shifting digital landscape.
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
An English 🇬🇧 translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU und die Lizenzen nach dem CCB- und CCX-Modell sind für viele in der HCL-Community seit letztem Jahr ein heißes Thema. Als Notes- oder Domino-Kunde haben Sie vielleicht mit unerwartet hohen Benutzerzahlen und Lizenzgebühren zu kämpfen. Sie fragen sich vielleicht, wie diese neue Art der Lizenzierung funktioniert und welchen Nutzen sie Ihnen bringt. Vor allem wollen Sie sicherlich Ihr Budget einhalten und Kosten sparen, wo immer möglich. Das verstehen wir und wir möchten Ihnen dabei helfen!
Wir erklären Ihnen, wie Sie häufige Konfigurationsprobleme lösen können, die dazu führen können, dass mehr Benutzer gezählt werden als nötig, und wie Sie überflüssige oder ungenutzte Konten identifizieren und entfernen können, um Geld zu sparen. Es gibt auch einige Ansätze, die zu unnötigen Ausgaben führen können, z. B. wenn ein Personendokument anstelle eines Mail-Ins für geteilte Mailboxen verwendet wird. Wir zeigen Ihnen solche Fälle und deren Lösungen. Und natürlich erklären wir Ihnen das neue Lizenzmodell.
Nehmen Sie an diesem Webinar teil, bei dem HCL-Ambassador Marc Thomas und Gastredner Franz Walder Ihnen diese neue Welt näherbringen. Es vermittelt Ihnen die Tools und das Know-how, um den Überblick zu bewahren. Sie werden in der Lage sein, Ihre Kosten durch eine optimierte Domino-Konfiguration zu reduzieren und auch in Zukunft gering zu halten.
Diese Themen werden behandelt
- Reduzierung der Lizenzkosten durch Auffinden und Beheben von Fehlkonfigurationen und überflüssigen Konten
- Wie funktionieren CCB- und CCX-Lizenzen wirklich?
- Verstehen des DLAU-Tools und wie man es am besten nutzt
- Tipps für häufige Problembereiche, wie z. B. Team-Postfächer, Funktions-/Testbenutzer usw.
- Praxisbeispiele und Best Practices zum sofortigen Umsetzen
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Dandelion Hashtable: beyond billion requests per second on a commodity serverAntonios Katsarakis
This slide deck presents DLHT, a concurrent in-memory hashtable. Despite efforts to optimize hashtables, that go as far as sacrificing core functionality, state-of-the-art designs still incur multiple memory accesses per request and block request processing in three cases. First, most hashtables block while waiting for data to be retrieved from memory. Second, open-addressing designs, which represent the current state-of-the-art, either cannot free index slots on deletes or must block all requests to do so. Third, index resizes block every request until all objects are copied to the new index. Defying folklore wisdom, DLHT forgoes open-addressing and adopts a fully-featured and memory-aware closed-addressing design based on bounded cache-line-chaining. This design offers lock-free index operations and deletes that free slots instantly, (2) completes most requests with a single memory access, (3) utilizes software prefetching to hide memory latencies, and (4) employs a novel non-blocking and parallel resizing. In a commodity server and a memory-resident workload, DLHT surpasses 1.6B requests per second and provides 3.5x (12x) the throughput of the state-of-the-art closed-addressing (open-addressing) resizable hashtable on Gets (Deletes).
"Choosing proper type of scaling", Olena SyrotaFwdays
Imagine an IoT processing system that is already quite mature and production-ready and for which client coverage is growing and scaling and performance aspects are life and death questions. The system has Redis, MongoDB, and stream processing based on ksqldb. In this talk, firstly, we will analyze scaling approaches and then select the proper ones for our system.
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/temporal-event-neural-networks-a-more-efficient-alternative-to-the-transformer-a-presentation-from-brainchip/
Chris Jones, Director of Product Management at BrainChip , presents the “Temporal Event Neural Networks: A More Efficient Alternative to the Transformer” tutorial at the May 2024 Embedded Vision Summit.
The expansion of AI services necessitates enhanced computational capabilities on edge devices. Temporal Event Neural Networks (TENNs), developed by BrainChip, represent a novel and highly efficient state-space network. TENNs demonstrate exceptional proficiency in handling multi-dimensional streaming data, facilitating advancements in object detection, action recognition, speech enhancement and language model/sequence generation. Through the utilization of polynomial-based continuous convolutions, TENNs streamline models, expedite training processes and significantly diminish memory requirements, achieving notable reductions of up to 50x in parameters and 5,000x in energy consumption compared to prevailing methodologies like transformers.
Integration with BrainChip’s Akida neuromorphic hardware IP further enhances TENNs’ capabilities, enabling the realization of highly capable, portable and passively cooled edge devices. This presentation delves into the technical innovations underlying TENNs, presents real-world benchmarks, and elucidates how this cutting-edge approach is positioned to revolutionize edge AI across diverse applications.
The Microsoft 365 Migration Tutorial For Beginner.pptxoperationspcvita
This presentation will help you understand the power of Microsoft 365. However, we have mentioned every productivity app included in Office 365. Additionally, we have suggested the migration situation related to Office 365 and how we can help you.
You can also read: https://www.systoolsgroup.com/updates/office-365-tenant-to-tenant-migration-step-by-step-complete-guide/
2. Workshop Outline
• Neurolucida manual neuronal reconstructions
• Tools for automatic neuronal reconstructions
• AutoNeuron, AutoSpine and AutoSynapse modules
• Imaging considerations
• Morphometric analysis in Neurolucida Explorer
• 3D Visualization of neuron reconstructions
• Preview of Neurolucida 360
mbfbioscience.com
3. Introduction to Neurolucida
• Reconstruction of neuronal structures
  • Quantify neuronal outgrowth in response to growth factors, drugs, etc.
  • Calculate spine and synaptic densities
• Quantification of anatomical regions and cells
  • Calculate volume of infarct or tumor
  • Map stem cell migration in the spinal cord
• Identification of neuronal networks and connectivity within an anatomical region
6. Reconstructing Neurons Directly from Slides
• High-resolution digital camera
• Microscope with high-quality optics
• Motorized stage, focus encoder, and stage controller
• Computer with MicroBrightField software and video capture card
7. Reconstruct Neurons Directly from Slides (cont.)
• The full extent of the dendrites and axons usually extends across multiple fields of view (image: 150 serial sections)
• A motorized stage moves the specimen when tracing outside the field of view
• The x, y, z information is stored to create a 3D reconstruction
Courtesy of Dr. Rosa Cossart
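The idea of storing x, y, z coordinates for each traced point can be sketched in a few lines of Python. This is a hypothetical illustration, not MBF's file format or code: a dendrite is represented as an ordered list of sample points (in micrometers), from which a 3D metric such as total traced length follows directly.

```python
import math

# Hypothetical example: a traced dendrite stored as ordered (x, y, z)
# sample points in micrometers, as recorded via the motorized stage.
trace = [(0.0, 0.0, 0.0), (3.0, 4.0, 0.0), (3.0, 4.0, 12.0)]

def trace_length(points):
    """Sum Euclidean distances between consecutive traced points."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

print(trace_length(trace))  # 5.0 + 12.0 = 17.0
```

Because the z coordinate is kept alongside x and y, the same point list supports both a 2D projection for display and true 3D morphometry.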
8. Reconstructing from Images
• Load 2D images, 3D image stacks, or montages into Neurolucida for 2D or 3D reconstruction
• Trace through the entire stack or montage while focusing through the stack
• Stacks can be acquired on an MBF system, a confocal, or a 2-photon scope
Image courtesy of MBF Bioscience
9. Image Montage Module
• A number of overlapping image stacks were acquired that need to be aligned
• The Image Montage Module will automatically align confocal stacks in XYZ
Image Stacks Courtesy of Dr. Rosa Cossart
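Aligning overlapping tiles generally means finding the shift that maximizes the similarity of the overlap. As a minimal sketch (not the Image Montage Module's actual algorithm, which is proprietary), a brute-force search over integer XY shifts can score each candidate by the sum of products over the overlapping pixels:

```python
def best_shift(ref, mov, max_shift):
    """Brute-force XY alignment sketch (hypothetical): try integer shifts
    and keep the one with the highest overlap correlation, scored here as
    the sum of products of overlapping pixel intensities."""
    h, w = len(ref), len(ref[0])
    best, best_score = (0, 0), float("-inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = 0.0
            for y in range(h):
                for x in range(w):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        score += ref[y][x] * mov[yy][xx]
            if score > best_score:
                best_score, best = score, (dx, dy)
    return best

# mov is ref shifted by one pixel in x: the bright pixel moves from x=1 to x=2
ref = [[0, 9, 0, 0], [0, 0, 0, 0]]
mov = [[0, 0, 9, 0], [0, 0, 0, 0]]
print(best_shift(ref, mov, 2))  # (1, 0)
```

Production tools typically use FFT-based phase correlation for speed and extend the same idea to the Z axis for full XYZ registration.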
11. Adding Spines and Varicosities
• Marked while tracing or once the dendrite is reconstructed
• Use the spine toolbar to add spines
• Use the marker toolbar to add varicosities or other features
12. Reconstructing Anatomical Regions and Neurons
• Trace contours across serial sections to reconstruct an anatomical region of interest, lesions, etc.
• Map neuronal projections and cells
• Work from live video or from images collected throughout the ROI
http://www.mbfbioscience.com/brain-mapping/cytoarchitectonics
13. Changing Tracing Colors
• Change the display of neurons, markers, and contours
• Prior to tracing: Options > Display Preferences > Neuron, Marker, or Contour tab
14. Editing
• While tracing, press Ctrl+Z to delete the last point placed
• After tracing, use the editing tool to:
  • Modify fibers:
    • Delete trees (fibers)
    • Modify thickness along the tree
    • Add branch points
    • Modify colors
    • Correct Z errors
  • Modify contours and markers:
    • Delete
    • Modify thickness
    • Resize
    • Modify colors
16. AutoNeuron Module
• Automatic reconstruction of neuronal processes and cell somas in 2D and 3D
• Uses fully automatic or interactive modes
• Recommend high-magnification images with a small Z step (around 0.5 µm)
http://www.mbfbioscience.com/image-gallery
17. AutoNeuron Advanced Options
• Step 5 of the AutoNeuron workflow
• Seed detection:
  • Adjust sampling density to ensure uniform sampling and seed coverage
• Tracing:
  • AutoNeuron sets optimal tracing settings based on the type of image: low-magnification confocal, high-magnification confocal, or brightfield
• Branch connections:
  • Ignore traces shorter than a user-defined amount
  • Adjust tolerance to gaps in staining
19. AutoSpine Module: Spine Detection
• Automated reconstruction of dendritic spines
• Dendritic branch can be traced manually or automatically
• Dendritic spines modeled as a 3D mesh using defined parameters
• Recommend high-magnification image stack with a small Z step (under 0.5 µm)
Image courtesy of Dr. Jacob Jedynak
20. AutoSynapse Module: Putative Synapse Detection
• Putative synapses automatically detected along a traced branch and modeled as a 3D mesh using defined parameters
• User determines detection distance from the dendrite
• Recommend high-magnification image stack with a small Z step (under 0.5 µm)
• Future versions will support colocalization
Images courtesy of Dr. Francisco Alvarez & Travis Rotterman
21. The Spine/Synapse Detector is a Toroid
• Outside radius
• Inside radius
Image courtesy of Dr. Jacob Jedynak
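A toroidal detector with an inside and outside radius amounts to a radial-distance test around the dendrite axis. The following is a geometric sketch of that idea, not MBF's detection code: a candidate point is accepted when its perpendicular distance from the (locally straight, unit-direction) dendrite axis falls between the two radii.

```python
import math

def in_detection_band(point, axis_point, axis_dir, r_inner, r_outer):
    """Check whether `point` falls inside a toroid-like detection band:
    its radial distance from the dendrite axis must lie between the
    inside and outside radii. Hypothetical sketch, not MBF's algorithm.
    `axis_dir` is assumed to be a unit vector."""
    # Vector from a point on the axis to the query point
    v = [p - a for p, a in zip(point, axis_point)]
    # Component of v along the axis direction
    along = sum(vi * di for vi, di in zip(v, axis_dir))
    # Radial distance = length of the perpendicular component
    radial = math.sqrt(sum(vi * vi for vi in v) - along * along)
    return r_inner <= radial <= r_outer

# A point 1.0 µm from the axis, with inside radius 0.5 µm and outside 2.0 µm:
print(in_detection_band((0.0, 1.0, 0.0), (0.0, 0.0, 0.0),
                        (1.0, 0.0, 0.0), 0.5, 2.0))  # True
```

The inside radius excludes signal belonging to the dendrite shaft itself, while the outside radius caps how far from the branch a spine or synapse may be detected.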
22. Editing
• After tracing, use the editing tool as you would for manual traces
• AutoNeuron: the splice tool is most often used
• AutoSpine: delete and classify spines
• AutoSynapse: delete synapses
23. Orthogonal View for Editing
• Displays a portion of the image and tracing in Z
• Makes editing complex neurons easier
Image courtesy of Dr. Jacob Jedynak
24. Imaging for Reconstruction
• Reconstruction goals
• What to choose:
  • At the scope or from images? Time vs. effort
  • Imaging modality: brightfield or fluorescence (CFM or MPFM)
• Improve image analysis with correct capture and post-processing techniques:
  • Lateral resolution and axial resolution
  • Objective choice and depth of field
  • Step size
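The relationship between objective choice, axial resolution, and step size can be made concrete with a common rule of thumb (this is a textbook approximation, not MBF's formula): widefield axial resolution is roughly 2·λ·n / NA², and Nyquist sampling suggests a Z step of about half that.

```python
def suggested_z_step(wavelength_um, numerical_aperture, refractive_index):
    """Rule-of-thumb sketch: axial resolution ~ 2 * lambda * n / NA^2,
    sampled at roughly half that (Nyquist) for the Z step.
    A hypothetical helper, not part of any MBF software."""
    axial_res = 2.0 * wavelength_um * refractive_index / numerical_aperture ** 2
    return axial_res / 2.0

# e.g. 0.52 µm emission with a 1.4 NA oil objective (n = 1.515):
print(round(suggested_z_step(0.52, 1.4, 1.515), 2))  # 0.4 µm
```

For a high-NA objective this lands around 0.4 µm, consistent with the deck's recommendation of 0.5 µm steps or smaller; lower-NA objectives tolerate coarser steps but yield coarser reconstructions.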
26. Axial resolution impacts reconstruction granularity
Reconstruction courtesy of Bob Jacobs
27. Tips for better reconstructions
Brightfield:
• Select:
  • Coverglass (#1.5)
  • Mounting medium
  • Objective
  • Immersion medium
  • Koehler illumination
  • Fully open condenser
• If mapping live: place points often as you focus
• If imaging: use small step sizes (0.5 µm or less) and create a virtual tissue
Image courtesy of Dan Peruzzi
28. Tips for better reconstructions
Fluorescent:
• Select:
  • Coverglass (#1.5)
  • Mounting medium
  • Objective
  • Immersion medium
• Use small step sizes (0.5 µm or less)
• Create a virtual tissue for seamless fields of view
• Maximize dynamic range
• After acquisition, deconvolve if necessary
• If using a single- or multiphoton microscope: match the pinhole size for each fluorophore!
Image from Randy Bruno. Figure from Dumitru, Rodriguez and Morrison, Nat Protoc. 2011 Aug 25; 6(9): 1391–1411.
32. Data analysis
• Neuronal analyses
• Spine data
• Synapse data
33. Neuronal Analysis
• Branching analysis: length per tree (dendrite/axon), per neuron, and per branch order
• Sholl analysis: calculated per tree and branch order
• Layer analysis: calculate length within cortical layers
• Branch analysis: calculate branch angles and numbers of branch points
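Sholl analysis counts how many neurites cross concentric spheres of increasing radius centered on the soma. A minimal sketch of the idea (hypothetical names and data, not Neurolucida Explorer's implementation): for each radius, count traced segments whose two endpoints lie on opposite sides of the sphere.

```python
import math

def sholl_counts(soma, segments, radii):
    """Sholl sketch: for each radius, count traced segments whose
    endpoints straddle a sphere of that radius centered on the soma."""
    counts = []
    for r in radii:
        n = 0
        for a, b in segments:
            da, db = math.dist(soma, a), math.dist(soma, b)
            if min(da, db) <= r < max(da, db):
                n += 1
        counts.append(n)
    return counts

# A tiny toy tree: one trunk that bifurcates 10 µm from the soma
soma = (0.0, 0.0, 0.0)
segments = [((0, 0, 0), (10, 0, 0)),
            ((10, 0, 0), (25, 0, 0)),
            ((10, 0, 0), (10, 20, 0))]
print(sholl_counts(soma, segments, [5.0, 15.0]))  # [1, 2]
```

The rising count between the two radii reflects the bifurcation: one trunk crosses the 5 µm sphere, two daughter branches cross the 15 µm sphere. Real tracings use densely sampled points, so a straddle test per consecutive point pair is a close approximation.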
37. 3D Visualization Module
• Integrated within MBF software
• Display 3D rendering of objects built from reconstructions:
  • Rotate and zoom
  • Place a "skin" around the wireframe and adjust opacity
  • Display the tracing and image data simultaneously
  • Save the solids view as a TIFF or JPEG2000, or create an animated movie (.avi)
39. Future Directions – Neurolucida 360 & SpineStudio
• Partnership with Dr. Patrick Hof and the original developers of NeuronStudio
• Full 3D interactive tracing and editing
• Open API for 3rd-party algorithm plug-ins
40. Thanks!
National Institutes of Health
MBF Programmers, Staff, and Staff Scientists
All of you for attending our workshop
Current MBF Customers who provided the image data
NIMH grants MH076188, MH085337, MH93011