The document provides information about NanoFocus AG's μsurf confocal microscopy technology.
1. The μsurf technology uses confocal microscopy to rapidly acquire 3D topography, roughness, and thickness measurements with nanometer-scale precision within seconds.
2. It operates by focusing light through a multi-pinhole disk and objective lens onto the sample surface; only light that is in focus reaches the CCD camera to form an image, and scanning the pinhole disk enables full surface coverage.
3. The technology conforms to international surface metrology standards and is used for applications such as quality control, research and development, and medical device manufacturing across various industries.
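The depth-from-peak idea behind such confocal topography can be sketched numerically. This is a toy model, not NanoFocus's published processing: one pixel's height is estimated as the z position where the pinhole-passed intensity peaks, refined with a parabolic fit through the peak and its neighbours.

```python
import numpy as np

def peak_depth(profile, z_step):
    """Toy confocal depth estimate for one pixel: take the z sample with the
    strongest in-focus response, then refine with a parabolic fit through the
    peak and its two neighbours for sub-sample precision."""
    k = int(np.argmax(profile))
    k = min(max(k, 1), len(profile) - 2)       # keep room for both neighbours
    ym, y0, yp = profile[k - 1], profile[k], profile[k + 1]
    denom = ym - 2 * y0 + yp
    frac = 0.0 if denom == 0 else 0.5 * (ym - yp) / denom
    return (k + frac) * z_step

# synthetic axial response peaking between samples, at z = 2.3 * z_step
z = np.arange(8)
profile = np.exp(-((z - 2.3) ** 2))
```

The parabolic refinement is what lets the estimate land between z samples rather than snapping to the scan step.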
The document discusses photographic principles, including the evolution of cameras from pinhole cameras to modern digital SLRs. It covers key camera components like lenses, shutters, and sensors. Exposure is controlled through the aperture, shutter speed, and ISO. Lenses use different focal lengths to capture different angles of view. Autofocus works by comparing the contrast between adjacent pixels to achieve focus. Image stabilization compensates for angular and shift camera shake. Intelligent recognition allows cameras to detect faces and scenes. A variety of equipment is needed for taking, processing, and developing photos.
Learning Moving Cast Shadows for Foreground Detection (VS 2008), Jia-Bin Huang
This document summarizes a research paper that presents a new algorithm for detecting foreground objects and moving shadows in surveillance videos. The algorithm uses Gaussian mixture models to learn pixel-based models of cast shadows on background surfaces over time. However, learning pixel-based models can be slow if motion is infrequent. To address this, the algorithm also builds a global shadow model that uses global-level information to help update the local shadow models more quickly. Foreground objects are modeled using nonparametric density estimation of spatial and color information. Finally, background, shadow, and foreground models are combined in a Markov random field energy function that can be efficiently optimized using graph cuts to perform foreground-shadow segmentation. Experimental results demonstrate the effectiveness of the proposed approach.
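The recursive per-pixel learning step can be sketched minimally. The paper itself maintains full Gaussian mixtures plus a global shadow model and an MRF; this single-Gaussian version, with an illustrative function name, only shows the update idea.

```python
import numpy as np

def update_model(mean, var, x, alpha=0.05, k=2.5):
    """Minimal per-pixel background update in the spirit of pixel-based
    models (the paper uses full Gaussian mixtures). A sample within k
    standard deviations is treated as background and refines the model;
    anything else is flagged foreground and leaves the model untouched."""
    is_bg = abs(x - mean) <= k * np.sqrt(var)
    if is_bg:
        mean = (1 - alpha) * mean + alpha * x
        var = (1 - alpha) * var + alpha * (x - mean) ** 2
    return mean, var, not is_bg

mean, var = 100.0, 16.0
for x in [101.0, 99.0, 102.0, 100.0]:       # background samples: model adapts
    mean, var, fg = update_model(mean, var, x)
_, _, fg = update_model(mean, var, 200.0)   # a bright object: flagged foreground
```

The learning rate `alpha` controls exactly the speed/stability trade-off the summary mentions: a small `alpha` learns slowly when motion is infrequent, which is what the paper's global model compensates for.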
This document describes a new method for georegistering and stabilizing aerial video over mountainous terrain using LIDAR data. The method registers images to high-resolution digital elevation models by generating predicted images from the DEM and sensor model, registering these to the actual images, and correcting the sensor model. Examples show the method stabilizes shaky video, tracks moving objects, produces orthorectified video draped over DEMs, and aligns video and thermal infrared mosaics with map graphics in Google Earth. The method processes images in about 1 second and achieves absolute geolocation accuracy of 1-2 meters.
This document presents a summary of a research paper on shape from focus. Shape from focus is a technique that uses differences in focus levels across a series of images to obtain depth information and reconstruct the 3D shape of an object. The paper develops a sum-modified Laplacian (SML) operator to provide local measures of image focus quality. The SML operator is applied to images captured at different focus levels to determine focus measures. A depth estimation algorithm then interpolates the focus measures to obtain accurate depth estimates for each point. Results show the SML operator provides robust focus measures and the overall shape from focus approach can effectively reconstruct shapes, making it suitable for challenging visual inspection problems.
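The sum-modified-Laplacian focus measure can be sketched in a few lines of NumPy. The windowing and threshold are simplified here relative to the paper; the core point is that taking absolute values of the two second derivatives prevents them cancelling each other.

```python
import numpy as np

def sml(img, step=1, threshold=0.0):
    """Sum-modified-Laplacian focus measure. The modified Laplacian adds
    |d2I/dx2| and |d2I/dy2| so opposite-signed second derivatives cannot
    cancel; SML sums the values at or above a threshold."""
    ml = (np.abs(2 * img - np.roll(img, step, 0) - np.roll(img, -step, 0)) +
          np.abs(2 * img - np.roll(img, step, 1) - np.roll(img, -step, 1)))
    return np.where(ml >= threshold, ml, 0.0).sum()

rng = np.random.default_rng(0)
sharp = rng.random((32, 32))                  # high-frequency texture: in focus
blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, -1, 0) +
           np.roll(sharp, 1, 1) + np.roll(sharp, -1, 1)) / 5  # crude defocus
```

Evaluating `sml` on each image of a focus stack and interpolating the per-pixel maxima is exactly the depth-estimation step the summary describes.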
1. Ramesh Raskar discusses his research in computational photography and creating new types of cameras that go beyond traditional camera capabilities.
2. The goal is to develop imaging platforms that have a deeper understanding of the visual world than humans by capturing and analyzing more information.
3. Examples of this research include cameras that can capture light fields and refocus images after capture, cameras that can remove motion blur in a single photo, and techniques for capturing high-speed motion with imperceptible tags.
This document summarizes a method for acquiring stereo image pairs with pixel-accurate ground truth correspondence information using structured light. The method involves projecting patterns of structured light onto a scene using one or more light projectors while capturing images using a pair of cameras. By decoding the projected light patterns, each pixel can be uniquely labeled, allowing trivial determination of correspondences between camera views. The structured light patterns help overcome limitations of existing stereo datasets in evaluating stereo matching algorithms.
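The per-pixel labelling step can be sketched with binary Gray codes, a common choice for structured-light coding; the document does not say which code family was used, so that choice is an assumption here.

```python
def gray_patterns(width, n_bits):
    """Structured-light Gray-code patterns: pattern b is the b-th Gray-code
    bit of each projector column index (1 = column lit in that pattern)."""
    return [[(x ^ (x >> 1)) >> b & 1 for x in range(width)]
            for b in range(n_bits)]

def decode(bits):
    """Recover a column index from its observed Gray-code bits
    (inverse Gray code: cumulative XOR of right shifts)."""
    g = sum(bit << b for b, bit in enumerate(bits))
    x = 0
    while g:
        x ^= g
        g >>= 1
    return x

pats = gray_patterns(16, 4)
codes = [decode([pats[b][x] for b in range(4)]) for x in range(16)]
```

Once each camera pixel has decoded its projector column (and, with a second pattern set, row), correspondence between the two camera views is a trivial lookup, which is the "pixel-accurate ground truth" the summary refers to.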
This document discusses applications of fiber optics and holography. It covers fiber optics topics like types of optical fibers, numerical aperture, acceptance angle, and attenuation in optical fibers. For holography, it discusses the basic principles, construction and reconstruction of holograms, and applications like holographic data storage, digital holography, use in banknotes, and holographic art.
The document discusses computational photography and the future of cameras. It describes how cameras could encode light in time and space using coded apertures and flutter shutters to capture more information from a single photo. This would allow for features like digital refocusing and motion deblurring. It also discusses using masks inside cameras to capture 4D light field data with a 2D sensor, and how this could enable features like refocusing after the photo is taken. Finally, it proposes new types of cameras that could reconstruct 3D shape from a single photo or enable high-speed motion capture using imperceptible projected patterns.
This document discusses various optical and technical aspects of camera lenses, including:
1) It defines focal length as the distance between a lens and the point where light passing through converges, known as the focal point. Shorter focal lengths provide wide-angle views while longer focal lengths provide magnified close-up views.
2) F-number and f-stop are defined, with f-number indicating the maximum light a lens can admit and f-stop indicating light levels at smaller iris openings. Smaller f-numbers and f-stop numbers admit more light.
3) The relationship between aperture, focal length, and depth of field is explained. Smaller apertures provide deeper depth of field, while larger apertures produce a shallower depth of field.
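The aperture/depth-of-field relationship in item 3 can be made numeric with the standard thin-lens hyperfocal formulas; the 50 mm lens, 3 m subject distance, and full-frame circle of confusion below are just example values.

```python
import math

def dof_limits(f, N, c, s):
    """Near/far limits of acceptable sharpness (thin-lens approximation).
    f: focal length, N: f-number, c: circle of confusion, s: subject
    distance, all in mm. Uses the hyperfocal distance H = f^2/(N c) + f."""
    H = f * f / (N * c) + f
    near = H * s / (H + (s - f))
    far = math.inf if s >= H else H * s / (H - (s - f))
    return near, far

# 50 mm lens, subject at 3 m, full-frame circle of confusion c = 0.03 mm
n2, f2 = dof_limits(50, 2.0, 0.03, 3000)     # wide open at f/2
n11, f11 = dof_limits(50, 11.0, 0.03, 3000)  # stopped down to f/11
```

Running this gives a depth of field of roughly 0.4 m at f/2 versus roughly 2.7 m at f/11, the numeric version of "smaller apertures provide deeper depth of field".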
This is a slide deck for the IEEE International Conference on Computational Photography (ICCP) 2016 at Northwestern University.
See http://omilab.naist.jp/project/LFseg/ for details.
1. The document discusses the new Canon EOS 60D digital SLR camera. It provides details about the camera's 18-megapixel sensor, DIGIC 4 processor, and Vari-Angle LCD screen.
2. The camera allows for high-speed continuous shooting at 5.3 frames per second and includes creative filters and in-camera raw processing capabilities.
3. Additional features discussed include the camera's movie recording functions, dust removal system, quick control screen, and rugged design capable of withstanding everyday use.
Retrieving Information from Satellite Images by Detecting and Removing Shadow, IJTET Journal
In accordance with the characteristics of remote sensing images, we put forward a color-intensity method for shadow detection and removal. Some approaches to shadow detection and removal use particular color and spectral properties of shadows. In this method, the color planes of the input satellite image are computed and the RGB values are separated. The chromaticity is then calculated to determine the average value of the segmented region. The color-intensity algorithm is applied to remove the shadow and retrieve the corresponding information.
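The chromaticity computation this kind of method relies on is simple enough to show directly; the half-intensity "shadow" values below are synthetic.

```python
def chromaticity(r, g, b):
    """Normalised rgb chromaticity: overall brightness divides out, so a
    surface and its shadow get near-identical chromaticity even though
    their intensities differ, which is the cue shadow detection exploits."""
    s = r + g + b
    if s == 0:
        return (1 / 3, 1 / 3, 1 / 3)   # convention for pure black
    return (r / s, g / s, b / s)

lit = chromaticity(180, 120, 60)       # sunlit patch
shadow = chromaticity(90, 60, 30)      # same surface at half the intensity
```

Both calls return (0.5, 0.333..., 0.166...): intensity drops in the shadow, chromaticity does not, so thresholding intensity within matching-chromaticity regions isolates the shadow.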
LLTech Light-CT comparison vs OCT and confocal microscopy, LLTech
A comparison of LLTech's Light-CT technology versus:
- OCT (optical coherence tomography).
- confocal microscopy.
We demonstrate that LLTech's Light-CT is best suited for tissue imaging at the cellular level.
Light is a type of electromagnetic wave that stimulates the optic nerves to create vision. It comes in a range of wavelengths from gamma rays to radio waves. For photography, the most important wavelengths are those in the visible light spectrum from 400-700nm.
When light passes from one medium to another, such as from air to glass, it changes direction in a phenomenon called refraction. The degree of refraction is indicated by the index of refraction. Dispersion occurs when the refractive index varies by wavelength, separating light into its component colors. Reflection causes a portion of the light to change direction entirely rather than refract.
Key optical concepts in photography include the optical axis that connects lens elements and paraxial rays, which travel close to that axis.
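The wavelength dependence of refraction (dispersion) mentioned above is commonly modelled with Cauchy's empirical formula n(λ) = A + B/λ²; the coefficients below are illustrative values for a crown glass, not taken from the document.

```python
def cauchy_n(wavelength_nm, A=1.5046, B=4200.0):
    """Cauchy's empirical dispersion formula n(lambda) = A + B / lambda^2,
    with B in nm^2. Coefficients here are illustrative crown-glass values."""
    return A + B / wavelength_nm ** 2

n_violet = cauchy_n(400)   # short wavelength: larger n, bent more
n_red = cauchy_n(700)      # long wavelength: smaller n, bent less
```

Since n is larger at 400 nm than at 700 nm, violet refracts more strongly than red, which is exactly why refraction separates white light into its component colors.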
This document provides definitions and explanations of various optical terminology related to light passing through a lens, including:
- Dispersion, refraction, diffraction, reflection, focal point, focal length, principal point, image circle, aperture ratio, numerical aperture, optical axis, and more. It discusses concepts such as entrance pupil, exit pupil, angular aperture, and how they relate to lens performance. The document also covers topics like vignetting, the cosine law, and flare. Overall, it serves as a comprehensive reference for understanding optical and photographic lens terminology.
A maskless exposure device for rapid photolithographic prototyping of sensor ..., Dhanesh Rajan
A very cost-effective maskless exposure device (MED) for fast lithographic prototyping of various layouts is presented. The device is assembled from a digital light processing (DLP) projector, an optical microscope, alignment stages, and a web camera. Layouts created on a computer screen can be easily transferred to substrate surfaces without expensive photomasks, and the process can be repeated by introducing new drawings on the screen. Components are tuned for a constant exposure area, and a resolution of around 20 μm is currently possible without reduction lenses. The MED has been used successfully to pattern surfaces of silicon, glass, metal, and other materials. The device can be assembled from commercially available components at very low cost and used effectively in fast prototyping applications such as MEMS, microfluidics, and the patterning of sensor and electrode structures.
A Method of Survey on Object-Oriented Shadow Detection & Removal for High Res..., IJERA Editor
High-resolution remote sensing images offer great possibilities for urban mapping. Unfortunately, shadows cast by buildings cause several problems. This paper focuses on obtaining high-resolution colour remote sensing images and on removing shaded regions in both urban and rural areas. A region-growing thresholding algorithm is used to detect shadows and extract features from the shadow region: neighbouring pixels are added to the seed points when their properties match those of the region, so pixels with similar properties are grouped together across the image. IOOPL matching is then used to remove the shadow from the image. The method is shown to remove about 80% of the shaded region efficiently.
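The region-growing step can be sketched minimally. The 4-connectivity and fixed intensity tolerance below are simplifying assumptions; the paper grows regions on richer pixel properties.

```python
from collections import deque
import numpy as np

def region_grow(img, seed, tol):
    """Grow a region from `seed`, absorbing 4-connected neighbours whose
    intensity is within `tol` of the seed value: a minimal version of the
    region-growing threshold step used for shadow extraction."""
    h, w = img.shape
    seed_val = int(img[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(int(img[ny, nx]) - seed_val) <= tol):
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask

img = np.array([[20, 22, 200],
                [21, 23, 210],
                [205, 24, 215]])          # dark (shadow) pixels vs bright ones
shadow = region_grow(img, (0, 0), tol=10)
```

Starting from the dark seed in the corner, the mask covers exactly the five dark pixels and leaves the bright ones out, which is the grouping behaviour the paragraph describes.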
Luigi Giubbolini | 2D Image Fuzzy Deconvolution and Scattering Centre Detection
An innovative technique based on fuzzy deconvolution for scattering centre detection (F-SCD) is proposed, together with its FPGA implementation for real-time deployment in UAV and automotive collision-avoidance applications.
1) Light refracts when passing from one medium to another of different density, bending towards the normal. The refractive index is a ratio of light speeds and relates the angle of incidence to refraction.
2) Total internal reflection occurs at a critical angle when light passes from a dense to less dense medium.
3) A prism disperses white light into a spectrum due to different refractive indices for each color, with violet refracting most and red least.
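Points 1 and 2 above follow directly from Snell's law n₁ sin θᵢ = n₂ sin θᵣ; the helper function name below is illustrative.

```python
import math

def refraction_angle(theta_i_deg, n1, n2):
    """Snell's law: n1 * sin(theta_i) = n2 * sin(theta_r). Returns the
    refracted angle in degrees, or None when the incidence angle exceeds
    the critical angle (total internal reflection)."""
    s = n1 * math.sin(math.radians(theta_i_deg)) / n2
    return None if abs(s) > 1 else math.degrees(math.asin(s))

bend_in = refraction_angle(30, 1.0, 1.5)        # air into glass: bends toward normal
critical = math.degrees(math.asin(1.0 / 1.5))   # glass (n = 1.5) to air
tir = refraction_angle(50, 1.5, 1.0)            # past critical angle: None
```

For glass with n = 1.5 the critical angle comes out near 41.8°, so the 50° ray in the last line is totally internally reflected.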
This presentation is all about Microscope .... The miracle instrument which revolutionised the study of microbiology and Biological science . Be it Cell studies, molecule studies, pathogen studies, virology etc etc ..... All has become possible for this instrument. let us understand the functioning , applications of this instrument .
The document discusses key components and features of cameras, both film and digital, for capturing images. It covers the aperture, shutter, lenses, film/image sensor, viewfinder, and memory storage. It also discusses lighting aspects like flash, exposure, color, and white balance. Support devices like tripods and handheld use are mentioned. The advantages of RAW file format include retaining more image information and flexibility for post-processing compared to JPEG. File storage options are also reviewed.
This document provides information about Canon's EOS 5D Mark III digital SLR camera. It highlights key features such as the 22.3 megapixel full-frame CMOS sensor, DIGIC 5+ imaging processor, 61-point high-density reticular autofocus system, 6 frames per second continuous shooting, improved movie functions, dust and water resistance, and multiple exposure and HDR shooting modes. It also lists contact information for Canon representatives in various Asian countries.
This document describes a new ultrafast Diffuse Optical Tomography (DOT) technique developed for real-time in vivo brain imaging of songbirds. The technique uses an amplified ultrafast laser and single-shot streak camera to measure the time of flight of photons through brain tissue. This allows for a 3D reconstruction of brain activity from space and time sampling of the reflectance signal. Preliminary results show the brain tissue response to hypercapnia stimulations can be detected.
The document describes the MultiView 2000, a scanning probe microscope that allows for both tip and sample scanning. It has two scanning plates - one for the tip and one for the sample - allowing flexibility in experimental setup. Modes include near-field optical microscopy, atomic force microscopy, and confocal microscopy. Resolution is below 5nm laterally and 1nm vertically. It can image a variety of samples and integrate with optical microscopes.
The document provides an overview of microscopy, including definitions, the historical background, key variables, and types of microscopes. It describes the compound microscope's structure and functions, including the ocular lens, body tube, nose piece, objectives, stage, diaphragm, illumination, and controls. The document also discusses magnification, resolution, numerical aperture, aberrations, Kohler illumination, and provides examples of different microscope types.
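Since the overview mentions both resolution and numerical aperture, the standard Abbe limit relating them can be shown in a few lines; the wavelength and NA values below are just typical examples, not from the document.

```python
def abbe_limit_nm(wavelength_nm, na):
    """Abbe diffraction limit d = lambda / (2 * NA): the smallest
    resolvable separation for an objective of numerical aperture NA."""
    return wavelength_nm / (2 * na)

d_dry = abbe_limit_nm(550, 0.95)    # high-NA dry objective, green light
d_oil = abbe_limit_nm(550, 1.40)    # oil-immersion objective
```

The oil-immersion objective resolves finer detail (about 196 nm versus about 289 nm), which is why increasing NA, not magnification, is what improves a microscope's resolving power.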
The document discusses light field and coded aperture cameras. It describes the Stanford plenoptic camera which uses a microlens array to sample individual rays of light, capturing 14 pixels per lens. An alternative approach is a mask-based light field camera that uses a narrowband cosine mask to sample a coded combination of rays. This heterodyne approach captures half the brightness but avoids wasting pixels and issues with lens array alignment. The document outlines how such cameras can digitally refocus images and increase depth of field. It also discusses using the Fourier transform to compute a 4D light field from 2D photos captured with a mask.
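The digital refocusing described above is, at its core, shift-and-add over the captured views. The sketch below uses a toy 1D light field where one point feature moves linearly across sub-aperture views; real plenoptic refocusing applies the same idea in 2D, regardless of whether the views come from a microlens array or a mask.

```python
import numpy as np

n_u, n_x, d = 5, 64, 2          # 5 sub-aperture views; true disparity 2 px/view
lf = np.zeros((n_u, n_x))
for u in range(n_u):            # a point feature shifts linearly with view index
    lf[u, 20 + d * u] = 1.0

def refocus(lf, slope):
    """Shift-and-add refocus: undo a per-view disparity `slope`, then
    average the views. Features at that disparity align and sharpen;
    features at other depths smear out."""
    return np.mean([np.roll(row, -slope * u) for u, row in enumerate(lf)],
                   axis=0)

sharp = refocus(lf, d)    # correct slope: all views align on one pixel
blurry = refocus(lf, 0)   # wrong slope: energy spread over 5 pixels
```

Sweeping `slope` over a range of values produces the familiar refocused stack, with each depth snapping into focus at its own slope.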
1. Ramesh Raskar is an associate professor at the MIT Media Lab researching computational photography.
2. Raskar discusses three levels of computational photography - epsilon, coded, and essence photography. Coded photography uses single or few snapshots but introduces reversible encoding of light through techniques like coded exposure and coded apertures.
3. Examples of coded photography techniques presented include flutter shutter motion deblurring, coded aperture defocus, optical heterodyning for lightfield or wavefront sensing, and using a coded glare mask. The goal is to create new imaging capabilities beyond what is possible with traditional cameras.
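Why flutter-shutter deblurring works can be illustrated in the frequency domain: motion blur is a convolution with the shutter's open/close sequence, and deblurring divides by that sequence's spectrum. The 8-chop code below is an arbitrary illustrative sequence, not Raskar's published optimised code.

```python
import numpy as np

# An always-open ("box") shutter has exact zeros in its spectrum, so
# deconvolution divides by zero and those frequencies are lost. A
# fluttered (coded) shutter keeps every frequency away from zero,
# making the blur invertible. (Illustrative 8-chop code only.)
box = np.ones(8)
coded = np.array([1, 0, 1, 1, 0, 0, 1, 1], dtype=float)

box_min = np.abs(np.fft.fft(box))[1:].min()      # exactly 0: info destroyed
coded_min = np.abs(np.fft.fft(coded))[1:].min()  # bounded away from 0
```

This is the "reversible encoding of light" idea in miniature: the coded exposure sacrifices some light but preserves every spatial frequency of the moving object.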
This document discusses compressive displays and related technologies for reducing the bandwidth requirements of multi-view and light field displays. It describes several technologies including layered 3D displays, polarization field displays, and high-rank 3D displays that decompose 4D light fields into lower dimensional representations. It also discusses using mathematical techniques like non-negative matrix factorization for further compressing display data. The document promotes open collaboration through the proposed Compressive Display Consortium to advance next generation displays.
The document discusses the science and techniques of photogrammetry. Photogrammetry involves deriving precise 3D coordinates of points by viewing an area from two angles and mathematically intersecting converging lines in space. It allows for the creation of accurate 3D models, textured models, and dense surface models from photographs for applications like measurements, visualization, and meshing. The process involves camera calibration, data acquisition through stereo or all-directional photography, feature marking, orientation, idealization, point cloud generation, meshing, surface generation, texturing, and exporting the 3D data.
This 3-sentence summary provides an overview of the key points about the online Christian filmmaking class:
The PowerPoint presentation is designed to be taught live online through Google Hangouts and covers topics like camera equipment, lighting, sound recording, and cinematography to teach filmmaking skills for Christian films. The class is taught by instructor Norton Rodriguez and interested students should contact TheGodofmoses@gmail.com for more information or to schedule a private Google Hangouts session for the online class.
This document summarizes Ramesh Raskar's work on coded computational photography. It describes using coded exposure to enable motion deblurring from a single photo in 2006. It also describes using a coded aperture to enable full resolution digital refocusing from a single photo in 2007 and using it for glare reduction in 2008. Additionally, it discusses using optical heterodyning to capture a 4D light field from a 2D sensor and single photo in 2007, as well as coding illumination and spectrum for applications like motion capture and acquiring an agile wavelength profile. The document outlines a progression from epsilon to coded to essence photography.
The document discusses light field imaging principles and applications. It covers how light field cameras capture information about the direction of light rays in a scene to allow refocusing and changing perspectives in images. Applications discussed include virtual and augmented reality displays, as light field techniques can help reduce issues like vergence-accommodation conflict. It also describes research areas like improving light field storage and representation, capturing light fields with camera arrays, using microlens arrays in plenoptic cameras, and developing light field processing and rendering methods.
The document provides technical specifications for the Nikon D3X digital SLR camera. It lists the camera's type, image sensor, image size, lens mount, autofocus system, metering, shutter, sensitivity, continuous shooting, image stabilization, LCD monitor, video, flash, and battery. The D3X has a 24.5 megapixel FX-format CMOS image sensor, 51-point autofocus system, shutter speed up to 1/8,000 second, ISO 100-1600 sensitivity, 5 frames per second continuous shooting, in-camera image stabilization, 3-inch LCD monitor, and compatibility with Nikon's Creative Lighting System.
Computer Graphics Modelling and Rendering reportHugo King
The document describes Hugo King's computer graphics modelling and rendering project. It summarizes his processes for modelling stairs, a door, chairs, and a bottle in 3D Studio Max. For each object, Hugo details the modelling approach, camera angle, texturing, lighting, and rendering techniques used. The goal was to achieve photo-realistic results for 4 scenes using modelling, textures, lighting, and rendering software.
This document summarizes a research paper on Video Based Human Interaction (VBHI) using a 4D Touchpad. It discusses how VBHI uses computer vision to track hand gestures as input in a region of interest, rather than global user tracking. The 4D Touchpad allows for intuitive gesture inputs in 3D space plus time. It works by using stereo cameras and a projector to project an interface onto a table, then recognizes gestures like flipping or twisting based on their spatiotemporal signatures. The 4D Touchpad provides a natural gesture language for interaction without devices like a mouse.
This document provides an overview of night vision technology. It discusses the history of night vision beginning in Germany in the 1930s. It describes how night vision works using either thermal imaging or image enhancement to detect infrared light. The document outlines the different generations of night vision devices and their improvements. It lists common night vision equipment like scopes, goggles, and cameras. Applications of night vision technology include military, hunting, surveillance, and automobiles. The future of night vision may allow sharing images between devices over long distances.
- The document describes a method for using image processing software to automate and improve the precision of focusing in femtosecond direct laser writing. Images of laser light reflected off glass samples at different positions relative to the focal plane were analyzed. Signature intensity and area patterns were used to locate the focal plane within 500nm accuracy. Future work includes using this method to assist with additional laser writing processes and surface profiling applications.
The document discusses the history and evolution of camera technology from the camera obscura to modern digital cameras. It describes early devices like the pinhole camera and box camera that utilized film. The first digital camera was introduced by Sony in 1981. Key developments included the first digital SLR by Kodak in 1991 and improvements in image sensor technology using CCD and CMOS sensors. The document also covers factors like image resolution and sensor size that impact image quality. While digital cameras are now common, some professionals still prefer film for its wide exposure latitude and image quality.
Confocal microscopy is a noninvasive imaging technique that enables high-resolution analysis of ocular surface microstructure. It was invented in 1957 and works by illuminating samples point-by-point and rejecting out-of-focus light to generate optical sections. There are various types including tandem scanning, scanning slit, and laser scanning confocal microscopes. Clinical applications include imaging corneal layers, assessing wound healing after refractive surgery, and diagnosing infections. It allows evaluation of conditions like diabetic keratopathy by quantifying changes in subbasal nerve fibers. Advanced techniques like second harmonic generation further study corneal collagen architecture.
The document describes a method for creating panoramic images from video frames. Key steps include camera calibration to determine intrinsic parameters, feature detection and matching between frames using SIFT or Shi-Tomasi features, selecting key frames when sufficient camera movement is detected, and stitching the key frames onto a cylindrical projection to create the panorama. Experimental results show Shi-Tomasi with optical flow is faster than SIFT with FLANN for feature matching.
This document provides an overview of computer graphics systems and models. It discusses the applications of computer graphics, including display, design, simulation, and user interfaces. It then describes the basic components of a graphics system, including the processor, memory, frame buffer, and input/output devices. Several camera models are introduced, including the pinhole camera and synthetic camera model. The document also discusses graphics application programming interfaces, the modeling-rendering paradigm, and the geometric pipeline for computer graphics processing.
Connector Corner: Seamlessly power UiPath Apps, GenAI with prebuilt connectorsDianaGray10
Join us to learn how UiPath Apps can directly and easily interact with prebuilt connectors via Integration Service--including Salesforce, ServiceNow, Open GenAI, and more.
The best part is you can achieve this without building a custom workflow! Say goodbye to the hassle of using separate automations to call APIs. By seamlessly integrating within App Studio, you can now easily streamline your workflow, while gaining direct access to our Connector Catalog of popular applications.
We’ll discuss and demo the benefits of UiPath Apps and connectors including:
Creating a compelling user experience for any software, without the limitations of APIs.
Accelerating the app creation process, saving time and effort
Enjoying high-performance CRUD (create, read, update, delete) operations, for
seamless data management.
Speakers:
Russell Alfeche, Technology Leader, RPA at qBotic and UiPath MVP
Charlie Greenberg, host
AppSec PNW: Android and iOS Application Security with MobSFAjin Abraham
Mobile Security Framework - MobSF is a free and open source automated mobile application security testing environment designed to help security engineers, researchers, developers, and penetration testers to identify security vulnerabilities, malicious behaviours and privacy concerns in mobile applications using static and dynamic analysis. It supports all the popular mobile application binaries and source code formats built for Android and iOS devices. In addition to automated security assessment, it also offers an interactive testing environment to build and execute scenario based test/fuzz cases against the application.
This talk covers:
Using MobSF for static analysis of mobile applications.
Interactive dynamic security assessment of Android and iOS applications.
Solving Mobile app CTF challenges.
Reverse engineering and runtime analysis of Mobile malware.
How to shift left and integrate MobSF/mobsfscan SAST and DAST in your build pipeline.
Generating privacy-protected synthetic data using Secludy and MilvusZilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
5th LF Energy Power Grid Model Meet-up SlidesDanBrown980551
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Mircosoft Teams session or in person at TU/e located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid -Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
"Frontline Battles with DDoS: Best practices and Lessons Learned", Igor IvaniukFwdays
At this talk we will discuss DDoS protection tools and best practices, discuss network architectures and what AWS has to offer. Also, we will look into one of the largest DDoS attacks on Ukrainian infrastructure that happened in February 2022. We'll see, what techniques helped to keep the web resources available for Ukrainians and how AWS improved DDoS protection for all customers based on Ukraine experience
The Microsoft 365 Migration Tutorial For Beginner.pptxoperationspcvita
This presentation will help you understand the power of Microsoft 365. However, we have mentioned every productivity app included in Office 365. Additionally, we have suggested the migration situation related to Office 365 and how we can help you.
You can also read: https://www.systoolsgroup.com/updates/office-365-tenant-to-tenant-migration-step-by-step-complete-guide/
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
Discover top-tier mobile app development services, offering innovative solutions for iOS and Android. Enhance your business with custom, user-friendly mobile applications.
Fueling AI with Great Data with Airbyte WebinarZilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
2. Application area of the µsurf Technology
[Chart: application range of µsurf areal confocal microscopy, plotted as structure depth (1 nm to 1 mm) against structure width (1 µm to 100 mm). The covered region spans geometry, roughness, and flatness measurements from millimeter-scale structures down to the nanometer range.]
3. Industrial metrology for laboratory and production
The robust µsurf sensor technology is based on the CMP technology (Confocal Multi-Pinhole) of NanoFocus. Within seconds, topography, roughness and layer thickness in the micro and nanometer ranges are acquired.
4. Technology
1. Principle
The confocal microscope from NanoFocus comprises an LED light source (1), a rotating multi-pinhole disc (2), an objective lens with a piezo drive (3) and a CCD camera (5).

The LED light is focused through the multi-pinhole disc (MPD) and the objective lens onto the sample surface (4), which reflects the light. The reflected light is reduced by the pinholes of the MPD to that part which is in focus, and this falls on the CCD camera.

The sketch shows a point on the surface being imaged through one pinhole of the MPD. The MPD has a large number of such holes arranged in a special pattern. The rotation of the MPD enables seamless scanning of the entire sample surface within the image field.

[Sketch legend: (1) LED source, (2) multi-pinhole disc, (3) objective lens, (4) sample, (5) CCD camera]
2. The Confocal Curve
An image from a conventional optical microscope contains both sharp and blurred detail. In the confocal image, by contrast, the blurred (unfocused) detail is filtered out by the operation of the MPD: light reflected in focus is captured by the CCD camera, while out-of-focus reflected light is masked out. Only points from the focal plane are displayed, with the intensity following the confocal curve. Thus the confocal microscope is capable of high resolution in the nanometer range. The precision of the height values follows from the focal range of the confocal curve, i.e. its full width at half maximum (FWHM).

[Figures: microscope image (left) showing out-of-focus as well as in-focus points; confocal image (right) showing only in-focus points. Sketch labels: objective lens, focal planes, sample. Confocal curve plotted as signal strength I against height z, with the FWHM of the peak marked; from this focal range comes the precision of the height values.]
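The link between the confocal curve's focal range and height precision can be made concrete with a small numeric sketch. This is an illustrative textbook-style model only (a Gaussian approximation of the axial response, with an assumed 1 µm FWHM), not a NanoFocus specification:

```python
import numpy as np

def confocal_axial_response(z_um, fwhm_um=1.0):
    """Idealized confocal curve: detected intensity vs. defocus z.
    A Gaussian is a common approximation of the sinc^2-shaped axial
    response of a confocal microscope (illustrative values only)."""
    sigma = fwhm_um / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return np.exp(-0.5 * (z_um / sigma) ** 2)

def measure_fwhm(z_um, intensity):
    """Numerically measure the full width at half maximum of a sampled
    confocal curve by interpolating the two half-maximum crossings."""
    half = 0.5 * intensity.max()
    above = np.where(intensity >= half)[0]
    lo, hi = above[0], above[-1]
    z_lo = np.interp(half, [intensity[lo - 1], intensity[lo]],
                     [z_um[lo - 1], z_um[lo]])       # rising edge
    z_hi = np.interp(half, [intensity[hi + 1], intensity[hi]],
                     [z_um[hi + 1], z_um[hi]])       # falling edge
    return z_hi - z_lo

z = np.linspace(-3, 3, 601)              # defocus in micrometers
curve = confocal_axial_response(z, fwhm_um=1.0)
print(f"measured FWHM: {measure_fwhm(z, curve):.3f} um")
```

Objectives with a higher numerical aperture narrow this curve, which is why a tighter FWHM translates into more precise height values.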
3. Confocal Image Stack
Each confocal image is a horizontal slice through the topography of the sample. Capturing images at different focal heights produces a stack of such images; the confocal microscope achieves this through precise vertical displacement of the objective lens by means of a piezo drive. 200 to 400 confocal images are generally captured within a few seconds, after which the software reconstructs an exact three-dimensional height image from the stack. Surface heights are therefore measured, not calculated.

[Figure: Measurement process – the objective lens is displaced in height z by a piezo drive. Image stack – up to 1,000 individual confocal images are captured. Results – the surface topography is reconstructed from the confocal images.]
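The per-pixel height reconstruction from such a stack can be sketched as follows. This is a simplified illustration of the general peak-detection approach (argmax along z with parabolic sub-step refinement), not NanoFocus' actual algorithm; all shapes and values are made up:

```python
import numpy as np

def height_from_stack(stack, z_positions):
    """Reconstruct a height map from a confocal image stack.

    stack       : (n_z, rows, cols) array of confocal intensities
    z_positions : (n_z,) objective-lens heights, assumed equidistant
    Returns a (rows, cols) height map: brightest slice per pixel,
    refined by a parabola through the three samples around the peak.
    """
    n_z = stack.shape[0]
    dz = z_positions[1] - z_positions[0]
    k = np.argmax(stack, axis=0)              # index of brightest slice
    k_in = np.clip(k, 1, n_z - 2)             # keep 3-point window inside
    rows, cols = np.indices(k.shape)
    y0 = stack[k_in - 1, rows, cols]
    y1 = stack[k_in, rows, cols]
    y2 = stack[k_in + 1, rows, cols]
    denom = y0 - 2.0 * y1 + y2
    shift = np.where(denom != 0, 0.5 * (y0 - y2) / denom, 0.0)
    return z_positions[k_in] + shift * dz

# tiny synthetic stack: a tilted plane, Gaussian confocal curve per pixel
z = np.linspace(0.0, 10.0, 200)
true_h = np.linspace(3.0, 7.0, 8).reshape(1, 8).repeat(8, axis=0)
stack = np.exp(-0.5 * ((z[:, None, None] - true_h[None]) / 0.8) ** 2)
h = height_from_stack(stack, z)
print(np.abs(h - true_h).max())   # sub-step reconstruction error
```

The parabolic refinement is what lets the height estimate resolve finer than the piezo step size, consistent with the nanometer-range precision described above.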
4. Quantitative analysis and reports
The measured values are transferred to the analysis program, which contains standardized analysis methods such as ISO 25178 or ISO 4287 roughness (2D and 3D parameters), form, translucent film thickness and microgeometry. Results are automatically transferred to a pre-formatted report.

[Images: color-scaled height display; single profile from the measurement; automatically produced measurement report]
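For a sense of what the standardized roughness parameters compute, here is a minimal sketch of Ra/Rq (profile, in the spirit of ISO 4287) and Sa/Sq (areal, in the spirit of ISO 25178). The standards additionally prescribe form removal and filtering steps that are omitted here:

```python
import numpy as np

def profile_roughness(z_profile):
    """2D profile parameters in the spirit of ISO 4287 (no profile
    filtering applied; the standard prescribes it before evaluation)."""
    z = z_profile - z_profile.mean()       # reference mean line
    return {
        "Ra": np.mean(np.abs(z)),          # arithmetic mean deviation
        "Rq": np.sqrt(np.mean(z ** 2)),    # root mean square deviation
        "Rt": z.max() - z.min(),           # total peak-to-valley height
    }

def areal_roughness(z_surface):
    """3D areal parameters in the spirit of ISO 25178 (Sa, Sq)."""
    z = z_surface - z_surface.mean()
    return {"Sa": np.mean(np.abs(z)), "Sq": np.sqrt(np.mean(z ** 2))}

profile = np.sin(np.linspace(0, 4 * np.pi, 1000))    # synthetic profile
print(profile_roughness(profile))
surface = np.random.default_rng(0).normal(0, 0.1, (64, 64))
print(areal_roughness(surface))
```

For the sinusoidal test profile, Ra approaches 2/π of the amplitude and Rq approaches 1/√2, the textbook values for a sine wave.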
6. Exact and reliable 3D measurement data

The µsurf technology offers various advantages for the characterization of technical surfaces in the micro and nanometer ranges.

Measurements of an electron beam structured roll, one made with the µsurf technology and the other with a SEM, have comparable resolution. In contrast to the SEM (x, y), µsurf delivers the data in true three-dimensional coordinates (x, y, z). Only with this quantitative data can an exact analysis of 3D surface parameters be performed, delivering a larger range of information about the surface texture.

Additionally, in an independent third-party study performed by the National Institute of Standards and Technology (NIST) comparing interferometry, laser confocal microscopy, and NanoFocus' Multi-Pinhole Confocal, the µsurf technology achieved the highest correlation (99%) to tactile systems. With such high correlation to traditional methods, historical data on 2D parameters is not rendered obsolete, while surface data is acquired much faster and without damaging the measurement sample.

[Figure: comparison of a SEM (REM) image and a µsurf confocal measurement of a structured surface (100 × 90 µm). REM image: Alcan Research Center, Neuhausen.]
[Figure: correlation of a tactile system and the µsurf confocal microscope. Signal A (tactile) and signal B (µsurf), each plotted in µm over the profile length; the two profiles coincide with 99% correlation (KKF). Best system in the comparison study at NIST, 2005.]
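The quoted figure is a normalized cross-correlation (KKF) between an aligned tactile trace and the optical profile. A sketch with synthetic profiles (not the NIST data; all values here are invented) shows how such a coefficient is computed:

```python
import numpy as np

def profile_correlation(tactile, optical):
    """Normalized cross-correlation coefficient (KKF at zero lag)
    between two aligned surface profiles of equal length."""
    a = tactile - tactile.mean()
    b = optical - optical.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# synthetic profile: shared surface texture plus small independent
# measurement noise in each instrument's trace
rng = np.random.default_rng(42)
x = np.linspace(0.0, 1.3, 1300)
texture = np.sin(2 * np.pi * 10 * x) + 0.3 * np.sin(2 * np.pi * 33 * x)
tactile = texture + 0.05 * rng.normal(size=x.size)
optical = texture + 0.05 * rng.normal(size=x.size)
print(f"correlation: {profile_correlation(tactile, optical):.3f}")
```

A coefficient near 1 means the two instruments see the same texture; the 99% figure in the NIST study indicates the optical and tactile profiles agree to within measurement noise.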
7. Meaningful measurement results within seconds

The µsurf systems do not require preparation of the surface of the measurement sample. When the sample is positioned under the objective, the clearly arranged user interface leads the operator through the measurement process in just a few clicks. Afterwards, the measurement parameter settings can be reused for serial measurements. Already after a few seconds the µsurf technology provides meaningful data for further analysis.

When measurement data is imported into the report templates of the µsoft analysis software, a full measurement report is automatically generated. The output analysis parameters can be modified and displayed as charts, profiles and 3D pictures. With the automation software µsoft automation, measurements can be processed at a high throughput rate with highest efficiency and without user influence.

[Sidebar: Contrary to other techniques, the µsurf technology delivers meaningful measurements and analyses in a few seconds. Easy handling and intelligent software guarantee maximum efficiency.]
8. Robust precision metrology for industrial applications

The µsurf technology was developed especially for use in typically rough production environments. The robust design and features, such as integrated spring feet, guarantee reliable and correct measurement results. Environmental influences such as vibrations, dust or splash water do not affect the precision or accuracy.

Long use of the µsurf technology in support of high-paced production inspection has forged its reputation as a tool that produces high-precision measurements repeatedly in less-than-ideal conditions. Among the many that employ the µsurf technology, ThyssenKrupp Steel AG, a customer and research partner of NanoFocus AG, trusts in the µsurf at their rolling mills to get repeatable measurements on the fly.

From steel to solar, medicine to machine tools, automobiles to airplanes, electronics to optics: manufacturers, printers, forensic specialists, researchers, and physicians worldwide trust NanoFocus µsurf confocal technology to deliver precise, accurate, and reliable data every time.
9. Hightech – Made in Germany
NanoFocus designs, develops, and produces complete 3D measurement solutions, hardware and software.

The components that the µsurf systems are built from, at the manufacturing facility in Oberhausen, Germany, are of the highest quality standards. They must be low-maintenance and durable, such as the high-performance LED used as the light source. Components like these guarantee the economical use of a measurement system that is always ready to operate.

[Sidebar: Made in Germany. The components used for NanoFocus measurement systems have to go through strict selection and validation processes to guarantee the highest quality levels.]
10. Measurement and analysis conforming to international standards

Standards are catalysts for innovation. One example is the EN ISO 25178 family of standards: this norm made possible a broad range of new analyses that describe surface characteristics and their functions much more precisely. That is why NanoFocus continuously implements new standards in its measurement systems and software.

For NanoFocus it is not only important to design measurement solutions conforming to international standards. NanoFocus also uses its experts' knowledge in the field of optical metrology to actively support the development of new standards. This is yet another technical advantage of NanoFocus systems.

Conformity of the µsurf technology and software to international standards:

µsurf: ISO 25178-6, ISO 5436-1, VDI 2655-1.2a

µsoft control (control and measurement software): ISO 11562, ISO 4287, ISO 4288, ISO 5436-1, ISO 5436-2

µsoft analysis (analysis software): EUR 15178 EN, ISO 1101, ISO 11562, ISO 12085, ISO 12181-1, ISO 12181-2, ISO 12780-1, ISO 12780-2, ISO 12781-1, ISO 12781-2, ISO 1302, ISO 13565-1, ISO 25178-2, ISO 25178-6, ISO 4287, ISO 4288, ISO 5436-1, ISO 5436-2, ISO/TS 16610-1, ISO/TS 16610-20, ISO/TS 16610-22, ISO/TS 16610-31, ISO/TS 16610-40, ISO/TS 16610-41, ISO/TS 16610-49
11. Product groups

Standard – The complete package
NanoFocus' standard systems, µsurf basic, µsurf explorer and µsurf mobile, prove that the entrance into the micro and nanotechnology realm does not have to be expensive and complicated. Unpack, plug in, measure: it has never been so easy to perform three-dimensional analyses into the nanometer range. The µsurf standard systems are used in various industries: to measure the roughness of medical blood pumps, for production control of electronic components in the automotive sector, for supporting the production of complex electronic modules, for the inspection of the thinnest layers, and in metal processing or the paper and print industry.

Modular – The individual concept
Some measurement tasks are so complex that they require a customized measurement system. The broad range of standardized components, as well as the experience and competence of our hardware and software experts, allow NanoFocus to equip the µsurf custom for specific requirements. The custom systems are used especially for extremely challenging and specialized surfaces, as well as in basic research and in medical device manufacturing, where strict norms have to be met.

Business Solution – The fully developed, industry-specific solution
Increasingly, industries request measurement systems that meet the exacting requirements of their products as well as their development and production processes. NanoFocus' business solutions offer just this: industry-specific solutions, developed with key customers in the respective markets. This is how industry-related knowledge and fully developed NanoFocus technology are combined in the business solutions. The result is a system able to perform industry-specific and complex measurement tasks close to production, without investing in further customized design.

One of these solutions is µsurf cylinder, which is used by premium car manufacturers worldwide for development and production control of energy-efficient and wear-reduced engine cylinders. With the business solution µsurf solar, development processes of solar cells are sped up, production processes are verified, and the performance and quality of solar cells are improved continuously.

Integration – Benefit from technical advance
The sensors and measurement heads of the µsurf systems can be fully integrated into production processes. Many industry partners of NanoFocus use the technology to equip their products with the best components for three-dimensional surface analysis. Besides metrology and medical technology, it is especially the forensic sector that utilizes the µsurf sensors; for example, the world's leading company for forensic analysis systems uses NanoFocus measuring heads for the precise analysis of bullets.

[Table: applications of µsurf systems (µsurf basic, µsurf explorer, µsurf custom, µsurf cylinder, µsurf solar, µsurf sensor) sorted by measurement task: process development (R&D), process control, and production control.]
12. 3D measurement system for laboratory and production

µsurf explorer
- Flexible all-round metrology solution
- Compact design
- User-friendly concept

µsurf explorer is a flexible and user-friendly 3D measurement system for precise surface analyses that can be used in measurement and testing laboratories as well as in production environments. The µsurf explorer, awarded for its compact design, delivers reliable 3D measurement results quickly and in just a few uncomplicated steps.

[Images: dental implant ground surface; step height measurement (50 nm)]
13. Ready for use everywhere
surf mobile
Mobile usage with battery
Only 5.5 kg
Motorized xyz axis
The compact and portable µsurf mobile was developed for surface measurements on large objects such as printing rolls and body parts. In a few minutes, the 5.5-kilogram, hand-carried system is ready to operate, allowing large-scale measurements on rolls along the radius of curvature.
EDT roll Half tone printing roll Gravure printing roll
3D microscopy for industrial research
µsurf basic
NEW
Turreted optics
75 mm height range
Topographical 3D view in real color
µsurf basic is a 3D microscope optimized for the requirements of industrial and industry-oriented research. The system stands out with its high measurement speed and its flexibility, especially when many different measurement tasks with varying requirements have to be accomplished.
Screw thread Rubbing wear Resistor
Measurement system made to order
µsurf custom
Modular concept
Fully automatable
For intricate surfaces
µsurf custom is designed specifically for the requirements of an individual measurement task. A large assortment of hardware and software components enables universal laboratory measurement systems as well as fully automated systems for quality control. Even for the smallest height structures, measurement results with nanometer accuracy are guaranteed within seconds.
Implant Sensor membrane Micro lenses
Specialized solution for motor cylinders
µsurf cylinder
Angulated optics
For cylinder bores from 70 mm to 165 mm dia.
Variable measurement positioning (radial, axial)
µsurf cylinder is designed to measure cylinder running surfaces in the automotive industry. The angulated optics dive into the cylinder bore, controlled by a joystick. Every position in the bore is accessible without destroying the engine. For repeat measurements and serial inspections, automated measurement protocols are stored in a database. With the Linewalking system, an additional automation solution whose track system is attached to the engine block, measurement processes can be performed even more effectively and efficiently.
Honed structure AluSil surface Coated piston
All-round solution for solar cells
µsurf solar
Up to 12 area measurements
in 1 minute
Simple and intuitive
automation
µsurf solar is a high-precision optical measurement solution for the broad range of solar applications in laboratory and production. With a portal configuration available up to the meter range, whole thin-film solar modules can be measured. A vacuum chuck for safe fixturing and the integration of specialized algorithms for better analysis of anti-reflective surfaces guarantee optimal measurement results. With the integrated automation function, measurement and analysis cycles can be programmed quickly and easily.
Laser scribes Pyramid structure Finger measurement
OEM solution for easy integration
µsurf sensor
Simple integration
Individual designs possible
Ready to use immediately
The confocal µsurf area sensor is the heart of the µsurf 3D technology. It can be integrated separately into production machines and analysis systems. Software interfaces, in the form of software development kits, enable complete integration into a higher-level software solution.
Lasered structure Milled structure Structure of a painting
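To illustrate how such an OEM sensor is typically embedded in a host application, the following Python sketch shows a minimal integration pattern: connect, trigger an acquisition, hand the height map to the host software. All names here (MusurfSensor, acquire_topography, the port string) are invented for this sketch and do not reflect the actual NanoFocus SDK.

```python
# Hypothetical wrapper around an OEM sensor SDK; all names are
# invented for illustration and are not the real NanoFocus API.
class MusurfSensor:
    """Stand-in for a vendor SDK handle to a confocal area sensor."""

    def __init__(self, port="/dev/sensor0"):
        self.port = port
        self.connected = False

    def connect(self):
        # A real SDK would open the device and load its calibration here.
        self.connected = True

    def acquire_topography(self, rows=480, cols=640):
        """Return a rows x cols height map (all zeros in this stub)."""
        if not self.connected:
            raise RuntimeError("sensor not connected")
        # A real SDK would trigger a confocal z-scan here.
        return [[0.0] * cols for _ in range(rows)]


# Host application embedding the sensor:
sensor = MusurfSensor()
sensor.connect()
heights = sensor.acquire_topography()
print(len(heights), len(heights[0]))  # prints: 480 640
```

The point of the pattern is that the host software only sees the wrapper's small surface (connect, acquire), so the sensor can be swapped or simulated without touching the rest of the system.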
Software
µsoft control
With the easy-to-use software µsoft control, measurement data can be analyzed and visualized quickly. The user interface is divided into three areas, providing a clear overview. The measurement assistant that controls the system is already integrated into µsoft control. From measuring to analyzing, only one program is needed.
µsoft analysis
µsoft analysis is a comprehensive software package for 2D and 3D surface analysis. The software always contains the latest standards and filter functions. The versions Standard, XT, and Premium are available, depending on analysis demands.
µsoft automation
With µsoft automation, individual measurements and analyses can be automated easily. All chosen parameters are saved in a measurement template, so the user can start a measurement cycle with one click. The measurement data is transferred to a database that forms the interface between the inspection system and a customized µsoft automation analytics module.
Stitch-Tool
In some cases, the measurement field of an objective does not suffice for the evaluation of large-scale characteristics such as form, roughness, or waviness. To extend the total measurement area, neighboring measurement fields can be assembled into one seamless measurement.
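The idea behind stitching neighboring measurement fields can be sketched in a few lines: remove the constant height offset between two overlapping fields, blend the shared region, and concatenate. This is a minimal illustration of the general principle only, not the Stitch-Tool's actual algorithm, which is not documented here.

```python
import numpy as np

def stitch_fields(left, right, overlap):
    """Stitch two neighboring height maps (2D arrays, same row count)
    that share `overlap` columns: remove the constant z-offset between
    the fields, then blend the shared region with a linear ramp."""
    # Estimate the constant height offset from the overlapping columns.
    shift = np.mean(left[:, -overlap:] - right[:, :overlap])
    right = right + shift
    # Linear blend across the overlap, then concatenate the rest.
    w = np.linspace(1.0, 0.0, overlap)  # weight for the left field
    blended = left[:, -overlap:] * w + right[:, :overlap] * (1.0 - w)
    return np.hstack([left[:, :-overlap], blended, right[:, overlap:]])

# Synthetic example: split a known 4x10 surface into two fields that
# share 2 columns, with a 0.5 offset added to the right field.
surface = np.arange(40, dtype=float).reshape(4, 10)
left, right = surface[:, :6], surface[:, 4:] + 0.5
merged = stitch_fields(left, right, overlap=2)
print(np.allclose(merged, surface))  # prints: True
```

Real stitching additionally has to estimate the lateral offset between fields (e.g. by cross-correlation of the overlap) and may correct tilt, not just a constant height shift.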
07/07/2010. Technical data subject to change without notice. NanoFocus and μsurf are registered trademarks of NanoFocus AG. Design: nicolaygrafik.de
Are you interested in other NanoFocus technology?
Please call us at +49 208 62 000-0 or write an email to sales@nanofocus.de.
NanoFocus AG
Lindnerstrasse 98 | 46149 Oberhausen | Phone +49 208-62 000-0 | Fax +49 208-62 000-99 | info@nanofocus.de | www.nanofocus.de
NanoFocus, Inc.
Dr. Christian M. Wichern | Innsbrook Corporate Center 4470 Cox Road, Suite 250 | Glen Allen, Virginia 23060
Phone +1 804-228-4195 | Fax +1 804-527-1816 | solutions@nanofocus-us.com
NanoFocus Pte. Ltd.
Mr. Alan Ong | 5012, Ang Mo Kio Avenue 5, #05-06F, Techplace 2 | Singapore 569876
Phone +65 96849735 | alan.ong@nanofocus-ag.com