Invited talk on AR/SLAM and IoT in ILAS Seminar: Introduction to IoT and Security, Kyoto University, 2020.
(https://www.z.k.kyoto-u.ac.jp/freshman-guide/ilas-seminars/)
◆Speaker: Tomoyuki Mukasa
Computational Displays in 4D, 6D, 8D
We have explored how light propagates from thin elements into a volume for viewing, for both automultiscopic displays and holograms. In particular, devices that are typically associated with geometric optics, like parallax barriers, differ in treatment from those that obey physical optics, like holograms. However, the two concepts are often used to achieve the same effect of capturing or displaying a combination of spatial and angular information. Our work connects the two approaches under a general framework based in ray space, from which insights into the applications and limitations of both parallax-based and holography-based systems are observed.
Both parallax barrier systems and practical holographic displays are limited in that they only provide horizontal parallax. Mathematically, this is equivalent to saying that they can always be expressed as a rank-1 matrix (i.e., a matrix whose columns are all scalar multiples of one another). Knowledge of this mathematical limitation has helped us to explore the space of possibilities and extend the capabilities of current display types. In particular, we have designed a display that uses two LCD panels and an optimisation algorithm to produce a content-adaptive automultiscopic display (SIGGRAPH Asia 2010).
(Joint work with R Horstmeyer, Se Baek Oh, George Barbastathis, Doug Lanman, Matt Hirsch and Yunhee Kim) http://cameraculture.media.mit.edu
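The rank-1 limitation above can be made concrete with a small NumPy sketch. This is a toy illustration of the mathematical constraint, not the authors' optimisation algorithm: a two-layer multiplicative display with a single static pattern per layer can only reproduce an outer product of two 1D patterns, i.e. a rank-1 light field matrix.

```python
import numpy as np

# Toy horizontal-parallax light field: rows = spatial position x, cols = view angle v.
rng = np.random.default_rng(0)
L = rng.random((8, 8))

# A two-layer multiplicative display with static patterns f(x) and g(v)
# reproduces only outer products f g^T, i.e. rank-1 matrices.
U, s, Vt = np.linalg.svd(L, full_matrices=False)
L1 = s[0] * np.outer(U[:, 0], Vt[0, :])  # best rank-1 approximation (Eckart-Young)

# Content-adaptive optimisation (as in the SIGGRAPH Asia 2010 work) goes further,
# e.g. time-multiplexing several such terms to raise the effective rank.
print(np.linalg.norm(L - L1) / np.linalg.norm(L))  # relative residual of the rank-1 fit
```

The residual printed at the end is exactly what a content-adaptive optimisation tries to drive down for the actual image content being shown.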
In other work we have developed a 6D optical system that responds to changes in viewpoint as well as changes in surrounding light. Our lenticular array alignment allows us to realize such a system as a passive setup, obviating the need for electrical components. Unlike traditional 2D flat displays, our 6D displays discretize the incident light field and modulate 2D patterns in order to produce super-realistic (2D) images. By casting light at variable intensities and angles onto our 6D displays, we can produce multiple images as well as store greater information capacity on a single 2D film (SIGGRAPH 2008).
Ramesh Raskar joined the Media Lab from Mitsubishi Electric Research Laboratories in 2008 as head of the Lab’s Camera Culture research group. His research interests span the fields of computational photography, inverse problems in imaging, and human-computer interaction. Recent inventions include transient imaging to look around a corner, a next-generation CAT-scan machine, imperceptible markers for motion capture (Prakash), long-distance barcodes (Bokode), touch+hover 3D interaction displays (BiDi screen), low-cost eye-care devices (Netra), and new theoretical models to augment light fields (ALF) to represent wave phenomena.
In 2004, Raskar received the TR100 Award from Technology Review, which recognizes top young innovators under the age of 35, and in 2003, the Global Indus Technovator Award, instituted at MIT to recognize the top 20 Indian technology innovators worldwide. In 2009, he was awarded a Sloan Research Fellowship. In 2010, he received the DARPA Young Faculty Award. He holds over 40 US patents and has received four Mitsubishi Electric Invention Awards. He is currently co-authoring a book on computational photography. http://raskar.info
Land Cover Feature Extraction using Hybrid Swarm Intelligence Techniques - A ... (IDES Editor)
Recent studies provide strong evidence that some aspects of biogeography can be applied to solve specific problems in science and engineering. The proposed work presents a hybrid biologically inspired technique that can be adapted according to a database of expert knowledge for more focused satellite image classification. The paper also presents a comparative study of our hybrid intelligent classifier against other recent soft computing classifiers, such as ACO, Hybrid Particle Swarm Optimization-cAntMiner (PSO-ACO2), Fuzzy sets, Rough-Fuzzy Tie-up and Semantic Web based classifiers, and against traditional probabilistic classifiers such as the Minimum Distance to Mean Classifier (MDMC) and the Maximum Likelihood Classifier (MLC).
Robust content based watermarking algorithm using singular value decompositio... (sipij)
Nowadays, image content is frequently subject to malicious manipulation. To protect images from such illegal manipulation, the computer science community has recourse to watermarking techniques: embedding an invisible watermark into an image facilitates the detection of manipulation, duplication and illegitimate distribution. In this work, a robust watermarking technique is presented that embeds invisible watermarks into colour images using a block-by-block singular value decomposition of a robust transform of the image, the radial symmetry transform. Each bit of the watermark is inserted, within an eight-pixel block of the blue channel, into a high singular value of the corresponding block of the radial symmetry map. We justify insertion in the blue channel by the human eye's low sensitivity to perturbations in that colour channel. We also present results obtained from different tests, evaluating both the imperceptibility of the mark and its robustness against several attacks.
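The core embedding idea, one watermark bit quantised into the largest singular value of a small block, can be sketched as follows. This is an illustrative stand-in, not the paper's exact method: the function names, the quantisation step `q`, and the use of a plain pixel block instead of the radial symmetry map are all assumptions made for the example.

```python
import numpy as np

def embed_bit(block, bit, q=16.0):
    # Quantise the largest singular value of the block to an offset that
    # encodes the bit (0.25*q for bit 0, 0.75*q for bit 1).
    U, s, Vt = np.linalg.svd(block)
    k = np.floor(s[0] / q)
    s[0] = (k + (0.75 if bit else 0.25)) * q
    return U @ np.diag(s) @ Vt

def extract_bit(block, q=16.0):
    # Recover the bit from the fractional part of the largest singular value.
    s = np.linalg.svd(block, compute_uv=False)
    return (s[0] / q) % 1.0 >= 0.5

rng = np.random.default_rng(1)
blue = rng.random((8, 8)) * 255          # one 8x8 block of the blue channel (toy data)
marked = embed_bit(blue, True)
print(extract_bit(marked))               # recovers the embedded bit
```

Quantising a singular value rather than a pixel is what gives this family of schemes robustness: small, spread-out pixel perturbations barely move the dominant singular value.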
Availability of Mobile Augmented Reality System for Urban Landscape Simulation (Tomohiro Fukuda)
These slides were presented at CDVE2012 (The 9th International Conference on Cooperative Design, Visualization, and Engineering).
Abstract. This research presents the availability of a landscape simulation method for mobile AR (Augmented Reality), comparing it with photomontage and VR (Virtual Reality), the main existing methods. After a pilot experiment with 28 subjects in Kobe city, a questionnaire about the three landscape simulation methods was administered. In the questionnaire results, the mobile AR method was rated well for reproducibility of a landscape, operability, and cost, and its evaluation was equivalent to or better than that of the existing methods. The suitability of mobile augmented reality for landscape simulation was found to be high.
This presentation is based on the SIGGRAPH 2016 paper, which was published on ResearchGate; I prepared the slides and studied the algorithm. https://www.researchgate.net/publication/305217688_Mapping_virtual_and_physical_reality
Neural Scene Representation & Rendering: Introduction to Novel View Synthesis (Vincent Sitzmann)
An overview of the neural scene representation and rendering framework and an introduction to novel view synthesis approaches. Slides made for the Eurographics, CVPR, and SIGGRAPH courses on neural rendering, connected to the state-of-the-art report on Neural Rendering at Eurographics 2020.
Feel free to re-use the slides! I just ask that you keep some form of attribution, either at the beginning of your presentation, or in the slide footer.
We introduce the perceptual issues relevant to seeing three dimensions in digital imagery. Technological constraints such as limited field of view and spatial resolution prevent the display of images that match the real world in all respects. Therefore, only some elements of real-world depth perception are utilized when viewing 3D CGI. Depth Cue Theory is the main theory of depth perception. It states that different sources of information, or depth cues, combine to give a viewer the 3D layout of a scene. Alternatively, the Ecological Theory takes a generalized approach to depth perception. It states that the HVS relies on more than the image on the retina; it requires an examination of the entire state of the viewer and their surroundings (i.e., the context of viewing). In this paper, we rely on Depth Cue Theory, although we acknowledge the importance of visual context where appropriate. As seen later, the type of visual environment and the viewer's task play a significant part in the effectiveness of a 3D VDS. Both theories assert that there are some basic sources of information about 3D layout. These are generally divided into three types: pictorial, oculomotor and stereo depth cues. The perceptual process by which these cues combine to form a sense of depth is a complicated and much-debated issue. Different approaches to measuring the ability to perceive depth have also been posited. We discuss these issues with respect to CGI.
A Practical and Robust Bump-mapping Technique for Today’s GPUs (paper), Mark Kilgard
While per-pixel lighting with bump mapping is a standard feature of contemporary video games, when this paper appeared in 2000 it was state-of-the-art functionality. While the explanation involving GeForce register combiners and normalization cube maps is dated, the theory and mathematics developed in the paper still apply.
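The core operation the paper builds on, perturbing the surface normal per pixel and evaluating Lambert's N·L term, can be sketched in a few lines. This is an illustrative NumPy stand-in for the GPU register-combiner pipeline; the height field and light direction are made up for the example.

```python
import numpy as np

# Toy height field: a raised plateau whose edges should catch the light.
H = np.zeros((8, 8))
H[2:6, 2:6] = 1.0

# Per-pixel normals from the height-field gradient (the essence of bump mapping).
gy, gx = np.gradient(H)
N = np.dstack([-gx, -gy, np.ones_like(H)])
N /= np.linalg.norm(N, axis=2, keepdims=True)

# Normalised light direction; on 2000-era hardware this normalisation is
# exactly what the paper's normalization cube maps provided.
Ldir = np.array([0.5, 0.5, 1.0])
Ldir /= np.linalg.norm(Ldir)

diffuse = np.clip(N @ Ldir, 0.0, 1.0)   # per-pixel N·L, clamped as on the GPU
print(diffuse.shape)
```

Everything else in the technique (tangent-space transforms, specular terms) layers on top of this same per-pixel dot product.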
DISTRIBUTED AND SYNCHRONISED VR MEETING USING CLOUD COMPUTING: Availability a... (Tomohiro Fukuda)
These slides were presented at CAADRIA2012 (The 17th International Conference on Computer Aided Architectural Design Research in Asia).
Abstract. The mobility of people's activities, and cloud computing technologies are becoming advanced in the modern age of information and globalisation. This study describes the availability of discussing spatial design while sharing a 3-dimensional virtual space with stakeholders in a distributed and synchronised environment. First of all, a townscape design support system based on a cloud computing type VR system is constructed. Next, an experiment of a distributed and synchronised discussion of townscape design is executed with subjects who are specialists in the townscape design field. After the experiment, both qualitative mental evaluation and quantitative evaluation were carried out. The conclusions are as follows: 1. Users who use VR frequently and who use videoconferencing consider that the difference with face-to-face discussion is small. 2. A Moiré pattern may occur in a gradation picture. 3. The availability of distributed and synchronised discussions with cloud computing type VR is high.
Visual Hull Construction from Semitransparent Coloured Silhouettes (ijcga)
This paper attempts to create coloured semi-transparent shadow images that can be projected onto multiple screens simultaneously from different viewpoints. The inputs to this approach are a set of coloured shadow images and view angles, projection information and light configurations for the final projections. We propose a method to convert coloured semi-transparent shadow images to a 3D visual hull. A shadowpix-type method is used to incorporate varying-ratio RGB values for each voxel. This computes the desired image independently for each viewpoint from an arbitrary angle. An attenuation factor is used to curb the coloured shadow images beyond a certain distance. The end result is a continuous animated image that changes due to the rotated projection of the transparent visual hull.
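The basic visual hull construction underlying this kind of work can be sketched with binary silhouette carving. This is a minimal orthographic, axis-aligned toy, not the paper's method, which additionally handles colour and transparency ratios per voxel; the silhouettes here are invented shapes.

```python
import numpy as np

N = 16
vol = np.ones((N, N, N), dtype=bool)     # start with a full voxel grid

# Two toy binary silhouettes: a disc seen along z (over the y-x plane)
# and a square seen along x (over the y-z plane).
y, x = np.mgrid[0:N, 0:N]
sil_z = (x - N/2)**2 + (y - N/2)**2 <= (N/3)**2
sil_x = (np.abs(x - N/2) <= N/4) & (np.abs(y - N/2) <= N/4)

# Carve: a voxel survives only if it projects inside every silhouette.
vol &= sil_z[:, :, None]                 # constraint from the view along z
vol &= sil_x[:, None, :]                 # constraint from the view along x
print(vol.sum(), "voxels remain in the visual hull")
```

With coloured, semi-transparent silhouettes the boolean AND becomes a per-voxel blend of RGB ratios, which is where the shadowpix-type step in the paper comes in.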
NUMBER PLATE IMAGE DETECTION FOR FAST MOTION VEHICLES USING BLUR KERNEL ESTIM... (paperpublications3)
Abstract: Many recent advancements have been introduced in both hardware and software technologies; among these, image processing has attracted particular interest. As the unique identifier of a vehicle, the number plate is a key clue for detecting stolen and over-speeding vehicles. The images captured from the camera are typically of low resolution and suffer severe loss of edge information, which poses a great challenge to existing blind deblurring methods. The blur kernel can be modelled as a linear uniform convolution characterised by an angle and a length. In this paper, sparse representation is used to identify the blur kernel, and the length of the motion kernel is then estimated with the Radon transform in the Fourier domain. We evaluate our approach on real-world images and compare it with several popular blind image deblurring algorithms. The results obtained demonstrate the superiority of our proposed approach in terms of efficacy.
Keywords: Image Segmentation, Angle Estimation, Length Estimation, Deconvolution Algorithm, Artificial Neural Network.
Title: NUMBER PLATE IMAGE DETECTION FOR FAST MOTION VEHICLES USING BLUR KERNEL ESTIMATION AND ANN
Author: S.KALAIVANI, K.PRAVEENA, T.PREETHI, N.PUNITHA
ISSN 2350-1022
International Journal of Recent Research in Mathematics Computer Science and Information Technology
Paper Publications
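The spectral idea behind the length estimation above can be sketched in one dimension: a uniform motion blur of length L is a box filter, whose Fourier magnitude is a sinc with nulls spaced N/L bins apart, so locating the first null recovers L. This is a toy stand-in for the paper's Radon-transform-in-Fourier-domain step; the sizes and threshold are assumptions for the example.

```python
import numpy as np

N, L = 256, 8
kernel = np.zeros(N)
kernel[:L] = 1.0 / L                      # uniform (linear, length-L) motion blur kernel

mag = np.abs(np.fft.rfft(kernel))         # sinc-shaped magnitude spectrum
first_zero = np.argmax(mag < 1e-3)        # index of the first spectral null
est_L = round(N / first_zero)             # null spacing N/L bins -> blur length L
print(est_L)                              # -> 8
```

In 2D, the nulls form parallel dark stripes in the log spectrum; the Radon transform is what finds their orientation (the blur angle) before this 1D spacing argument gives the length.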
Semantic mapping of road scenes, PhD thesis. The main aim of the thesis is to investigate and propose solutions to the scene-understanding problem of finding 'what' objects are present in the world and 'where' they are located.
Image fusion is the process of combining two or more images so that specific objects are rendered with more precision. Commonly, when one object is in focus, the remaining objects are less sharp; to obtain an image that is well resolved in all areas, a different means is necessary, and this is what image fusion provides. In remote sensing, the increasing availability of spaceborne images and synthetic aperture radar images motivates many kinds of image fusion algorithms. The literature offers a number of time-domain image fusion techniques, and a few transform-domain techniques have been proposed. In transform-domain fusion, the source images are decomposed, integrated into a single representation and reconstructed back into the time domain. In this paper, the singular value decomposition is used as the transform-domain tool for image fusion. In the literature, the quality of fusion techniques is mainly assessed by subjective tests; here, objective quality-assessment metrics are calculated for the existing and proposed techniques. The new image fusion technique is found to outperform the existing ones.
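The decompose-integrate-reconstruct pipeline described above can be sketched with an SVD and a simple max-selection fusion rule. This is one common, simple choice made for illustration, not necessarily the paper's rule, and the images here are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.random((32, 32))   # e.g. an image with the left region in focus
B = rng.random((32, 32))   # e.g. an image with the right region in focus

# Decompose both source images (the transform-domain step).
Ua, sa, Vta = np.linalg.svd(A)
Ub, sb, Vtb = np.linalg.svd(B)

# Integrate: for each singular component, keep the source whose singular
# value (energy) is larger, then reconstruct back to the spatial domain.
fused = np.zeros_like(A)
for i in range(len(sa)):
    if sa[i] >= sb[i]:
        fused += sa[i] * np.outer(Ua[:, i], Vta[i, :])
    else:
        fused += sb[i] * np.outer(Ub[:, i], Vtb[i, :])
print(fused.shape)
```

Objective metrics such as entropy or mutual information would then be computed on `fused` to compare rules, which is the assessment step the abstract refers to.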
7D holographic technology has the potential to change parts of entertainment, teaching, learning and testing by embedding the technology in the education system. Holography is the art of creating visualizations used for displaying 7-dimensional pictures. This paper reviews the fundamental ideas of holography, discussing in depth the principle of interference on which it is based, and outlines the wide uses of holography. Mrs. Ashwini N | Praveen Kumar T M, "A Study on Designing 7D Holography", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-2, Issue-4, June 2018. URL: http://www.ijtsrd.com/papers/ijtsrd14373.pdf http://www.ijtsrd.com/engineering/information-technology/14373/a-study-on-designing-7d-holography/mrs-ashwini-n
Soft Shadow Rendering based on Real Light Source Estimation in Augmented Reality (Waqas Tariq)
The most challenging task in developing Augmented Reality (AR) applications is to make virtual objects mix harmoniously with the real scene. To achieve a photorealistic AR environment, three key issues must be emphasized, namely consistency of geometry, illumination and speed. Shadow is an essential element for improving visual perception and realism: without shadows, virtual objects appear to float, which makes the environment look unrealistic. However, many shadow algorithms still have drawbacks, such as producing sharp, hard-edged outlines that make the shadow's appearance unrealistic. This paper therefore focuses on generating soft shadows in an AR scene based on real light source positions: a reflective sphere is used to create an environment map image, from which the light sources in the real scene are estimated and soft shadows rendered.
Richard's entangled adventures in wonderland (Richard Gill)
Since the loophole-free Bell experiments of 2020 and the Nobel prizes in physics of 2022, critics of Bell's work have retreated to the fortress of super-determinism. Now, super-determinism is a derogatory word - it just means "determinism". Palmer, Hance and Hossenfelder argue that quantum mechanics and determinism are not incompatible, using a sophisticated mathematical construction based on a subtle thinning of allowed states and measurements in quantum mechanics, such that what is left appears to make Bell's argument fail, without altering the empirical predictions of quantum mechanics. I think however that it is a smoke screen, and the slogan "lost in math" comes to my mind. I will discuss some other recent disproofs of Bell's theorem using the language of causality based on causal graphs. Causal thinking is also central to law and justice. I will mention surprising connections to my work on serial killer nurse cases, in particular the Dutch case of Lucia de Berk and the current UK case of Lucy Letby.
Seminar on U.V. Spectroscopy by SAMIR PANDA
Spectroscopy is a branch of science dealing with the study of the interaction of electromagnetic radiation with matter.
Ultraviolet-visible spectroscopy refers to absorption spectroscopy or reflectance spectroscopy in the UV-VIS spectral region.
Ultraviolet-visible spectroscopy is an analytical method that can measure the amount of light absorbed by the analyte.
What are greenhouse gases and how many gases affect the Earth? (moosaasad1975)
What greenhouse gases are, how they affect the Earth and its environment, what the future of the environment and the Earth is, and how the weather and the climate are affected.
Cancer cell metabolism: special reference to the lactate pathway (AADYARAJPANDEY1)
Normal Cell Metabolism:
Cellular respiration describes the series of steps that cells use to break down sugar and other chemicals to get the energy we need to function.
Energy is stored in the bonds of glucose and when glucose is broken down, much of that energy is released.
Cell utilize energy in the form of ATP.
The first step of respiration is called glycolysis. In a series of steps, glycolysis breaks glucose into two smaller molecules of a chemical called pyruvate. A small amount of ATP is formed during this process.
Most healthy cells continue the breakdown in a second process, called the Krebs cycle. The Krebs cycle allows cells to “burn” the pyruvates made in glycolysis to get more ATP.
The last step in the breakdown of glucose is called oxidative phosphorylation (Ox-Phos).
It takes place in specialized cell structures called mitochondria. This process produces a large amount of ATP. Importantly, cells need oxygen to complete oxidative phosphorylation.
If a cell completes only glycolysis, only 2 molecules of ATP are made per glucose. However, if the cell completes the entire respiration process (glycolysis - Krebs cycle - oxidative phosphorylation), about 36 molecules of ATP are created, giving it much more energy to use.
IN CANCER CELL:
Unlike healthy cells that "burn" the entire molecule of sugar to capture a large amount of energy as ATP, cancer cells are wasteful.
Cancer cells only partially break down sugar molecules. They overuse the first step of respiration, glycolysis. They frequently do not complete the second step, oxidative phosphorylation.
This results in only 2 molecules of ATP per each glucose molecule instead of the 36 or so ATPs healthy cells gain. As a result, cancer cells need to use a lot more sugar molecules to get enough energy to survive.
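The arithmetic above can be made explicit. The 2 and ~36 ATP-per-glucose yields are the approximate figures quoted in the text; the total ATP demand used here is an arbitrary number chosen for illustration.

```python
# ATP yield per glucose molecule, as quoted above.
ATP_GLYCOLYSIS = 2    # glycolysis only (Warburg-type cancer cell)
ATP_FULL = 36         # glycolysis + Krebs cycle + oxidative phosphorylation (approx.)

demand = 720          # an arbitrary ATP demand, for illustration
glucose_normal = demand / ATP_FULL        # glucose needed by a healthy cell
glucose_cancer = demand / ATP_GLYCOLYSIS  # glucose needed by a glycolytic cancer cell
print(glucose_cancer / glucose_normal)    # -> 18.0, i.e. 18x more glucose
```

The 18-fold ratio (36/2) is independent of the demand chosen, which is exactly why glucose addiction follows from the Warburg effect.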
Introduction to the WARBURG PHENOMENON:
WARBURG EFFECT Usually, cancer cells are highly glycolytic (glucose addiction) and take up more glucose than do normal cells from outside.
Otto Heinrich Warburg (; 8 October 1883 – 1 August 1970) In 1931 was awarded the Nobel Prize in Physiology for his "discovery of the nature and mode of action of the respiratory enzyme.
WARBURG EFFECT: The tendency of cancer cells under aerobic (well-oxygenated) conditions to metabolize glucose to lactate (aerobic glycolysis) is known as the Warburg effect. Warburg made the observation that tumor slices consume glucose and secrete lactate at a higher rate than normal tissues.
Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ...
Sérgio Sacani
We characterize the earliest galaxy population in the JADES Origins Field (JOF), the deepest imaging field observed with JWST. We make use of the ancillary Hubble optical images (5 filters spanning 0.4–0.9 µm) and novel JWST images with 14 filters spanning 0.8–5 µm, including 7 medium-band filters, and reaching total exposure times of up to 46 hours per filter. We combine all our data at > 2.3 µm to construct an ultradeep image, reaching as deep as ≈ 31.4 AB mag in the stack and 30.3–31.0 AB mag (5σ, r = 0.1″ circular aperture) in individual filters. We measure photometric redshifts and use robust selection criteria to identify a sample of eight galaxy candidates at redshifts z = 11.5–15. These objects show compact half-light radii of R½ ∼ 50–200 pc, stellar masses of M⋆ ∼ 10⁷–10⁸ M⊙, and star-formation rates of SFR ∼ 0.1–1 M⊙ yr⁻¹. Our search finds no candidates at 15 < z < 20, placing upper limits at these redshifts. We develop a forward modeling approach to infer the properties of the evolving luminosity function without binning in redshift or luminosity that marginalizes over the photometric redshift uncertainty of our candidate galaxies and incorporates the impact of non-detections. We find a z = 12 luminosity function in good agreement with prior results, and that the luminosity function normalization and UV luminosity density decline by a factor of ∼ 2.5 from z = 12 to z = 14. We discuss the possible implications of our results in the context of theoretical models for evolution of the dark matter halo mass function.
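The unbinned forward-modeling fit described above can be sketched as an inhomogeneous-Poisson likelihood that marginalizes each candidate's photometric-redshift posterior. The sketch below is a generic illustration of that idea, not the authors' pipeline; the Schechter form, the grids, the completeness factor, and all parameter values are assumptions:

```python
import numpy as np

def schechter_phi(M, phi_star, M_star, alpha):
    """Schechter UV luminosity function in magnitudes (per mag per Mpc^3)."""
    x = 10.0 ** (-0.4 * (M - M_star))
    return 0.4 * np.log(10.0) * phi_star * x ** (alpha + 1) * np.exp(-x)

def log_likelihood(params, galaxies, z_grid, M_grid, dVdz, completeness):
    """Unbinned inhomogeneous-Poisson log-likelihood over the (z, M) plane.
    `galaxies` is a list of (p_z, M_of_z) pairs: a photometric-redshift
    posterior sampled on z_grid, and the candidate's absolute magnitude as
    a function of assumed redshift (both hypothetical inputs)."""
    phi_star, M_star, alpha = params
    dz = z_grid[1] - z_grid[0]
    dM = M_grid[1] - M_grid[0]
    # Expected number of detections over the whole survey volume.
    lam = (schechter_phi(M_grid[None, :], phi_star, M_star, alpha)
           * dVdz[:, None] * completeness)
    ll = -lam.sum() * dz * dM
    # Each candidate adds the log of its rate, marginalized over p(z).
    for p_z, M_of_z in galaxies:
        rate = schechter_phi(M_of_z, phi_star, M_star, alpha) * dVdz
        ll += np.log((rate * p_z).sum() * dz)
    return ll
```

Non-detections enter through the first (expected-count) term: a luminosity function that predicts many bright z > 15 galaxies is penalized even when no candidate contributes a detection term.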
THE IMPORTANCE OF MARTIAN ATMOSPHERE SAMPLE RETURN.
Sérgio Sacani
The return of a sample of near-surface atmosphere from Mars would facilitate answers to several first-order science questions surrounding the formation and evolution of the planet. One of the important aspects of terrestrial planet formation in general is the role that primary atmospheres played in influencing the chemistry and structure of the planets and their antecedents. Studies of the martian atmosphere can be used to investigate the role of a primary atmosphere in its history. Atmosphere samples would also inform our understanding of the near-surface chemistry of the planet, and ultimately the prospects for life. High-precision isotopic analyses of constituent gases are needed to address these questions, requiring that the analyses are made on returned samples rather than in situ.
A brief information about the SCOP protein database used in bioinformatics.
The Structural Classification of Proteins (SCOP) database is a comprehensive and authoritative resource for the structural and evolutionary relationships of proteins. It provides a detailed and curated classification of protein structures, grouping them into families, superfamilies, and folds based on their structural and sequence similarities.
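As a concrete illustration of that hierarchy, SCOP's concise classification strings (sccs, e.g. "a.1.1.2") encode class.fold.superfamily.family; a minimal parser (the function name is mine):

```python
def parse_sccs(sccs):
    """Split a SCOP concise classification string (sccs), e.g. 'a.1.1.2',
    into its four hierarchy levels: class, fold, superfamily, family."""
    cls, fold, superfamily, family = sccs.split(".")
    return {"class": cls, "fold": int(fold),
            "superfamily": int(superfamily), "family": int(family)}
```

Here "a" is the all-alpha class; the three integers index the fold, superfamily, and family within it.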
This presentation explores a brief idea about the structural and functional attributes of nucleotides, the structure and function of genetic materials along with the impact of UV rays and pH upon them.
Nucleic Acid: its structural and functional complexity.
Keynote at 23rd International Display Workshop
1. BREAKING THE BARRIERS TO
TRUE AUGMENTED REALITY
CHRISTIAN SANDOR
CHRISTIAN@SANDOR.COM
KEYNOTE AT 23RD INTERNATIONAL DISPLAY WORKSHOP
FUKUOKA, JAPAN
7 DECEMBER 2016
2. BURNAR: FEEL THE HEAT
MATT SWOBODA, THANH NGUYEN, ULRICH ECK, GERHARD REITMAYR, STEFAN HAUSWIESNER,
RENE RANFTL, AND CHRISTIAN SANDOR. DEMO AT IEEE INTERNATIONAL SYMPOSIUM ON MIXED
AND AUGMENTED REALITY, BASEL, SWITZERLAND, OCTOBER 2011. BEST DEMO AWARD
6. BURNAR: INVOLUNTARY HEAT SENSATIONS IN AR
PETER WEIR, CHRISTIAN SANDOR, MATT SWOBODA, THANH NGUYEN, ULRICH
ECK, GERHARD REITMAYR, AND ARINDAM DEY. PROCEEDINGS OF THE IEEE VIRTUAL
REALITY CONFERENCE, PAGES 43–46, ORLANDO, FL, USA, MARCH 2013.
7. WORKSHOP AT NAIST, AUGUST 2014
ARXIV E-PRINTS, ARXIV:1512.05471 [CS.HC], 13 PAGES
HTTP://ARXIV.ORG/ABS/1512.05471
8. DEFINITION:
1. UNDETECTABLE MODIFICATION OF USER’S PERCEPTION
2. GOAL: SEAMLESS BLEND OF REAL AND VIRTUAL WORLD
TRUE AR: WHAT?
HTTPS://EN.WIKIPEDIA.ORG/WIKI/TURING_TEST
ALAN TURING. COMPUTING MACHINERY AND INTELLIGENCE. MIND, 59 (236): 433–460, OCTOBER 1950.
INSPIRED BY ALAN TURING’S IMITATION GAME
PURPOSE: TEST QUALITY OF AI
9. RELATION TO OTHER TURING TESTS
COMPUTER GRAPHICS: MICHAEL D. MCGUIGAN. GRAPHICS TURING TEST. ARXIV E-PRINTS, ARXIV:CS/0603132V1, 2006
VISUAL COMPUTING: QI SHAN, RILEY ADAMS, BRIAN CURLESS, YASUTAKA FURUKAWA, STEVEN M. SEITZ: THE VISUAL TURING TEST FOR SCENE RECONSTRUCTION. 3DV 2013: 25–32
VIRTUAL REALITY
AUGMENTED REALITY
DIFFICULTY
10. TRUE AR: WHY?
TRAINING: SPORTS & SKILLS
AMUSEMENT: INTERACTIVE STORIES
SCIENCE: PSYCHOLOGY & NEUROSCIENCE
LAW: FORENSICS & LOGISTICS OF CRIME SCENE
STAR TREK HOLODECK. HTTPS://EN.WIKIPEDIA.ORG/WIKI/HOLODECK
11. TRUE AR: HOW?
MANIPULATING ATOMS
MANIPULATING PERCEPTION
CONTROLLED MATTER
SURROUND AR · PERSONALIZED AR · IMPLANTED AR
There have been a number of shape displays based on pin architecture. The FEELEX project [14] was one of the early attempts to design combined shape and computer graphics displays that can be explored by touch. FEELEX consisted of several mechanical pistons actuated by motors and covered by a soft silicon surface. The images were projected onto its surface and synchronized with the movement of the pistons, creating simple shapes.
Lumen [32] is a low-resolution, 13-by-13-pixel bitmap display where each pixel can also physically move up and down (Figure 4). The resulting display can present both 2D graphic images and moving physical shapes that can be observed, touched, and felt with the hands. The 2D position sensor built into the surface of Lumen allows users to input commands and manipulate shapes with their hands.
Other related projects are the PopUp and Glowbits devices [18, 33]. PopUp consists of an array of rods that can be moved up and down using shape-memory-alloy actuators. PopUp, however, does not have a visual and interactive component. Glowbits by Daniel Hirschmann (Figure 3) is a 2D array of rods with attached LEDs; the motorized rods can move up and down and the LEDs can change their colors.
Discussion
We have overviewed a number of reasons why actuation can be used in user interfaces. We summarize them in Table 1 (Applications / Examples).
Figure 2.7: Hand-fixed reference frame: augmentations move with the user's hand. The example shows a user discussing a virtual map; to view the map from different angles, he can pick it up from his belt and put it in his hand.
12.
SACHIKO KODAMA. PROTRUDE, FLOW. ACM
SIGGRAPH 2001 ART GALLERY.
HTTP://PIXIEDUSTTECH.COM
CONTROLLED MATTER
HTTP://TANGIBLE.MEDIA.MIT.EDU/PROJECT/INFORM
14. SURROUND VS. PERSONALIZED AR
MANIPULATING ATOMS
MANIPULATING PERCEPTION
CONTROLLED MATTER
SURROUND AR · PERSONALIZED AR · IMPLANTED AR
LIGHT FIELD DISPLAYS: FULL vs. PERCEIVABLE SUBSET
15. Objects located close to the horopter (see Section 4.4.1) can be fused; the range around the horopter at which this is possible is called Panum's area. However, in addition to absolute disparity, relative disparity contributes to disparity-based depth perception. For example, the gradient of disparity (i.e., the depth gradient) influences depth perception, making depth perception content dependent. Furthermore, how disparity is processed and depth is perceived can depend on conflicting cues (e.g., inconsistent convergence, Section 4.4.2) and nonconflicting cues, and an upper modulation frequency of disparity exists. A good review of disparity-based depth perception can be found in [103]. While convergence and retinal disparity are the main depth cues, there are others: accommodation and visual depth of field.
LIGHT FIELD DISPLAYS
WWW.DISPLAYSBOOK.INFO
VISION:
DISPLAY AS WINDOW
Figure 9.35. Light-field recording and reconstruction principle: light rays just passing a window (left), light rays converted into pixel values on a tiny image sensor of a pinhole camera (center), light rays reproduced by a tiny projector being just an inverted pinhole camera (right).
a distance. In principle, this turns out to be quite simple. Any camera with a sufficiently small aperture will just record angles and intensities of incident light rays and map them onto the pixels of its image sensor (Figure 9.35). Hence small cameras of, for example, 1 mm in size and a sufficient number of (in this case very tiny) pixels can deliver the light-field data for just one window segment, which we will call a pixel of the window. Any camera can in general be seen as an angle-to-position converter. The conversion is relatively robust with respect to geometric errors.
Reproducing the light field on a display is straightforward (at least in theory): we could use identical optical assemblies, this time illuminated
SENSOR ARRAY
DISPLAY ARRAY
• Focus effects (blurring of objects not in the lens focus)
• Haze (softened image parts appear more distant)
• Color (bluish objects appear more distant)
• Motion parallax (images change when the head moves)
• Motion dynamics (objects change sizes and positions, in motion)
Convergence. As explained already, convergence is the inward rotation of
the eyes when targeting a distant object (Figure 4.24). The state of the
eye muscles gives us a hint about depth for up to 10 meters. However, we
don’t get extremely fine angular resolutions at this distance.
Figure 4.24. Convergence (up to 10 m).
Retinal disparity. For longer distances, the difference between the two images projected onto the retinas (called retinal disparity) is far more efficient than convergence. Near objects block distant ones at slightly different positions, resulting in different images generated by the left and right eyes (Figure 4.25).
VERGENCE
ACCOMMODATION
GOAL: NATURAL HUMAN VISUAL PERCEPTION
18.
PERSONALIZED AR:
A SMARTER APPROACH
The differences at object edges can be perceived up to the crispness limit of our vision. With a typical eye-to-eye distance (also called interocular distance) of about six centimeters and an angular resolution of one
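Plugging in the numbers above: with an interocular distance b of about six centimeters, and assuming the truncated sentence ends in the usual figure of one arc minute of angular resolution, the small-angle disparity relation δ ≈ b/d gives the farthest distance at which binocular disparity remains detectable:

```python
import math

b = 0.06                          # interocular distance (m), from the text
theta_min = math.radians(1 / 60)  # one arc minute, the assumed acuity limit

# A point at distance d subtends a binocular disparity of roughly b/d radians,
# so disparity falls below the detectable limit at:
d_max = b / theta_min
print(round(d_max, 1))  # 206.3 (meters)
```

Beyond roughly 200 m, stereo vision contributes essentially nothing, which is why the other cues in the bullet list above take over at large distances.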
KEY IDEA: MEASURE HUMAN VISUAL SYSTEM & DISPLAY SUBSET OF
LIGHT FIELD
BENEFIT: REDUCE REQUIRED DISPLAY PIXELS BY SEVERAL ORDERS
OF MAGNITUDE
WILL BE ACHIEVED WELL BEFORE SURROUND AR!
VERGENCE
ACCOMMODATION
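The claimed pixel reduction can be checked with a back-of-the-envelope estimate. All numbers below are illustrative assumptions (panel size, pixel pitch, angular sampling, acuity, field of view), not figures from the talk:

```python
# Surround AR: a window-sized panel must emit the full light field.
window_px = 2000 * 2000        # 10 cm x 10 cm panel at 50 um pitch (assumed)
angular_samples = 100 * 100    # angular resolution per panel pixel (assumed)
full_light_field_rays = window_px * angular_samples

# Personalized AR: only rays entering the two eye pupils matter, bounded by
# retinal acuity (~60 px/deg) over a ~100 x 80 deg field of view (assumed).
per_eye_px = (100 * 60) * (80 * 60)
perceivable_subset_rays = 2 * per_eye_px

reduction = full_light_field_rays / perceivable_subset_rays  # ~694x
```

Larger windows, multiple viewers, or finer angular sampling all inflate the full-light-field count while leaving the perceivable subset fixed, pushing the ratio toward the "several orders of magnitude" quoted on the slide.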
19. PHILOSOPHY: TRUE AUGMENTED REALITY
DISPLAYS
SharpView: Improved Clarity of Defocused Content on Optical
See-Through Head-Mounted Displays
Kohei Oshima⇤ † Kenneth R Moser⇤ ‡ Damien Constantine Rompapas† J. Edward Swan II‡ Sei Ikeda§
Goshiro Yamamoto† Takafumi Taketomi† Christian Sandor† Hirokazu Kato†
†Interactive Media Design Laboratory
Nara Institute of Science and Technology
‡Computer Science & Engineering
Mississippi State University
§Mobile Computing Laboratory
Ritsumeikan University
(a) (b) (c) (d) (e)
Figure 1: The cause and effect of focus blur in Optical See-Through (OST) Head-Mounted Display (HMD) systems. (a) A user wearing the OST
HMD and related hardware used in our study. (b) Simplified schematic of an OST AR system. Blurring occurs when the virtual display screen
and real world imagery are viewed at unequal focal distances. (c), (d), (e): Views through an OST Augmented Reality system, where the real
world image (c) is in focus, causing the virtual image (d) to appear blurred; (e) an improved virtual image after application of SharpView.
ABSTRACT
Augmented Reality (AR) systems, which utilize optical see-through head-mounted displays, are becoming more commonplace, with several consumer-level options already available, and the promise of additional, more advanced devices on the horizon. A common factor among current-generation optical see-through devices, though, is a fixed focal distance to virtual content. While fixed focus is not a concern for video see-through AR, since both virtual and real-world imagery are combined into a single image by the display, unequal distances between real-world objects and the virtual display screen in optical see-through AR are unavoidable.
In this work, we investigate the issue of focus blur, in particular the blurring caused by simultaneously viewing virtual content and physical objects in the environment at differing focal distances. We
Multimedia Information Systems—Artificial, augmented, and virtual realities; I.4.4 [Image Processing and Computer Vision]: Restoration—Wiener filtering
1 INTRODUCTION
Optical See-Through (OST) Head-Mounted Displays (HMDs) have seen an increase in both popularity and accessibility with the release of several consumer-level options, including Google Glass and Epson Moverio BT-200, and announced future offerings, such as Microsoft's HoloLens, on the horizon. The transparent display technology used in these HMDs affords a unique experience, allowing the user to view on-screen computer generated (CG) content while maintaining a direct view of their environment, a property extremely well suited for augmented reality (AR) systems. Un-
GEOMETRIC ALIGNMENT · REMOVE BLUR ARTIFACTS · CREATE CORRECT BLUR
20. GEOMETRIC ALIGNMENT: SPAAM
MIHRAN TUCERYAN, YAKUP GENC, AND NASSIR NAVAB. SINGLE-POINT ACTIVE
ALIGNMENT METHOD (SPAAM) FOR OPTICAL SEE-THROUGH HMD CALIBRATION
FOR AUGMENTED REALITY. PRESENCE: TELEOPERATORS AND VIRTUAL
ENVIRONMENTS, 11(3):259-276, JUNE 2002.
(SPAAM alignment diagram, repeated for three alignments: World Point → Screen Point, via transform tH-P, to Screen Pixel (x, y))
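Each SPAAM alignment contributes one screen-pixel/world-point correspondence; from enough of them, a 3×4 projection matrix is solved. A minimal sketch of that estimation step via the Direct Linear Transform, assuming ideal correspondences (the function names are mine, and the paper's formulation may differ in detail):

```python
import numpy as np

def estimate_projection(world_pts, screen_pts):
    """Estimate a 3x4 projection matrix from >= 6 non-degenerate 3D-2D
    correspondences via the Direct Linear Transform (DLT)."""
    A = []
    for (X, Y, Z), (x, y) in zip(world_pts, screen_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -x * X, -x * Y, -x * Z, -x])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -y * X, -y * Y, -y * Z, -y])
    # Solution: the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)

def project(P, X):
    """Project a 3D point with homogeneous projection matrix P."""
    h = P @ np.append(np.asarray(X, dtype=float), 1.0)
    return h[:2] / h[2]
```

In SPAAM the world points come from the tracker and the screen points from the crosshair the user aligned; the recovered matrix bakes the static eye-to-display geometry into the rendering camera.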
21. Fig. 9. Stages of the experimental procedure. Every subject performs
an initial SPAAM calibration followed by the recording of eye images
and performance of both tasks using the SPAAM results. The HMD is
removed and refit to the subject, eye images recorded once again, and
both tasks for one of the remaining conditions performed. The proce
Fig. 10. Mean subjective quality values for each calibration method dur-
ing each task, normalized to a 1–4 scale with 1 denoting the lowest
quality and 4 the highest. The values shown are across subjects with
individual plots for the Pillars task as well as each grid of the Cubes task.
Cubes-V shows normalized quality for the vertical cubes grid. Cubes-H
shows normalized quality for the horizontal cubes grid. Means with the
same letter, within each plot, are not significantly different at p 0.05
(Ryan REGWQ post-hoc homogeneous subset test).
(Fig. 11 panels: SPAAM, DSPAAM, and INDICA for the Pillars task; axes: X (Left−Right) Error (cm) vs. Z (Front−Back) Error (cm), ± 1 SEM)
Fig. 11. Mean Pillars task error along the X (Left-Right) and Z (Front-
Back) direction relative to the tracking coordinate frame. 0 indicates
no error. Error is reported as a distance value, with every 4 cm of er-
ror equating to a 1 pillar location difference in the respective direction.
Means with the same letter are not significantly different at p 0.05
(Ryan REGWQ post-hoc homogeneous subset test).
(Fig. 12 panel; axes: X (Left−Right) Error (cm) vs. Y (Up−Down) Error)
Fig. 12. Mean vertical cubes grid task error along the Y (Up−Down) and X (Left−Right) direction relative to the tracking coordinate frame. 0 indicates no error. Error in each direction is reported as a distance value, with every 2 cm of error equating to a 1 cube location difference in the respective direction. Means with the same letter are not significantly different at p 0.05 (Ryan REGWQ post-hoc homogeneous subset test).
(Fig. 13 panels: SPAAM and DSPAAM for Cubes−H; axes: X (Left−Right) Error (cm) vs. Z (Front−Back) Error (cm), ± 1 SEM)
Fig. 13. Mean horizontal cubes grid task error along the Z (Front−Back) and X (Left−Right) direction relative to the tracking coordinate frame. 0 indicates no error. Error in each direction is reported as a distance value, with every 2 cm of error equating to a 1 cube location difference in the respective direction.
Fig. 1. Experimental hardware and design. (a) Display and camera system. (b) Task layout. (c) Pillars task. (d) Cubes task.
Abstract— With the growing availability of optical see-through (OST) head-mounted displays (HMDs), there is a present need for robust, uncomplicated, and automatic calibration methods suited for non-expert users. This work presents the results of a user study which both objectively and subjectively examines registration accuracy produced by three OST HMD calibration methods: (1) SPAAM, (2) Degraded SPAAM, and (3) Recycled INDICA, a recently developed semi-automatic calibration method. Accuracy metrics used for evaluation include subject provided quality values and error between perceived and absolute registration coordinates. Our results show all three calibration methods produce very accurate registration in the horizontal direction but caused subjects to perceive the distance of virtual objects to be closer than intended. Surprisingly, the semi-automatic calibration method produced the best registration vertically and in perceived object distance overall. User assessed quality values were also the highest for INDICA, particularly when objects were shown at distance. The results of this study confirm that Recycled INDICA is capable of producing equal or superior on-screen registration compared to common OST HMD calibration methods. We also identify a hazard in using reprojection error as a quantitative analysis technique to predict registration accuracy. We conclude with discussion of the further need for examining INDICA calibration in binocular HMD systems, and the present possibility for creating a continuous calibration method for OST Augmented Reality.
Index Terms—Calibration, user study, OST HMD, INDICA, SPAAM, eye tracking
KENNETH MOSER, YUTA ITOH, KOHEI OSHIMA, EDWARD SWAN, GUDRUN KLINKER, AND
CHRISTIAN SANDOR. SUBJECTIVE EVALUATION OF A SEMI-AUTOMATIC OPTICAL SEE-
THROUGH HEAD-MOUNTED DISPLAY CALIBRATION TECHNIQUE. IEEE TRANSACTIONS
ON VISUALIZATION AND COMPUTER GRAPHICS, 21(4):491–500, APRIL 2015.
OUR METHOD: ONLY SPAAM ONCE
23. BLUR ARTIFACTS
DESIRED vs. MOST DISPLAYS
REAL PHOTO
“MATCHING” IMAGE
24.
KOHEI OSHIMA, KENNETH R MOSER, DAMIEN CONSTANTINE ROMPAPAS, J EDWARD
SWAN II, SEI IKEDA, GOSHIRO YAMAMOTO, TAKAFUMI TAKETOMI, CHRISTIAN SANDOR,
AND HIROKAZU KATO. IMPROVED CLARITY OF DEFOCUSED CONTENT ON OPTICAL
SEE-THROUGH HEAD-MOUNTED DISPLAYS. IN IEEE SYMPOSIUM ON 3D USER
INTERFACES, PAGES 173–181, GREENVILLE, SOUTH CAROLINA, USA, MARCH 2016.
OUR METHOD: SHARPVIEW
SHARPVIEW
REAL PHOTO
“MATCHING” IMAGE
27. ESTIMATING EYE PSF
Sharpening applies the Wiener filter, adjusted to the rendered images O, before display to the HMD (Eq. 4). Focus blur, caused by accommodation differences between the display screen and the world in OST AR, must be determined at run-time; we approximate the eye's PSF with a Gaussian function.
Figure 2: Optical system formed by the user's eye and an OST HMD. The imaging plane corresponds to the user's retina and the lens aperture to the user's pupil.
The blur-disk radius is expressed as sd, with s : sd = v : u0; sd is directly obtainable as
sd = (a/2)(1 − u0/u),
where a is pupil diameter, u is the distance from the eye to the world gaze point, and u0 represents the distance from the eye to the virtual image plane.
The intensity distribution P can be represented by the following Gaussian function:
P(x, y) = (1 / (2πσ²)) exp(−(x² + y²) / (2σ²))   (5)
Using a simplified Gaussian function to approximate the PSF decreases processing time, allowing faster update rates, but with a reduction in accuracy.
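The pipeline sketched on this slide can be put together in a few lines: derive the blur-disk radius sd from the formula above, sample the Gaussian of Eq. (5), and pre-filter the image with a Wiener filter so the eye's own blur cancels it. The regularization constant K, the grid sizes, and using sd directly as σ are assumptions of this sketch, not values from the paper:

```python
import numpy as np

def blur_disk_radius(a, u, u0):
    """sd = (a/2)(1 - u0/u): blur-disk radius for pupil diameter a, gaze
    distance u, and virtual-image distance u0 (same units)."""
    return (a / 2.0) * (1.0 - u0 / u)

def gaussian_psf(size, sigma):
    """Sample the Gaussian of Eq. (5) on a size x size grid, normalized to
    unit sum so filtering preserves overall brightness."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    p = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return p / p.sum()

def psf_to_otf(psf, shape):
    """Zero-pad the PSF and circularly shift its peak to the origin so that
    its FFT is the optical transfer function."""
    out = np.zeros(shape)
    h, w = psf.shape
    out[:h, :w] = psf
    out = np.roll(out, (-(h // 2), -(w // 2)), axis=(0, 1))
    return np.fft.fft2(out)

def wiener_sharpen(image, psf, K=1e-3):
    """Pre-sharpen `image` so that subsequent blurring by `psf` (the eye's
    focus blur) approximately cancels; K is an assumed noise-to-signal term."""
    H = psf_to_otf(psf, image.shape)
    G = np.conj(H) / (np.abs(H) ** 2 + K)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * G))
```

Displaying wiener_sharpen(img, psf) instead of img means the perceived spectrum becomes F(img)·|H|²/(|H|² + K), which is close to F(img) wherever the eye's transfer function H is not too small; K prevents division blow-up at the frequencies the blur destroys.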
28. OUR EXPERIMENT
(a) (b)
Figure 5: Location of subjects relative to reference images placed at
25 cm (a) and 500 cm (b) from the subjects’ eyes.
capable of presenting stereo imagery at 60 Hz with a maximum resolution of 960×540 per eye. The focal distance of the display was both independently measured and confirmed by the manufacturer
30. MATCHING BLUR: REAL & VIRTUAL
DAMIEN CONSTANTINE ROMPAPAS, AITOR ROVIRA, SEI IKEDA, ALEXANDER
PLOPSKI, TAKAFUMI TAKETOMI, CHRISTIAN SANDOR, AND HIROKAZU KATO.
EYEAR: REFOCUSABLE AUGMENTED REALITY CONTENT THROUGH EYE
MEASUREMENTS. DEMO AT IEEE INTERNATIONAL SYMPOSIUM ON MIXED AND
AUGMENTED REALITY, MERIDA, MEXICO, SEPTEMBER 2016. BEST DEMO AWARD
users can focus their eyes on any part of the scene, and the CG will always reflect the focus of the user's eye (see Figure 1.7 for an example).
Figure 1.5: Example of a user looking into the box enclosure. Left: Without EyeAR, one can observe the DoF mismatch between CG (white hat) and real scene (dragon). Right: With EyeAR, the CG's DoF accurately matches the natural DoF of the real scene.
Because EyeAR is able to create accurate DoF images on an OST-HMD display, all HMDs should include this functionality. However, the applications of EyeAR are not limited to creating indistinguishable AR content, as our system directly measures the user's eye. For example, SharpView (Oshima et al., 2015) sharpens content displayed on the HMD by approximating the user's eye point spread function based on the user's eye pupil.
Typical AR scene on an OST-HMD with the user focusing on the objects in the back: for objects in front there is a DoF mismatch between CG (hat) and real scene, highlighted with the white circle.
OUR DISPLAY
MOST DISPLAYS
34. OUR FIRST AR TURING TEST
(Figure 7 plot; y-axis: correct guesses (%), range 0.4–0.8; x-axis: virtual pillar, Green (0.25 m), Blue (0.375 m), Red (0.5 m); lines: autorefractometer on / off)
Figure 7: Overall percentage of correct guesses for each pillar when the autorefractometer was on (red line) and off (blue line).
12 PARTICIPANTS
12 GUESSES
VIRTUAL
REAL
36. DISPLAYSSharpView: Improved Clarity of Defocused C
See-Through Head-Mounted Dis
Kohei Oshima⇤ † Kenneth R Moser⇤ ‡ Damien Constantine Rompapas†
Goshiro Yamamoto† Takafumi Taketomi† Christian San
†Interactive Media Design Laboratory
Nara Institute of Science and Technology
‡Computer Science & Engineering
Mississippi State University
(a) (b) (c)
Figure 1: The cause and effect of focus blur in Optical See-Through (OST) Head-Mounted Displa
HMD and related hardware used in our study. (b) Simplified schematic of an OST AR system. B
and real world imagery are viewed at unequal focal distances. (c), (d), (e): Views through an O
world image (c) is in focus, causing the virtual image (d) to appear blurred; (e) an improved virtua
SharpView: Improved Clarity of Defocused Content on Optical
See-Through Head-Mounted Displays
Kohei Oshima⇤ † Kenneth R Moser⇤ ‡ Damien Constantine Rompapas† J. Edward Swan II‡ Sei Ikeda§
Goshiro Yamamoto† Takafumi Taketomi† Christian Sandor† Hirokazu Kato†
†Interactive Media Design Laboratory
Nara Institute of Science and Technology
‡Computer Science & Engineering
Mississippi State University
§Mobile Computing Laboratory
Ritsumeikan University
Figure 1: The cause and effect of focus blur in Optical See-Through (OST) Head-Mounted Display (HMD) systems. (a) A user wearing the OST
HMD and related hardware used in our study. (b) Simplified schematic of an OST AR system. Blurring occurs when the virtual display screen
and real world imagery are viewed at unequal focal distances. (c), (d), (e): Views through an OST Augmented Reality system, where the real
world image (c) is in focus, causing the virtual image (d) to appear blurred; (e) an improved virtual image after application of SharpView.
ABSTRACT
Augmented Reality (AR) systems, which utilize optical see-through head-mounted displays, are becoming more commonplace, with several consumer-level options already available, and the promise of additional, more advanced, devices on the horizon. A common factor among current-generation optical see-through devices, though, is a fixed focal distance to virtual content. While fixed focus is not a concern for video see-through AR, since both virtual and real-world imagery are combined into a single image by the display, unequal distances between real-world objects and the virtual display screen in optical see-through AR are unavoidable.
In this work, we investigate the issue of focus blur, in particular, the blurring caused by simultaneously viewing virtual content and physical objects in the environment at differing focal distances. [...]
Index terms (excerpt): Multimedia Information Systems—Artificial, augmented, and virtual realities; I.4.4 [Image Processing and Computer Vision]: Restoration—Wiener filtering
1 INTRODUCTION
Optical See-Through (OST) Head-Mounted Displays (HMDs) have seen an increase in both popularity and accessibility with the release of several consumer-level options, including Google Glass and the Epson Moverio BT-200, and announced future offerings, such as Microsoft's HoloLens, on the horizon. The transparent display technology used in these HMDs affords a unique experience, allowing the user to view on-screen computer-generated (CG) content while maintaining a direct view of their environment, a property extremely well suited for augmented reality (AR) systems. [...]
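SharpView's core idea, precompensating the displayed image for the eye's point spread function so that the optical blur largely cancels, can be sketched with a Wiener filter. A minimal sketch assuming a Gaussian PSF; parameter values are illustrative and this is not the paper's implementation:

```python
import numpy as np

def gaussian_psf_fft(shape, sigma):
    """Frequency response of a Gaussian blur with std `sigma` pixels
    (the Fourier transform of a Gaussian is again a Gaussian)."""
    h, w = shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    return np.exp(-2.0 * np.pi ** 2 * sigma ** 2 * (fx ** 2 + fy ** 2))

def precompensate(image, sigma, nsr=1e-2):
    """Wiener-filter the image by the inverse PSF, so that a subsequent
    optical blur by the same PSF approximately restores the original.
    `nsr` regularizes frequencies the PSF almost destroys."""
    H = gaussian_psf_fft(image.shape, sigma)
    W = H / (H ** 2 + nsr)                 # regularized inverse (H is real)
    out = np.fft.ifft2(np.fft.fft2(image) * W).real
    return np.clip(out, 0.0, 1.0)          # the display can only show [0, 1]
```

The achievable sharpening is bounded by the display's dynamic range: strong precompensation rings outside [0, 1] and must be clipped, which is one reason such filtering can reduce, but not fully cancel, focus blur.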
Subjective Evaluation of a Semi-Automatic Optical See-Through Head-Mounted Display Calibration Technique
Kenneth R. Moser, Yuta Itoh, Kohei Oshima, J. Edward Swan II, Gudrun Klinker, and Christian Sandor
Figure 1. (a) Display and camera system. (b) Task layout. (c) Pillars task. (d) Cubes task.
With the growing availability of optical see-through (OST) head-mounted displays (HMDs), there is a present need for robust calibration methods suited for non-expert users. This work presents the results of a user study which examines registration accuracy produced by three OST HMD calibration methods: (1) SPAAM, (2) Degraded SPAAM, and (3) Recycled INDICA, a recently developed semi-automatic calibration method. Accuracy metrics used include subject-provided quality values and error between perceived and absolute registration coordinates. Our results show all three methods produce very accurate registration in the horizontal direction but caused subjects to perceive the [...]
IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, VOL. 21, NO. 4, APRIL 2015
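SPAAM-style calibration reduces to estimating a 3x4 projection matrix from 2D-3D correspondences collected as the user aligns an on-screen crosshair with a tracked world point; the standard solver is the Direct Linear Transform. A minimal sketch with synthetic data (a real calibration would use the user-collected alignments):

```python
import numpy as np

def dlt_projection(points_3d, points_2d):
    """Estimate P (3x4) with x ~ P X from >= 6 correspondences via DLT:
    each pair contributes two linear constraints on the 12 entries of P,
    and the least-squares solution is the smallest right singular vector."""
    A = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    return vt[-1].reshape(3, 4)

def project(P, X):
    """Apply P to a 3D point and dehomogenize to pixel coordinates."""
    x = P @ np.array([X[0], X[1], X[2], 1.0])
    return x[:2] / x[2]
```

Semi-automatic methods like INDICA reduce the number of manual alignments the user must perform to constrain this matrix.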
PHILOSOPHY: TRUE AUGMENTED REALITY
There have been a number of shape displays based on pin
architecture. The FEELEX project [14] was one of the early
attempts to design combined shapes and computer graphics
displays that can be explored by touch. FEELEX consisted
of several mechanical pistons actuated by motors and
covered by a soft silicone surface. The images were projected
onto its surface and synchronized with the movement of the
pistons, creating simple shapes.
Lumen [32] is a low resolution, 13 by 13-pixel, bit-map
display where each pixel can also physically move up and
down (Figure 4). The resulting display can present both 2D
graphic images and moving physical shapes that can be
observed, touched, and felt with the hands. [...]
Figure 2.7: Hand-fixed reference frame: Augmentations move with the user's hand. The example shows a user discussing a virtual map; to view the map from different angles, he can pick it up from his belt and put it in his hand.
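A shape display like Lumen is driven by mapping each frame to per-pin target heights and slewing the slow actuators toward them. A minimal sketch; the grid size matches Lumen's 13 x 13 array, but all names and rate limits are hypothetical, not from the Lumen hardware:

```python
# Sketch: one actuation tick for a Lumen-style pin display.
GRID = 13          # Lumen used a 13 x 13 pin array
MAX_HEIGHT = 1.0   # normalized pin travel

def step_heights(current, frame, max_step=0.1):
    """Move each pin toward frame[i][j] * MAX_HEIGHT, but by at most
    `max_step` of travel per tick, since physical actuators move far
    slower than the image refresh."""
    nxt = []
    for row_c, row_f in zip(current, frame):
        out = []
        for h, v in zip(row_c, row_f):
            target = max(0.0, min(MAX_HEIGHT, v * MAX_HEIGHT))
            delta = max(-max_step, min(max_step, target - h))
            out.append(h + delta)
        nxt.append(out)
    return nxt
```

Calling this once per frame makes the surface animate smoothly toward each new shape instead of snapping.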
39. RESEARCH IN CANON
CHRISTIAN SANDOR, TSUYOSHI KUROKI, AND SHINJI UCHIYAMA. INFORMATION
PROCESSING METHOD AND DEVICE FOR PRESENTING HAPTICS RECEIVED FROM A
VIRTUAL OBJECT. JAPANESE PATENT 2006117732 (FILED 4/2006). PATENT IN CHINA,
EUROPE, AND US 8,378,997 (FILED 19 APRIL 2007). HTTP://GOO.GL/V3DAX
40. RESEARCH IN CANON
CHRISTIAN SANDOR, SHINJI UCHIYAMA, AND HIROYUKI YAMAMOTO. VISUO-
HAPTIC SYSTEMS: HALF-MIRRORS CONSIDERED HARMFUL. IN PROCEEDINGS OF
THE IEEE WORLD HAPTICS CONFERENCE, PAGES 292–297. IEEE, MARCH 2007.
TSUKUBA, JAPAN.
45. DISPLAYS APPLICATIONS
46. EDGE-BASED X-RAY
BENJAMIN AVERY, CHRISTIAN SANDOR, BRUCE H. THOMAS. IMPROVING SPATIAL
PERCEPTION FOR AUGMENTED REALITY X-RAY VISION. IN PROCEEDINGS OF THE IEEE VIRTUAL
REALITY CONFERENCE, PAGES 79–82. IEEE, MARCH 2009. LAFAYETTE, LOUISIANA, USA.
47.
48. SALIENCY X-RAY
CHRISTIAN SANDOR, ANDREW CUNNINGHAM, ARINDAM DEY, AND VILLE-VEIKKO
MATTILA. AN AUGMENTED REALITY X-RAY SYSTEM BASED ON VISUAL SALIENCY. IN
PROCEEDINGS OF THE IEEE INTERNATIONAL SYMPOSIUM ON MIXED AND
AUGMENTED REALITY, PAGES 27–36, SEOUL, KOREA, OCTOBER 2010.
49. SALIENCY X-RAY
CHRISTIAN SANDOR, ANDREW CUNNINGHAM, AND VILLE-VEIKKO MATTILA.
METHOD AND APPARATUS FOR AN AUGMENTED REALITY X-RAY. US PATENT
APPLICATION 12/785,170 (FILED 21 MAY 2010). HTTP://GOO.GL/NCVZJ
50. MELTING
CHRISTIAN SANDOR, ANDREW CUNNINGHAM, ULRICH ECK, DONALD URQUHART, GRAEME JARVIS,
ARINDAM DEY, SEBASTIEN BARBIER, MICHAEL R. MARNER, SANG RHEE. EGOCENTRIC SPACE-DISTORTING
VISUALIZATIONS FOR RAPID ENVIRONMENT EXPLORATION IN MOBILE MIXED REALITY. IN PROCEEDINGS
OF THE IEEE VIRTUAL REALITY CONFERENCE, PAGES 47–50, WALTHAM, MA, USA, MARCH 2010.
55. Rehabilitation & Sports Medicine: Frozen Shoulder
Range of Motion Exercises:
- Pendulum (Circular) [SHOULDER - 26]: Let arm move in a circle clockwise, then counter-clockwise, by rocking body weight in a circular pattern. Repeat 10 times. Do 3-5 sessions per day.
- Flexion (Self-Stretching) [SHOULDER - 7]: Sitting upright, slide forearm forward along table, bending from waist until a stretch is felt. Hold 30 seconds. Repeat 1-4 times. Do 1 session per day.
- External Rotation, alternate (Self-Stretching) [SHOULDER - 11]: Keep palm of hand against door frame, and elbow bent at 90°. Turn body from fixed hand until a stretch is felt. Hold 30 seconds. Repeat 1-4 times. Do 1 session per day.
- Abduction (Self-Stretching) [SHOULDER - 9]: With arm resting on table, palm up, bring head down toward arm and simultaneously move trunk away from table. Hold 30 seconds. Repeat 1-4 times. Do 1 session per day.
- Towel Stretch for Internal Rotation [SHOULDER - 73]: Pull involved arm up behind back by pulling towel upward with other arm. Hold 30 seconds. Repeat 1-4 times. Do 1 session per day.
- Scap Sets: Pull your shoulders back, pinching the shoulder blades together. Do not let the shoulders come forward. Hold 5-10 seconds. Repeat 10 times. Do 1 session per day.
FUTURE WORK: MEDICAL APPLICATIONS
56. FUTURE WORK: MEDICAL APPLICATIONS
COURTESY OF HTTP://CAMPAR.IN.TUM.DE/MAIN/FELIXBORK
57. DISPLAYS APPLICATIONS
62. PHILOSOPHY: TRUE AUGMENTED REALITY
CONCLUSIONS
SUMMARY
AR: EXTREMELY HIGH POTENTIAL (UNLIKE VR)
INTERDISCIPLINARY: COMPUTER GRAPHICS, COMPUTER VISION,
OPTICS, PERCEPTION RESEARCH
REQUEST
CHAT TO ME AT IDW! LOOKING FOR GOOD COLLABORATORS
CHRISTIAN@SANDOR.COM
SLIDES WILL BE ONLINE WITHIN ONE HOUR!
HTTP://WWW.SLIDESHARE.NET/CHRISTIANSANDOR