This document presents an appearance-based method for representing and rendering cast shadows without explicit geometric modeling. A cubemap-like illumination array is constructed to sample shadow images on a plane. The sampled object and shadow images are represented using Haar wavelets. This allows rendering of shadows onto an arbitrary 3D background by linearly combining the wavelet basis images based on the scene geometry and lighting. Experiments demonstrate that soft, realistic shadows can be rendered this way under novel illumination distributions specified by environment maps.
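The core rendering step described above can be sketched in a few lines: a one-level 2D Haar decomposition of a sampled shadow image, and the linear combination of sampled basis images weighted by light intensities. This is a minimal illustration under assumptions of my own (function names, a single decomposition level, and directional-light weights), not the paper's implementation.

```python
import numpy as np

def haar_decompose(img):
    """One level of the 2D Haar transform: approximation + 3 detail bands."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    return ((a + b + c + d) / 4,   # approximation (local average)
            (a - b + c - d) / 4,   # horizontal detail
            (a + b - c - d) / 4,   # vertical detail
            (a - b - c + d) / 4)   # diagonal detail

def render_shadow(basis_images, light_intensities):
    """Linearly combine sampled shadow images, weighting each one by the
    intensity of the corresponding light in the environment map."""
    return sum(w * img for w, img in zip(light_intensities, basis_images))
```

Because shadow formation is linear in the lights, rendering under a novel illumination distribution reduces to re-weighting the same sampled (wavelet-compressed) images rather than recomputing geometry.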
3D Reconstruction from Multiple Uncalibrated 2D Images of an Object (Ankur Tyagi)
3D reconstruction is the process of capturing the shape and appearance of real objects. In this project we use passive methods, which rely only on sensors that measure the radiance reflected or emitted by the object's surface to infer its 3D structure.
Visual Hull Construction from Semitransparent Coloured Silhouettes (ijcga)
This paper attempts to create coloured semi-transparent shadow images that can be projected onto multiple screens simultaneously from different viewpoints. The inputs to this approach are a set of coloured shadow images together with view angles, projection information and light configurations for the final projections. We propose a method to convert the coloured semi-transparent shadow images into a 3D visual hull. A SHADOWPIX-style method is used to incorporate varying-ratio RGB values for each voxel, computing the desired image independently for each viewpoint from an arbitrary angle. An attenuation factor is used to fade the coloured shadow images beyond a certain distance. The end result is a continuous animated image that changes with the rotated projection of the transparent visual hull.
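The visual-hull idea underlying this approach can be sketched under strong simplifying assumptions of my own: three orthographic views along the coordinate axes and binary rather than coloured silhouettes, with the per-voxel RGB ratios and the attenuation factor omitted. A voxel survives only if its projection lies inside every silhouette.

```python
import numpy as np

def visual_hull(sil_x, sil_y, sil_z):
    """Carve an n*n*n voxel grid from three orthographic binary silhouettes,
    one per coordinate axis. A voxel is kept only when its projection falls
    inside all three silhouettes."""
    n = sil_x.shape[0]
    hull = np.ones((n, n, n), dtype=bool)
    for axis, sil in ((0, sil_x), (1, sil_y), (2, sil_z)):
        # broadcast each 2-D silhouette along its own viewing axis
        hull &= np.expand_dims(sil, axis=axis)
    return hull
```

With perspective projections the broadcast would be replaced by projecting each voxel centre through the known light/projector configuration, but the intersection logic is the same.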
Vision plays the most important role in human perception, but it is limited to the visual band of the electromagnetic spectrum; this raises the need for radar imaging systems that can recover sources outside the human visual band. This paper presents a new algorithm for Synthetic Aperture Radar (SAR) image segmentation based on a thresholding technique. Entropy-based image thresholding has received sustained interest in recent years and is an important concept in image processing. Pal (1996) proposed a cross-entropy thresholding method based on a Gaussian distribution for bi-modal images. Our method is derived from Pal's method: it segments images using cross-entropy thresholding based on a Gamma distribution and can handle both bi-modal and multimodal images. It was tested on SAR images and gave good, encouraging results for both bi-modal and multimodal images.
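A generic minimum cross-entropy threshold (in the style the Pal-derived method builds on) can be sketched as an exhaustive search over candidate thresholds, modelling each class by its mean; the paper's Gamma-distribution class model is not reproduced here, so this is a simplified baseline, not the proposed algorithm.

```python
import numpy as np

def min_cross_entropy_threshold(img):
    """Pick the threshold t minimising the cross entropy between the image
    and its two-level version (each class modelled by its mean)."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    g = np.arange(256, dtype=float)
    best_t, best_eta = None, np.inf
    for t in range(1, 256):
        w0, w1 = hist[:t].sum(), hist[t:].sum()
        if w0 == 0 or w1 == 0:        # degenerate split, skip
            continue
        m0 = (g[:t] * hist[:t]).sum() / w0   # below-threshold class mean
        m1 = (g[t:] * hist[t:]).sum() / w1   # above-threshold class mean
        # criterion: -sum(g*h*log m) over each class (constant terms dropped)
        eta = (-(g[:t] * hist[:t]).sum() * np.log(m0 + 1e-12)
               - (g[t:] * hist[t:]).sum() * np.log(m1 + 1e-12))
        if eta < best_eta:
            best_t, best_eta = t, eta
    return best_t
```

For multimodal images the same criterion can be applied recursively to each class, which is the kind of extension the Gamma-based variant targets.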
Object Detection for Service Robot Using Range and Color Features of an Image (IJCSEA Journal)
In real-world applications, service robots need to locate and identify objects in a scene. A range sensor provides a robust estimate of depth information, which is useful for accurately locating objects in a scene; color information, on the other hand, is an important property for the object recognition task. The objective of this paper is to detect and localize multiple objects within an image using both range and color features. The proposed method uses 3D shape features to generate promising hypotheses within range images and verifies these hypotheses using features obtained from both range and color images.
IMAGE SEGMENTATION BY USING THRESHOLDING TECHNIQUES FOR MEDICAL IMAGES (cseij)
Image binarization is the process of separating pixel values into two groups: black as background and white as foreground. Thresholding can be categorized into global thresholding and local thresholding. This paper describes a locally adaptive thresholding technique that removes the background by using the local mean and standard deviation; thresholding is the most common and simplest approach to segmenting an image. In this work we present an efficient implementation of thresholding and give a detailed comparison of the Niblack and Sauvola local thresholding algorithms, implemented on medical images. The quality of the segmented image is measured by statistical parameters: Jaccard Similarity Coefficient and Peak Signal to Noise Ratio (PSNR).
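The two compared thresholds follow directly from their published formulas; the sketch below uses a naive sliding window and the commonly cited default constants (window size, k, R), which are assumptions rather than the values used in this paper.

```python
import numpy as np

def local_stats(img, w):
    """Local mean and standard deviation over a (2w+1)x(2w+1) window
    (naive sliding window; integral images would make this fast)."""
    pad = np.pad(img.astype(float), w, mode='reflect')
    mean = np.empty(img.shape)
    std = np.empty(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = pad[i:i + 2 * w + 1, j:j + 2 * w + 1]
            mean[i, j], std[i, j] = win.mean(), win.std()
    return mean, std

def niblack(img, w=7, k=-0.2):
    """Niblack: T = m + k*s; k is commonly set to -0.2."""
    m, s = local_stats(img, w)
    return img > m + k * s

def sauvola(img, w=7, k=0.5, R=128):
    """Sauvola: T = m * (1 + k*(s/R - 1)); R is the std dynamic range."""
    m, s = local_stats(img, w)
    return img > m * (1 + k * (s / R - 1))
```

Sauvola's normalisation by R makes the threshold degrade gracefully in flat, low-contrast regions, which is why it is usually preferred over Niblack on noisy medical scans.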
APPLYING R-SPATIOGRAM IN OBJECT TRACKING FOR OCCLUSION HANDLING (sipij)
Object tracking is one of the most important problems in computer vision. The aim of video tracking is to extract the trajectory of a target or object of interest, i.e. to accurately locate a moving target in a video sequence and discriminate the target from non-targets in the sequence's feature space; feature descriptors therefore have a significant effect on such discrimination. In this paper, we use the basic idea shared by many trackers, which consists of three main components of the reference model: object modeling, object detection and localization, and model updating. However, there are major improvements in our system. Our fourth component, occlusion handling, utilizes the r-spatiogram to detect the best target candidate. While a spatiogram stores moments of the pixel coordinates, the r-spatiogram computes region-based compactness of the distribution of the given feature in the image, capturing richer features to represent objects. The proposed research develops an efficient and robust way to keep tracking the object throughout video sequences in the presence of significant appearance variations and severe occlusions. The method is evaluated on the Princeton RGBD tracking dataset on sequences with different challenges, and the results demonstrate its effectiveness.
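The base descriptor that the r-spatiogram extends can be sketched as a histogram whose bins also store spatial statistics. The greyscale input and bin count below are assumptions for brevity, and the r-spatiogram's region-based compactness term is not reproduced here.

```python
import numpy as np

def spatiogram(img, bins=8):
    """Second-order spatiogram: for each intensity bin, the pixel count plus
    the mean and covariance of the coordinates of the pixels in that bin."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    idx = (img.astype(int) * bins) // 256
    out = []
    for b in range(bins):
        mask = idx == b
        n = int(mask.sum())
        if n == 0:
            out.append((0, np.zeros(2), np.zeros((2, 2))))
            continue
        pts = np.stack([ys[mask], xs[mask]], axis=1).astype(float)
        mu = pts.mean(axis=0)
        cov = np.cov(pts.T) if n > 1 else np.zeros((2, 2))
        out.append((n, mu, cov))
    return out
```

Because each bin remembers *where* its pixels sit, two regions with identical histograms but different spatial layouts produce different spatiograms, which is what makes the descriptor discriminative under occlusion.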
SHADOW DETECTION USING TRICOLOR ATTENUATION MODEL ENHANCED WITH ADAPTIVE HISTOGRAM EQUALIZATION (ijcsit)
Shadows create significant problems in many computer vision and image analysis tasks such as object recognition, object tracking, and image segmentation. For a machine, it is very difficult to distinguish between a shadow and a real object; as a result, an object recognition system may incorrectly recognize a shadow region as an object, so detecting shadows in images will enhance the performance of many machine vision tasks. This paper implements a shadow detection method based on the Tricolor Attenuation Model (TAM) enhanced with adaptive histogram equalization (AHE). TAM uses the concept that intensity attenuation of pixels in the shadow region differs across the three color channels: if the minimum-attenuated color channel is subtracted from the maximum-attenuated one, the shadow areas become darker in the resulting TAM image. But this resulting image has low contrast due to the high correlation among the R, G and B color channels, so adaptive histogram equalization is used to enhance the contrast. The incorporation of AHE significantly improved the quality of the detected shadow region.
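The channel-difference step and the contrast enhancement can be sketched as follows. The channel indices are scene-dependent assumptions (which channel is most/least attenuated depends on the illumination), and global histogram equalisation stands in for AHE to keep the sketch short.

```python
import numpy as np

def tam_image(img, max_ch=0, min_ch=2):
    """TAM-style image: difference between the most- and least-attenuated
    channels (channel indices here are illustrative assumptions)."""
    diff = img[..., max_ch].astype(int) - img[..., min_ch].astype(int)
    return np.clip(diff, 0, 255).astype(np.uint8)

def hist_equalize(gray):
    """Global histogram equalisation, a simple stand-in for AHE."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf = 255 * cdf / cdf[-1]          # map CDF to the full [0, 255] range
    return cdf[gray.astype(np.uint8)].astype(np.uint8)
```

In the TAM image, shadow pixels end up with small difference values; equalisation then stretches that low-contrast range so the shadow region can be separated by a simple threshold.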
[3D Study Group @ Kanto] Deep Reinforcement Learning of Volume-guided Progressive View Inpainting for 3D Point Scene Completion from a Single Depth Image (Seiya Ito)
Slides from the 5th meeting of the 3D Study Group @ Kanto (3D勉強会@関東), covering the CVPR 2019 (oral) paper of the same title.
A Method of Survey on Object-Oriented Shadow Detection & Removal for High Res... (IJERA Editor)
High-resolution remote sensing images offer great possibilities for urban mapping; unfortunately, shadows cast by buildings cause problems in this setting. This paper focuses on obtaining high-resolution colour remote sensing images and on removing shaded regions in both urban and rural areas. A region-growing thresholding algorithm is used to detect the shadow and extract features from the shadow region, then to determine whether neighbouring pixels are added to the seed points. In the region-growing threshold algorithm, pixels are placed in a region based on their own properties or the properties of nearby pixel values; pixels with similar properties are grouped together throughout the image. IOOPL matching is used to remove the shadow from the image. The method is shown to remove about 80% of the shaded region efficiently.
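The region-growing step described above can be sketched as a breadth-first flood from a seed pixel; the 4-connectivity and the intensity tolerance below are generic choices, not the thresholds used in this paper.

```python
from collections import deque
import numpy as np

def region_grow(img, seed, tol=10):
    """4-connected region growing: a neighbouring pixel joins the region
    when its intensity is within `tol` of the seed pixel's value."""
    h, w = img.shape
    region = np.zeros((h, w), dtype=bool)
    region[seed] = True
    seed_val = int(img[seed])
    q = deque([seed])
    while q:
        i, j = q.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if (0 <= ni < h and 0 <= nj < w and not region[ni, nj]
                    and abs(int(img[ni, nj]) - seed_val) <= tol):
                region[ni, nj] = True
                q.append((ni, nj))
    return region
```

Seeding inside a dark (shadow-candidate) area grows exactly the connected shadow region, after which features can be extracted from the region mask for the IOOPL matching stage.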
This paper evaluates two popular segmentation algorithms: the mean shift-based segmentation algorithm and a graph-based segmentation scheme. A hybrid method combining the two is also considered.
Abstract: Image segmentation plays a vital role in image processing, and research in this area remains relevant due to its wide applications. Image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain visual characteristics. Sometimes it becomes necessary to calculate the total number of colors in a given RGB image in order to quantize the image, for example to detect cancer or a brain tumour. The goal of this paper is to identify the best algorithm for image segmentation. Keywords: image segmentation, RGB
Shadow Detection and Removal in Still Images by using Hue Properties of Color... (ijsrd.com)
This paper reviews shadow detection and removal in still images. No prior information, such as background images, is used to find the shadows. Shadows are a very challenging issue for computer vision systems: they affect the ability of artificial-intelligence-based machines to detect a particular object correctly, since shadows are also picked up and detected as false-positive objects. In surveillance, they likewise hinder the proper tracking of humans, for example at airports. We propose a method that eliminates shadows better than existing methods: the RGB space of the images is used, and some morphological operations are also applied to obtain better results.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Integral Imaging Three-Dimensional (3-D) (IJCI Journal)
In this paper, a three-dimensional (3-D) integral imaging (II) system that improves the viewing angle by using multiple illuminations is proposed. In this system, three collimated illuminations are directed at three different angles in order to widen the propagation angle of the point light source (PLS). Among the three illuminations, the two slanted ones increase the propagation angle of the PLS over the conventional method. Simulation results show that the viewing angle of the proposed PLS display is three times larger than that of conventional PLS displays. In the simulation, LightTools 6.3 was used to reconstruct an object.
A Review over Different Blur Detection Techniques in Image Processing (paperpublications3)
Abstract: In the last few years there has been a lot of development and attention in the area of blur detection techniques. Blur detection techniques are very helpful in real-life applications and are used in image segmentation, image restoration and image enhancement; they serve to remove the blur from a blurred region of an image caused by camera defocus or object motion. In this literature review we present several blur detection techniques, such as blind image deconvolution, low depth of field, edge sharpness analysis, and low directional high-frequency energy. After studying these techniques, we find that considerable future work is still required to develop a perfect and effective blur detection technique.
Automatic rectification of perspective distortion from a single image using p... (ijcsa)
Perspective distortion occurs due to the perspective projection of a 3D scene onto a 2D surface. Correcting the distortion of a single image without losing any desired information is one of the challenging tasks in the field of computer vision. We consider the problem of estimating perspective distortion from a single still image of an unstructured environment and making a perspective correction that is both quantitatively accurate and visually pleasing. Corners are detected based on the orientation of the image. A method based on plane homography and transformation is used to make the perspective correction. The algorithm infers frontier information directly from the images, without any reference objects or prior knowledge of the camera parameters; the frontiers are detected using geometric-context-based segmentation. The goal of this paper is to present a framework providing fully automatic and fast perspective correction.
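The plane-homography step that such a correction relies on can be sketched with the standard four-point DLT estimate; the corner detection and geometric-context segmentation stages are not reproduced, and the point coordinates below are illustrative.

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography mapping src -> dst from four point
    pairs using the Direct Linear Transform (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # the homography is the null vector of A (last row of V^T)
    _, _, vt = np.linalg.svd(np.array(A, dtype=float))
    return vt[-1].reshape(3, 3)

def apply_h(H, p):
    """Apply a homography to a 2-D point (homogeneous normalisation)."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]
```

Rectification then amounts to mapping the detected distorted quadrilateral (e.g. a building facade) onto a fronto-parallel rectangle and warping the whole image through the resulting H.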
LOCAL DISTANCE AND DEMPSTER-SHAFER FOR MULTI-FOCUS IMAGE FUSION (sipij)
This work proposes a new method of image fusion using Dempster-Shafer theory and local variability (DST-LV). The method takes into account the behaviour of each pixel with respect to its neighbours: it computes the quadratic distance between the value of the pixel I(x, y) at each point and the values of all the neighbouring pixels. Local variability is used to determine the mass function defined in Dempster-Shafer theory; the two classes studied are the fuzzy (out-of-focus) part and the focused part. The results of the proposed method are significantly better when compared to the results of other methods.
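The local-variability measure at the heart of DST-LV can be sketched as a per-pixel mean squared distance to the neighbourhood; the window size and normalisation are assumptions here, and the Dempster-Shafer mass assignment and combination rule are not reproduced.

```python
import numpy as np

def local_variability(img, w=1):
    """Per-pixel local variability: mean squared distance between a pixel's
    value and the values in its (2w+1)x(2w+1) neighbourhood."""
    pad = np.pad(img.astype(float), w, mode='edge')
    h, ht = img.shape
    var = np.zeros((h, ht))
    for i in range(h):
        for j in range(ht):
            win = pad[i:i + 2 * w + 1, j:j + 2 * w + 1]
            # self-distance is zero, so dividing by (size - 1) averages
            # over the neighbours only
            var[i, j] = ((win - img[i, j]) ** 2).sum() / (win.size - 1)
    return var
```

Sharp (in-focus) regions exhibit high local variability while defocused regions are smooth, so this map is a natural input for assigning mass to the "focused" vs "fuzzy" classes.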
Depth of Field Image Segmentation Using Saliency Map and Energy Mapping Techn... (ijsrd.com)
In image processing, depth of field can be used to segment the relevant object from an image; depth of field is the space between the nearest and farthest objects in a scene. The objective of this work is to segment the image using low depth of field, with unsupervised segmentation used to find the low-depth-of-field image. A saliency map and a curve evaluation method are created and initialised for the image, and an energy map is employed to obtain the desired result. A Lipschitz function is used to generate the mathematical representation, and various iteration methods show the graphical representation of the image. The segmented results demonstrate object detection in an image.
Similar to APPEARANCE-BASED REPRESENTATION AND RENDERING OF CAST SHADOWS
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
GridMate - End to end testing is a critical piece to ensure quality and avoid...ThomasParaiso2
End to end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
APPEARANCE-BASED REPRESENTATION AND RENDERING OF CAST SHADOWS

International Journal of Computer Graphics & Animation (IJCGA) Vol.3, No.3, July 2013
DOI: 10.5121/ijcga.2013.3301

Dong-O Kim1, Sang Wook Lee2, and Rae-Hong Park1

1 Department of Electronic Engineering, School of Engineering, Sogang University, Seoul, Korea
hyssop@sogang.ac.kr, rhpark@sogang.ac.kr
2 Department of Media Technology, Graduate School of Media, Sogang University, Seoul, Korea
slee@sogang.ac.kr
ABSTRACT
Appearance-based approaches have the advantage of representing the variability of an object’s
appearance due to illumination change without explicit shape modeling. While a substantial amount of
research has been performed on the representation of an object’s diffuse and specular shading, little attention
has been paid to cast shadows, which are critical for attaining realism when multiple objects are present in
a scene. We present an appearance-based method for representing shadows and rendering them in a 3D
environment without explicit geometric modeling of the shadow-casting object. A cubemap-like illumination
array is constructed, and an object’s shadow under each cubemap light pixel is sampled on a plane. Only
the geometry of the shadow-sampling plane is known relative to the cubemap lighting. The sampled images
of both the object and its cast shadows are represented using the Haar wavelet and rendered on an
arbitrary 3D background of known geometry. Experiments are carried out to demonstrate the effectiveness
of the presented method.
KEYWORDS
Appearance, Diffuse, Specular, Shading, Rendering, Cubemap, Haar wavelet, Illumination
1. INTRODUCTION
The main advantage of the appearance-based representation is that without requiring any
information about object shape and BRDF, it accounts for the variability of object/image
appearance due to illumination or viewing change using a low-dimensional linear subspace. Much
research has been carried out on representing an object’s diffuse and specular appearance for image
synthesis and recognition. However, there have been few appearance-based approaches to shadow
synthesis and rendering, and this motivated us to develop a method to represent a shadow using
basis images for re-rendering and re-lighting. Shadows are critical for achieving realism when
multiple objects/backgrounds are rendered. Without shadows, it is impossible to realistically
render the image/object appearance into synthetic scenes.
Initial appearance-based approaches have represented a convex Lambertian object without
attached and cast shadows using a 3-D linear subspace determined with three images taken under
linearly independent lighting conditions [1,2]. Higher-dimensional linear subspaces have been
used for object recognition to deal with non-Lambertian surfaces, shadows and extreme
illumination conditions [3-6]. A set of basis images that spans a linear subspace is constructed by
applying PCA (principal component analysis) to a large number of images captured under
different light directions.
Instead of the PCA-driven basis images, analytically specified harmonic images have recently
been investigated for spanning a linear subspace. A harmonic image is an image under lighting
distribution specified by spherical harmonics. Basri and Jacobs used a 9-D linear subspace
defined with harmonic images for face recognition under varying illumination [8]. Lee et al. have
shown that input images taken under 9 lights can approximate a 9-D subspace spanned by
harmonic images for an object or a class of objects, and presented a method for determining 9
light source directions [10]. Harmonic images have been recently used for efficient object
rendering under complex illumination by Ramamoorthi and Hanrahan and by Sloan et al. [9, 12].
All of the above harmonic image-based approaches require object geometry and BRDF to
compute harmonic images. Despite all the progress in object modeling, obtaining an accurate
model for shape and BRDF from a complex real object is still a highly challenging problem. Sato
et al. developed a method for analytically deriving basis images of a convex object from its
images taken under a point light source without requiring an object model for shape and BRDF [7].
They also developed a method to determine a set of lighting directions based on the object’s
BRDF and the angular sampling theorem on spherical harmonics.
When an object’s images are captured under various illumination conditions, the images of the
object’s cast shadows can also be captured simultaneously with a plane placed on the opposite
side of the light. A set of those shadow images can be represented using appropriate basis
functions, and their linear combination can be re-projected for rendering onto new synthetic or
other 3-D scenes. We develop a method for representing shadow appearance for cubemap
illumination geometry and physically construct a cubemap-like lighting apparatus for shadow
image sampling/capture. The light-sampling distance is determined by the desired cubemap
resolution. In this specific illumination array, the size of the light pixel of the cubemap increases
as the sampling distance increases (or as the resolution decreases). Therefore, the size of the light
pixel limits the bandwidth of the cast shadows and prevents aliasing due to insufficient sampling.
We use the Haar wavelet for basis functions to represent appearance of both object and shadow.
There have been approaches to efficient object and shadow rendering using linear combination of
basis images. Nimeroff et al. [11] used pre-computed basis images for rendering a scene
including shadows and developed basis functions for natural skylight. Sloan et al. [12] used
spherical harmonics for pre-computed radiance transport (PRT), Ng et al. developed PRT
techniques based on wavelet and cubemap [13, 14], and Zhou et al. also used wavelet for
precomputed object and shadow fields (POF and PSF) under moving objects and illuminations
[15]. All of these approaches require object geometry and BRDF for pre-computing PRT or
POF/PSF. On the other hand, the goal of the work presented in this paper is to estimate shadow
projection in a cubemap using only sampled shadow images expanded on a set of wavelet basis
images. No model for object shape and BRDF is necessary and all that is required is the
calibration of the shadow sampling plane with respect to the light array.
The rest of this paper is organized as follows. Section 2 presents the proposed approach to the
sampling, representation and rendering of shadows into a 3-D scene. Section 3 shows
experimental results and Section 4 concludes this paper.
2. OBJECT AND SHADOW APPEARANCE
2.1. Light Cube and Image Capture
In computer graphics, cubemaps are frequently used for representing area lighting such as an
environment map. A wavelet basis is suitable for approximating the light distribution on a cubemap,
which may contain area lights that vary from the size of a cubemap pixel to the size of the entire sky. A light-
array cube is constructed as shown in Figure 1, and an image of the object and its shadow is captured
under each cube pixel light. For our experimental study, the cube has a 32×32 light array only on
the top side, and a shadow sampling plane was placed near the bottom side. Calibration is
performed to establish the relationship between the light array, shadow sampling plane and
camera. Each light pixel element is constructed as a rectangular compartment with a white power
LED on the top and a diffuser on the bottom, as illustrated in Figure 1. Seen from the
bottom, each light element looks like a uniformly bright square due to the diffuser, and the gap between
the light squares is less than 2 mm.
2.2. Shadow Sampling in Light Cube
Let us consider the shadow sampling-bandwidth relationship in the light cube. Figure 2 illustrates
shadow casting geometry and shadow intensity for a point-like small object. The relationship
between the distance $d_s$ between shadow samples and the distance $d_l$ between light sources is
given as:

$$d_s = \frac{h_s}{h_l}\, d_l, \qquad (1)$$

where $h_s$ and $h_l$ denote the distance between the object and the shadow sampling plane and that
between the light array plane and the object, respectively.
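As a concrete check of Equation (1), a minimal sketch; the geometry values below are hypothetical examples, not measurements from the paper's apparatus:

```python
def shadow_sample_spacing(d_l, h_s, h_l):
    """Spacing d_s of shadow samples on the sampling plane (Equation 1):
    the light spacing d_l scaled by the ratio of the object-to-plane
    distance h_s to the light-to-object distance h_l."""
    return (h_s / h_l) * d_l

# Hypothetical geometry (mm): lights 30 mm apart, object 200 mm below the
# light array and 100 mm above the shadow sampling plane.
d_s = shadow_sample_spacing(d_l=30.0, h_s=100.0, h_l=200.0)  # 15.0 mm
```

The spacing shrinks as the object approaches the sampling plane, which is why sampling is easiest directly beneath the object.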
Figure 1. Cubemap light array, light element and shadow sampling plane. (Each light element is a
50 mm × 30 mm × 30 mm compartment with a 5-watt white power LED behind a diffuser.)
Figure 2. Shadows for a point object: (a) geometry for a point light source, (b) geometry for an area
source, (c) intensity for a point light source, and (d) intensity for an area light source.
Point light sources cast hard shadows with wide bandwidth, as shown in Figures 2 (a) and (c). This
type of shadow cannot be rendered effectively, without aliasing, as a linear combination of images
sampled under a finite number of point light sources. Only in the restricted case where the shadow
sampling plane is directly beneath the object may the sampling rate be sufficient, owing to the
small $d_s$.
An area light source spreads the shadow shape and reduces the shadow bandwidth, as depicted in
Figure 2 (b) and (d). Although the spread is rectangular in the ideal case, it normally appears as a
smoother function, probably due to light diffraction at the source and scattering at the object. The
area sources closely packed in the light array simply blur the shadows and thus serve as an anti-
aliasing filter. Figure 3 shows the shadow casting geometry where the object and the shadow
sampling plane are both tilted with respect to the light array. In this case, the shadow spread is not
constant but depends on the ratio of $h_s$ to $h_l$, and thus provides a varying degree of anti-aliasing
blurring. This analysis should be valid for attached shadows within an object as well as for
sampled cast shadows.

Figure 3. Shadows for tilted object and shadow sampling plane.
2.3. Image Representation and Shadow Mapping
The intensity I observed at point x on an object surface can be written as
$$I(\mathbf{x}) = \int_0^{2\pi}\!\!\int_0^{\pi/2} L(\mathbf{x},\theta_i,\phi_i)\, V(\mathbf{x},\theta_i,\phi_i)\, \rho(\mathbf{x},\theta_i,\phi_i;\theta_o,\phi_o)\, \cos\theta_i \sin\theta_i \, d\theta_i \, d\phi_i, \qquad (2)$$
where $L$ denotes the lighting, $V$ is the visibility function which indicates whether the light
reaches a point on the surface, $\rho$ is the BRDF, and $(\theta_i,\phi_i)$ and $(\theta_o,\phi_o)$ denote the incoming
and outgoing directions of light, respectively. For a fixed camera, the term
$V(\mathbf{x},\theta_i,\phi_i)\,\rho(\mathbf{x},\theta_i,\phi_i;\theta_o,\phi_o)\cos\theta_i$ can be simplified to the reflectance kernel
$R_V(\mathbf{x},\theta_i,\phi_i)$, which includes the visibility information. Therefore, the intensity $I$ of a surface point can be written as:
$$I(\mathbf{x}) = \int_0^{2\pi}\!\!\int_0^{\pi/2} L(\mathbf{x},\theta_i,\phi_i)\, R_V(\mathbf{x},\theta_i,\phi_i)\, \sin\theta_i \, d\theta_i \, d\phi_i. \qquad (3)$$
Equation (3) can represent the intensity on both the object surface and the shadow sampling plane.
Instead of pre-computing $R_V(\mathbf{x},\theta_i,\phi_i)$ for each surface point with synthetic objects as in [14], we
use the shadow image on the shadow capturing plane as a light mask for the scene underneath the
object. A shadow image for novel illumination is synthesized by linearly combining basis images of
the sampled shadows. We employ the Haar wavelet for the representation and compression of the
large number of sampled shadow images. Although there have been few directly relevant studies
on the choice of basis for this type of shadow image, the wavelet showed better results than
spherical harmonics for the compression of a cubemap image [14] and for the estimation of
illumination from shadows [16].
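As an illustration of this representation, the following is a minimal NumPy sketch of an orthonormal 2D Haar transform with hard thresholding and linear recombination. The image sizes, weights, and random stand-in images are hypothetical; the paper's actual data are the 1024 captured samples:

```python
import numpy as np

def haar_2d(img):
    """Multi-level orthonormal 2D Haar transform of a square image whose
    side is a power of two (averages/details recursively on the LL block)."""
    out = img.astype(float).copy()
    n = out.shape[0]
    while n > 1:
        for axis in (0, 1):
            a = np.take(out[:n, :n], range(0, n, 2), axis=axis)
            b = np.take(out[:n, :n], range(1, n, 2), axis=axis)
            out[:n, :n] = np.concatenate(
                [(a + b) / np.sqrt(2), (a - b) / np.sqrt(2)], axis=axis)
        n //= 2
    return out

def inv_haar_2d(coef):
    """Inverse of haar_2d."""
    out = coef.astype(float).copy()
    n = 2
    while n <= out.shape[0]:
        for axis in (1, 0):  # undo the forward steps in reverse order
            half = n // 2
            avg = np.take(out[:n, :n], range(half), axis=axis)
            dif = np.take(out[:n, :n], range(half, n), axis=axis)
            rec = np.empty((n, n))
            even, odd = [slice(None)] * 2, [slice(None)] * 2
            even[axis], odd[axis] = slice(0, n, 2), slice(1, n, 2)
            rec[tuple(even)] = (avg + dif) / np.sqrt(2)
            rec[tuple(odd)] = (avg - dif) / np.sqrt(2)
            out[:n, :n] = rec
        n *= 2
    return out

def keep_largest(coef, k):
    """Non-linear approximation: keep the k largest-magnitude coefficients
    (ties at the threshold may keep a few more)."""
    thresh = np.sort(np.abs(coef).ravel())[-k]
    return np.where(np.abs(coef) >= thresh, coef, 0.0)

# Hypothetical 8x8 stand-ins for sampled shadow images, one per light pixel.
rng = np.random.default_rng(0)
samples = [rng.random((8, 8)) for _ in range(4)]
coeffs = [keep_largest(haar_2d(s), k=16) for s in samples]

# A novel illumination is a weighted sum of light pixels; by linearity the
# shadow image is the same weighted sum of the (compressed) coefficients.
weights = [0.5, 0.25, 0.15, 0.1]
novel = inv_haar_2d(sum(w * c for w, c in zip(weights, coeffs)))
```

Because the transform is linear, combining compressed coefficient sets and inverting once is equivalent to combining the reconstructed images, which is what makes the basis-image representation efficient for relighting.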
Shadow mapping from the shadow sampling plane to a synthetic scene is illustrated in Figure 4.
For a given vertex $\mathbf{v}$ in the synthetic scene and a light source in the direction $\mathbf{l}$, the relationship
between $\mathbf{v}$ and its projected point $\mathbf{p}$ in the shadow sampling plane $P: \mathbf{n}\cdot\mathbf{x}+d=0$ is given as $\mathbf{M}\mathbf{v}=\mathbf{p}$,
where the projection matrix M is defined as [17]:
$$\mathbf{M} = \begin{bmatrix}
\mathbf{n}\cdot\mathbf{l} + d - l_x n_x & -l_x n_y & -l_x n_z & -l_x d \\
-l_y n_x & \mathbf{n}\cdot\mathbf{l} + d - l_y n_y & -l_y n_z & -l_y d \\
-l_z n_x & -l_z n_y & \mathbf{n}\cdot\mathbf{l} + d - l_z n_z & -l_z d \\
-n_x & -n_y & -n_z & \mathbf{n}\cdot\mathbf{l}
\end{bmatrix} \qquad (4)$$
where $\mathbf{l}$ denotes the direction vector from the light source to the vertex and $\mathbf{n}$ is the surface normal. This
mapping should be performed for the constant area light using Equation (2). If the BRDF is
approximated as a constant $\bar{\rho}$ over the solid angle subtended by the area light source,
Equation (2) can be rewritten as:
$$I(\mathbf{x}) = \bar{\rho} \int_0^{2\pi}\!\!\int_0^{\pi/2} L(\mathbf{x},\theta_i,\phi_i)\, V(\mathbf{x},\theta_i,\phi_i)\, \cos\theta_i \sin\theta_i \, d\theta_i \, d\phi_i. \qquad (5)$$
This means that the mean of the shadow image intensities over the area subtending the solid angle
at $\mathbf{v}$ can be used as the irradiance weighting in rendering the vertex $\mathbf{v}$. In other words, a linearly
combined shadow image for an illumination distribution is a visibility map for the lighting elements
in the cube.
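The projection of Equation (4) can be sketched in a few lines of NumPy. This is a reconstruction of the standard projected-shadow matrix form cited from [17], and the plane, light, and vertex values below are hypothetical:

```python
import numpy as np

def shadow_projection_matrix(n, d, l):
    """Projection matrix M of Equation (4): maps a homogeneous vertex v to
    its projection p = Mv on the plane n.x + d = 0, with l treated as the
    homogeneous light vector (standard projected-shadow matrix form [17])."""
    n, l = np.asarray(n, float), np.asarray(l, float)
    l_bar = np.append(l, 1.0)   # (l_x, l_y, l_z, 1)
    plane = np.append(n, d)     # (n_x, n_y, n_z, d)
    return (n @ l + d) * np.eye(4) - np.outer(l_bar, plane)

# Hypothetical example: plane z = 0, light at (0, 0, 5), vertex at (1, 2, 3).
M = shadow_projection_matrix(n=[0, 0, 1], d=0.0, l=[0, 0, 5])
p = M @ np.array([1.0, 2.0, 3.0, 1.0])
p = p / p[3]  # dehomogenize -> (2.5, 5.0, 0.0), on the plane z = 0
```

Expanding $(\mathbf{n}\cdot\mathbf{l}+d)\,\mathbf{I}_4 - \bar{\mathbf{l}}\,(\mathbf{n},d)^{\mathsf T}$ entry by entry reproduces the matrix of Equation (4).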
3. EXPERIMENTAL RESULTS AND DISCUSSIONS
Using the cube illumination system, several objects and their cast shadows are captured. A total of
32×32 = 1024 sample images is acquired, and Figures 5(c) and 6(c) each show one of the 1024 sample
images for the two objects. Each shows both an object and its cast shadow.
Figure 5 compares a real scene and a rendered scene. Figures 5(a) and 5(b) show environment
maps having one-pixel and nine-pixel lights, respectively. Figures 5(c) and 5(d) are a sampled
view and a 9-light rendered view of a toy airplane illuminated from environment maps shown in
Figures 5(a) and 5(b), respectively. Figures 5(e) and 5(f) show the zoomed views of the object
and Figures 5(g) and 5(h) show the zoomed views of the shadows. Figure 5 shows that
specularities, cast shadows and attached shadows appear more softly in the rendered image with 9
adjacent lights than in the one-light sampled image. As shown in the wing region of the airplane
in Figures 5(e) and 5(f), the self-shadows in the sampled image look sharper than those in the
rendered image with 9 light sources. Figure 6 shows rendered results of a toy tree under novel
illumination. It can be seen in Figure 6 that the specularity in the base region of the toy tree is
substantially more dispersed in the 9-light rendered image than in the one-light sampled image.
The shadow is much softer in the 9-light rendered image.
Figure 7 shows the rendered images with a synthetic scene under a novel illumination. Figures 7(a)
and 7(c) show the cast shadows on the synthetic scene under the illumination given by the
environment maps shown in Figures 7(b) and 7(d), respectively. Figure 7(c) shows that the soft
shadow is realistically rendered.
Only one light array is used on the top side of the light cube in our implementation and
experiments. The construction of a more complete light cube with up to six light arrays and
multiple shadow sampling planes is a subject of our future work. In this paper, we show the
results of shadow rendering only under simple sets of adjacent lighting elements. However, any
complex environment map in the form of a cubemap can be used to generate novel illumination.
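Relighting from an environment map reduces to a weighted sum over the sampled images. A minimal sketch follows; the array sizes, random stand-in images, and the environment map are hypothetical, standing in for the paper's captured data:

```python
import numpy as np

# Stand-ins: one sampled image per 32x32 light pixel (8x8 images for brevity)
# and an environment map over the top face of the light cube.
rng = np.random.default_rng(1)
samples = rng.random((32, 32, 8, 8))   # [light_row, light_col, y, x]
env = np.zeros((32, 32))
env[10:13, 10:13] = 1.0                # a 9-pixel adjacent area light

# The image under this illumination is the linear combination of sampled
# images weighted by the corresponding light-pixel intensities.
weights = env / env.sum()              # normalize total light energy
rendered = np.tensordot(weights, samples, axes=([0, 1], [0, 1]))
```

With uniform weights over 9 adjacent pixels, the result is simply the average of those 9 sampled images, which is the soft-shadow effect reported for the 9-light renderings.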
This paper is focused only on shadow sampling and rendering. In the future, we intend to
compare several bases such as wavelet and spherical harmonics for their effectiveness in
representing specular and diffuse appearance as well as cast/attached shadows.
Figure 4. Shadow mapping from the shadow sampling plane (vertex v, projected point p,
projection matrix M, plane normal n, light direction l).
Figure 5. Rendering in the shadow sampling plane under a novel illumination: (a) cubemap
(single light pixel), (b) cubemap (9 light pixels), (c) rendering onto the shadow sampling
plane, (d) rendering onto the shadow sampling plane, (e) object appearance, (f) object
appearance, (g) real shadow appearance, (h) shadow appearance.
Figure 6. Rendering in the shadow sampling plane under a novel illumination with a 1-pixel
light and a 9-pixel light, respectively: (a) rendering onto the shadow sampling plane, (b)
rendering onto the shadow sampling plane, (c) object appearance, (d) object appearance, (e)
real shadow appearance, (f) shadow appearance.
4. CONCLUSIONS
We present an appearance-based method for representing shadows and rendering them into a
synthetic scene without explicit geometric modeling of the shadow-casting object. The method
utilizes the special geometry of cubemap-like illumination with closely packed area light
elements and the resolution of synthesized shadows degrades gracefully without aliasing as the
resolution of the cubemap lighting decreases. We constructed a cube light with a light array on
its top and conducted experiments to show the efficacy of the presented approach.
ACKNOWLEDGEMENTS
The second author's work was supported by the National Research Foundation of Korea (NRF)
grant funded by the Korea government (MOE) No. NRF-2012R1A1A2009461.
Figure 7. Shadow rendering: (a) shadow rendering result, (b) environment map with 1-pixel
light, (c) shadow rendering result, (d) environment map with 4-pixel light.
REFERENCES
[1] A. Shashua, (1997) “On photometric issues in 3D visual recognition from a single image,” Int. J.
Computer Vision, Vol. 21, pp. 99–122.
[2] H. Murase and S. Nayar, (1995) “Visual learning and recognition of 3-D objects from
appearance,” Int. J. Computer Vision, Vol. 14, No. 1, pp. 5–24.
[3] P. Hallinan, (1994) “A low-dimensional representation of human faces for arbitrary lighting
conditions,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 995–999.
[4] A. Yuille, D. Snow, R. Epstein, and P. Belhumeur, (1999) “Determining generative models of
objects under varying illumination: Shape and albedo from multiple images using SVD and
integrability,” Int. J. Computer Vision, Vol. 35, No. 3, pp. 203–222.
[5] A. Georghiades, D. Kriegman, and P. Belhumeur, (1998) “Illumination cones for recognition
under variable lighting: Faces,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp.
52–59.
[6] A. Georghiades, D. Kriegman, and P. Belhumeur, (2001) “From few to many: Generative models
for recognition under variable pose and illumination,” IEEE Trans. Pattern Analysis and Machine
Intelligence, Vol. 23, No. 6, pp. 643–660.
[7] I. Sato, T. Okabe, Y. Sato, and K. Ikeuchi, (2003) “Appearance sampling for obtaining a set of basis
images for variable illumination,” Proc. IEEE Int. Conf. Computer Vision 2003, pp. 800–807, Nice,
France.
[8] R. Basri and D. Jacobs, (2001) “Lambertian Reflectance and Linear Subspaces,” in Proc. IEEE Intl.
Conf. Computer Vision, pp. 383–389.
[9] R. Ramamoorthi and P. Hanrahan, (2001) “A signal-processing framework for inverse rendering,”
in Proc. SIGGRAPH’01, pp. 117–128.
[10] K. C. Lee, J. Ho, and D. Kriegman, (2001) “Nine points of light: Acquiring subspaces for face
recognition under variable lighting,” in Proc. IEEE Conf. Computer Vision and Pattern
Recognition 01, pp. 519–526.
[11] J. S. Nimeroff, E. Simoncelli, and J. Dorsey, (1994) “Efficient rerendering of naturally illuminated
environments,” in 5th Eurographics Workshop on Rendering, pp. 359–373, Darmstadt, Germany.
[12] P. Sloan, J. Kautz, and J. Snyder, (2002) “Precomputed radiance transfer for real-time rendering in
dynamic, low-frequency lighting environments,” ACM Trans. Graphics, Vol. 21, No. 3, pp. 527–
536.
[13] R. Ng, R. Ramamoorthi, and P. Hanrahan, (2003) “All-frequency shadows using non-linear wavelet
lighting approximation,” ACM Trans. Graphics, Vol. 22, No. 3, pp. 376–381.
[14] R. Ng, R. Ramamoorthi, and P. Hanrahan, (2004) “Triple product integrals for all-frequency
relighting,” ACM Trans. Graphics, Vol. 23, No. 3, pp. 477–487.
[15] K. Zhou, Y. Hu, S. Lin, B. Guo, and H. Shum, (2005) “Precomputed shadow fields for dynamic
scenes,” ACM Trans. Graphics, Vol. 24, No. 3, pp. 1196–1201.
[16] T. Okabe, I. Sato and Y. Sato, (2004) “Spherical harmonics vs. Haar wavelets: basis for recovering
illumination from cast shadows,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition
2004, pp. I-50-57.
[17] T. Akenine-Möller and E. Haines, (2002) Real-Time Rendering, 2nd ed., Natick, MA, USA, A K Peters.
Authors
Dong-O Kim received the B.S. and M.S. degrees in electronic engineering from Sogang University, Seoul,
Korea, in 1999 and 2001, respectively. Currently, he is working toward the Ph.D. degree in electronic
engineering at Sogang University. His current research interests are image quality assessment and
physics-based computer vision for computer graphics.
Rae-Hong Park was born in Seoul, Korea, in 1954. He received the B.S. and M.S. degrees in electronics
engineering from Seoul National University, Seoul, Korea, in 1976 and 1979, respectively, and the M.S.
and Ph.D. degrees in electrical engineering from Stanford University, Stanford, CA, in 1981 and 1984,
respectively. In 1984, he joined the faculty of the Department of Electronic Engineering, Sogang
University, Seoul, Korea, where he is currently a Professor. In 1990, he spent his sabbatical year as a
Visiting Associate Professor with the Computer Vision Laboratory, Center for Automation Research,
University of Maryland at College Park. In 2001 and 2004, he spent sabbatical semesters at Digital Media
Research and Development Center (DTV image/video enhancement), Samsung Electronics Co., Ltd.,
Suwon, Korea. In 2012, he spent a sabbatical year in Digital Imaging Business (R&D Team) and Visual
Display Business (R&D Office), Samsung Electronics Co., Ltd., Suwon, Korea. His current research
interests are video communication, computer vision, and pattern recognition. He served as Editor for the
Korea Institute of Telematics and Electronics (KITE) Journal of Electronics Engineering from 1995 to
1996. Dr. Park was the recipient of a 1990 Post-Doctoral Fellowship presented by the Korea Science and
Engineering Foundation (KOSEF), the 1987 Academic Award presented by the KITE, the 2000 Haedong
Paper Award presented by the Institute of Electronics Engineers of Korea (IEEK), the 1997 First Sogang
Academic Award, and the 1999 Professor Achievement Excellence Award presented by Sogang
University. He is a co-recipient of the Best Student Paper Award of the IEEE Int. Symp. Multimedia
(ISM 2006) and IEEE Int. Symp. Consumer Electronics (ISCE 2011).
Sang Wook Lee received the BS degree in electronic engineering from Seoul National University, Seoul,
in 1981, the MS degree in electrical engineering from the Korea Advanced Institute of Science and
Technology (KAIST), Seoul, in 1983, and the PhD degree in electrical engineering from the University of
Pennsylvania in 1991. He is currently a professor of media technology at Sogang University, Seoul. He
was an assistant professor in computer science and engineering at the University of Michigan (1994-
2000), a postdoctoral research fellow and a research associate in computer and information science at the
University of Pennsylvania (1991-1994), a researcher at the Korea Advanced Institute of Science and
Technology (1985-1986), a research associate at Columbia University (1984-1985), and a researcher at
LG Telecommunication Research Institute (1983-1984). His major field of interest is computer vision,
with an emphasis on BRDF estimation, optimization for computer vision, physics-based vision, color
vision, range sensing, range data registration, and media art.