This document summarizes point-based global illumination (PBGI), an algorithm for calculating indirect lighting in diffuse scenes. It describes how PBGI works in two steps: (1) generating a point cloud of surfels representing direct lighting, and (2) calculating incoming indirect light by projecting surfels onto hemispheres. The algorithm can be approximated for speed by clustering surfels and using microbuffers instead of hemispheres. PBGI is used widely in movie production for effects like color bleeding and simulating area lights. Extensions to handle non-diffuse materials are also discussed.
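As a rough illustration of the gathering step, the per-point gather can be sketched as a brute-force loop over surfels. The surfel fields and the disk solid-angle approximation below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def gather_irradiance(point, normal, surfels):
    """Approximate indirect irradiance at a shading point by summing
    contributions from a point cloud of surfels (disk-shaped samples).

    Each surfel is a dict with 'pos', 'normal', 'area', 'radiance'.
    This is a brute-force gather; production PBGI clusters distant
    surfels in an octree and rasterizes them into a microbuffer."""
    irradiance = np.zeros(3)
    for s in surfels:
        d = s['pos'] - point
        r2 = float(d @ d)
        if r2 < 1e-8:
            continue
        w = d / np.sqrt(r2)
        cos_recv = max(0.0, float(normal @ w))        # receiver cosine
        cos_emit = max(0.0, float(s['normal'] @ -w))  # surfel facing term
        # Approximate solid angle of a small disk of given area.
        solid_angle = s['area'] * cos_emit / (r2 + s['area'])
        irradiance += s['radiance'] * cos_recv * solid_angle
    return irradiance
```

A surfel facing the point from above contributes its radiance scaled by its solid angle; a surfel below the surface contributes nothing, since the receiver cosine clamps to zero.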
This talk provides additional details around the hybrid real-time rendering pipeline we developed at SEED for Project PICA PICA.
At Digital Dragons 2018, we presented how leveraging Microsoft's DirectX Raytracing enables intuitive implementations of advanced lighting effects, including soft shadows, reflections, refractions, and global illumination. We also dove into the unique challenges posed by each of those domains, discussed the tradeoffs, and evaluated where raytracing fits in the spectrum of solutions.
CEDEC 2018 - Towards Effortless Photorealism Through Real-Time Raytracing - Electronic Arts / DICE
Real-time raytracing holds the promise of simplifying rendering pipelines, eliminating artist-intensive workflows, and ultimately delivering photorealistic images. This talk by Tomasz Stachowiak provides a glimpse of the future through the lens of SEED's PICA PICA demo: a game made for artificial intelligence agents, with procedural level assembly, and no precomputation. We dive into technical details of several advanced rendering algorithms, and discuss how Microsoft's DirectX Raytracing technology allows for their intuitive implementation. Several challenges remain -- we will take a look at some of them, discuss how real-time raytracing fits in the spectrum of solutions, and start to plot the course towards robust and artist-friendly image synthesis.
With the highest-quality video options, Battlefield 3 renders its Screen-Space Ambient Occlusion (SSAO) using the Horizon-Based Ambient Occlusion (HBAO) algorithm. For performance reasons, the HBAO is rendered in half resolution using half-resolution input depths. The HBAO is then blurred in full resolution using a depth-aware blur. The main issue with such low-resolution SSAO rendering is that it produces objectionable flickering for thin objects (such as alpha-tested foliage) when the camera and/or the geometry are moving. After a brief recap of the original HBAO pipeline, this talk describes a novel temporal filtering algorithm that fixed the HBAO flickering problem in Battlefield 3 with a 1-2% performance hit at 1920x1200 on PC (DX10 or DX11). The talk includes algorithm and implementation details on the temporal filtering part, as well as generic optimizations for SSAO blur pixel shaders. This is joint work between Louis Bavoil (NVIDIA) and Johan Andersson (DICE).
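The talk's exact filter is not reproduced here, but the general shape of depth-validated temporal accumulation can be sketched as follows; the parameter names and the rejection test are assumptions, not the shipped implementation:

```python
import numpy as np

def temporal_filter_ao(current_ao, history_ao, current_depth, history_depth,
                       blend=0.1, depth_tolerance=0.02):
    """Hypothetical sketch of temporal SSAO filtering: blend the current
    noisy AO with the reprojected history, but fall back to the current
    frame wherever the reprojected depth no longer matches (disocclusion).
    This suppresses frame-to-frame flicker without smearing moving
    geometry. All inputs are per-pixel arrays."""
    rel_err = np.abs(current_depth - history_depth) / np.maximum(current_depth, 1e-6)
    valid = rel_err < depth_tolerance
    # Exponential moving average where history is valid, hard restart elsewhere.
    blended = (1.0 - blend) * history_ao + blend * current_ao
    return np.where(valid, blended, current_ao)
```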
The presentation describes the physically based lighting pipeline of Killzone: Shadow Fall, a PlayStation 4 launch title. The talk covers the studio's transition to a new asset creation pipeline based on physical properties. It also describes the light rendering systems used in the new 3D engine, built from the ground up for the upcoming PlayStation 4 hardware. A novel real-time lighting model simulating physically accurate area lights will be introduced, as well as a hybrid ray-traced / image-based reflection system.
We believe that physically based rendering is a viable way to improve both the efficiency and the quality of the asset creation pipeline. It also lifts rendering quality to a new level while remaining flexible to art direction requirements.
A Certain Slant of Light - Past, Present and Future Challenges of Global Illu... - Electronic Arts / DICE
Global illumination (GI) has been an ongoing quest in games. The perpetual tug-of-war between visual quality and performance often forces developers to take the latest and greatest from academia and tailor it to push the boundaries of what has been realized in a game product. Many elements need to align for success, including image quality, performance, scalability, interactivity, ease of use, as well as game-specific and production challenges.
First we will paint a picture of the current state of global illumination in games, addressing how the state of the union compares to the latest and greatest research. We will then explore various GI challenges that game teams face from the art, engineering, pipelines and production perspective. The games industry lacks an ideal solution, so the goal here is to raise awareness by being transparent about the real problems in the field. Finally, we will talk about the future. This will be a call to arms, with the objective of uniting game developers and researchers on the same quest to evolve global illumination in games from being mostly static, or sometimes perceptually real-time, to fully real-time.
This presentation was given at SIGGRAPH 2017 by Colin Barré-Brisebois (EA SEED) as part of the Open Problems in Real-Time Rendering course.
Rendering Technologies from Crysis 3 (GDC 2013) - Tiago Sousa
This talk covers changes in CryENGINE 3 technology during 2012, with DX11-related topics such as moving to deferred rendering while maintaining backward compatibility on a multiplatform engine, massive vegetation rendering, and MSAA support and how to deal with its common visual artifacts, among other topics.
For this year's keynote at High Performance Graphics 2018, Colin Barré-Brisebois from SEED discussed the state of the art in real-time game ray tracing. He explored some of the connections between offline and real-time game ray tracing, and presented some of the open problems. Colin exposed a few potential solutions to those problems, and also proposed a call-to-arms on topics where the ray tracing research community and the games industry should unite in order to solve such open problems.
SIGGRAPH 2018 - Full Rays Ahead! From Raster to Real-Time Raytracing - Electronic Arts / DICE
In this presentation, part of the "Introduction to DirectX Raytracing" course, Colin Barré-Brisebois of SEED discusses some of the challenges the team faced when moving from raster to real-time raytracing for Project PICA PICA.
Course presentation at SIGGRAPH 2014 by Charles de Rousiers and Sébastien Lagarde of Electronic Arts about transitioning the Frostbite game engine to physically-based rendering.
Make sure to check out the 118 page course notes on: http://www.frostbite.com/2014/11/moving-frostbite-to-pbr/
During the last few months, we have revisited the concept of image quality in Frostbite. The core of our approach was to get as close as possible to a cinematic look, using reference images to evaluate the accuracy of the images we produce. Physically based rendering (PBR) was the natural way to achieve this. This talk covers all the steps needed to switch a production engine to PBR, including the small details often bypassed in the literature.
State-of-the-art real-time PBR techniques allowed us to achieve good overall results, but not without production issues. We present techniques for improving convolution times for image-based reflections, proper ambient occlusion handling, and the coherent lighting units that are mandatory for level editing.
Moreover, we have managed to reduce the quality gap highlighted by our systematic reference comparisons, in particular for rough material handling, glossy screen-space reflections, and area lighting.
The technical side of PBR is crucial for achieving good results, but it is only the tip of the iceberg. Frostbite has become the de facto high-end game engine within Electronic Arts and is now used by a large number of game teams. Moving all these teams from old-fashioned lighting to PBR has required a lot of education, done in parallel with the technical development. We have provided editing and validation tools to help the transition of art production. In addition, we have built a flexible material parametrisation framework to adapt to the various authoring tools and game teams' requirements.
This session presents a detailed overview of the new lighting system implemented in DICE's Frostbite 2 engine and how it enables us to stretch the boundaries of lighting in BATTLEFIELD 3, with its highly dynamic, varied and destructible environments.
BATTLEFIELD 3 goes beyond the lighting limitations of our previous Battlefield games, avoiding costly, static prebaked lighting without compromising quality.
We discuss the technical implementation of the art direction in BATTLEFIELD 3, the workflows we created for it, and how all the individual lighting components fit together: deferred rendering, HDR, dynamic radiosity and particle lighting.
FlameWorks presentation from NVIDIA GTC 2014.
Learn how to add volumetric effects to your game engine - smoke, fire and explosions that are interactive, more realistic, and can actually render faster than traditional sprite-based techniques. Volumetrics remain one of the last big differences between real-time and offline visual effects. In this talk we will show how volumetric effects are now practical on current GPU hardware. We will describe several new simulation and rendering techniques, including new solvers, combustion models, optimized ray marching and shadows, which together can make volumetric effects a practical alternative to particle-based methods for game effects.
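The rendering side of such effects centers on ray marching, which can be illustrated with a minimal 1-D sketch; the extinction coefficient, early-out threshold, and emission model below are illustrative assumptions, not FlameWorks' implementation:

```python
import numpy as np

def ray_march(density, emission, step=1.0, sigma_t=0.5):
    """Minimal 1-D sketch of volumetric ray marching: walk samples
    front to back, attenuating by accumulated transmittance
    (Beer-Lambert) and accumulating emitted radiance.
    `density` and `emission` are per-sample arrays along the ray."""
    transmittance = 1.0
    radiance = 0.0
    for rho, e in zip(density, emission):
        absorb = np.exp(-sigma_t * rho * step)
        # Each segment emits in proportion to its opacity (1 - absorb),
        # weighted by the transmittance accumulated so far.
        radiance += transmittance * (1.0 - absorb) * e
        transmittance *= absorb
        if transmittance < 1e-4:  # early out once the ray is effectively opaque
            break
    return radiance, transmittance
```

An empty ray returns zero radiance with full transmittance, while a dense emissive medium converges to the emission value as the ray saturates.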
Rendering Techniques in Rise of the Tomb Raider - Eidos-Montréal
This cohesive overview of the advanced rendering techniques developed for Rise of the Tomb Raider presents a collection of diverse features, the challenges they presented, where current approaches succeed and fail, and solutions and implementation details.
Talk by Fabien Christin from DICE at GDC 2016.
Designing a big city that players can explore by day and by night, while improving on the unique visual style of the first Mirror's Edge game, isn't an easy task.
In this talk, the tools and technology used to render Mirror's Edge: Catalyst will be discussed. From the physical sky to the reflection tech, the speakers will show how they tamed the new Frostbite 3 PBR engine to deliver realistic images with stylized visuals.
They will talk about the artistic and technical challenges they faced and how they tried to overcome them, from the simple light settings and Enlighten workflow to character shading and color grading.
Takeaway
Attendees will gain insight into the technical and artistic techniques used to create a dynamic time-of-day system with updating radiosity and reflections.
Intended Audience
This session is targeted to game artists, technical artists and graphics programmers who want to know more about Mirror's Edge: Catalyst rendering technology, lighting tools and shading tricks.
In a turbid water environment, scattering and absorption caused by suspended particles affect the propagation of light. This degrades captured images through poor visibility and low object contrast. Image restoration in turbid water involves estimating the amount of backscattering; once estimated correctly, it can be removed from the total radiance of an image to recover the scene radiance. Our study proposes a simple method for backscattered light estimation and image restoration based on the dark channel prior (DCP) [1] and the light channel prior [2]. We show that the backscattered light in a degraded underwater image can be estimated by simply computing its light channel image and using it directly as the backscattered light map in the restoration process. The proposed approach is suitable for uniformly illuminated underwater turbid scenes. We tested our method on underwater images from the TURBID dataset [3] and compared it to standard underwater image restoration algorithms such as the underwater dark channel prior (UDCP) method. The experimental results show that the proposed method yields better restoration than state-of-the-art methods at very low computational load, owing to the simplicity of its backscattered light estimation.
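A minimal sketch of the described pipeline follows. The patch size and the crude transmission estimate t = 1 - B are illustrative assumptions, not the paper's exact steps:

```python
import numpy as np

def light_channel(img, patch=7):
    """Per-pixel light channel: the maximum intensity over all color
    channels within a local patch (the bright counterpart of the dark
    channel prior). img is HxWx3 with values in [0, 1]."""
    h, w, _ = img.shape
    chan_max = img.max(axis=2)
    r = patch // 2
    padded = np.pad(chan_max, r, mode='edge')
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].max()
    return out

def restore(img, patch=7, t_min=0.1):
    """Illustrative restoration in the spirit of the described method:
    use the light channel directly as the backscatter map B, derive a
    crude transmission t = 1 - B, and invert the image formation model
    I = J*t + B*(1-t) to recover scene radiance J."""
    B = light_channel(img, patch)[..., None]
    t = np.clip(1.0 - B, t_min, 1.0)
    J = (img - B * (1.0 - t)) / t
    return np.clip(J, 0.0, 1.0)
```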
What is global illumination, and what techniques are used to approximate it in real-time applications? The talk briefly covers algorithms such as instant radiosity, light propagation volumes and voxel cone tracing. Additional details are in the slide notes.
Paper introduction: "DynamicFusion: Reconstruction and Tracking of Non-rigid Scenes in Real..." - Ken Sakurada
An introduction to the CVPR 2015 Best Paper Award winner,
"DynamicFusion: Reconstruction and Tracking of Non-rigid Scenes in Real-Time"
by Richard A. Newcombe, Dieter Fox and Steven M. Seitz.
If you notice any issues with the content, please contact the email address listed in the slides.
Speeding up probabilistic inference of camera orientation by function ap... - Nicolau Werneck
Slides from my presentation at the WSCG2011. Describes some modifications to existing techniques for camera orientation estimation in "Manhattan Worlds" aiming at faster calculation times.
WAVELET DECOMPOSITION AND ALPHA STABLE FUSION - sipij
This article presents a new method for fusing multifocal images that combines the Laplacian pyramid and the wavelet decomposition, using the alpha-stable distance as a selection rule. We start by decomposing the multifocal images into several pyramid levels, then apply the wavelet decomposition to each level. The originality of this work is the use of the alpha-stable distance to fuse the wavelet images at each level of the pyramid. To obtain the final fused image, we reconstruct the combined image at each level of the pyramid. We compare our method to other existing methods in the literature and find that it generally performs better.
Depth of Field Image Segmentation Using Saliency Map and Energy Mapping Techn... - ijsrd.com
Depth of field plays a vital role in image processing, where it can be used to segment the relevant object from an image. Depth of field is the space between the nearest and farthest objects in a scene that appear in focus. The objective of this work is to segment images with a low depth of field, using unsupervised segmentation to find the low-depth-of-field region. A saliency map and a curve evaluation method are created and initialized for the image, and an energy map is employed to obtain the desired result. A Lipschitz function is used to generate the mathematical representation, and several iteration methods illustrate the graphical behavior on an image. The segmented results demonstrate object detection in an image.
A Novel and Robust Wavelet based Super Resolution Reconstruction of Low Resol... - CSCJournals
High-resolution images can be reconstructed from several blurred, noisy and aliased low-resolution images using a computational process known as super resolution reconstruction: the process of combining several low-resolution images into a single higher-resolution image. In this paper we concentrate on a special case of the super resolution problem where the warp consists of pure translation and rotation, the blur is space-invariant, and the noise is additive white Gaussian noise. Super resolution reconstruction consists of registration, restoration and interpolation phases. Once the low-resolution images are registered with respect to a reference frame, wavelet-based restoration is performed to remove the blur and noise from the images; finally, the images are interpolated using adaptive interpolation. We propose an efficient wavelet-based denoising with adaptive interpolation for super resolution reconstruction. Under this framework, the low-resolution images are decomposed into many levels to obtain different frequency bands. Our proposed novel soft-thresholding technique is then used to remove the noisy coefficients by fixing an optimum threshold value. To obtain an image of higher resolution, we propose an adaptive interpolation technique. Our proposed approach preserves edges and smooths the image without introducing artifacts. Experimental results show that the proposed approach succeeds in obtaining a high-resolution image with high PSNR and ISNR values and good visual quality.
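The soft-thresholding step at the heart of such a denoiser can be sketched with a one-level Haar transform; the fixed threshold here is illustrative, and the paper's adaptive optimum-threshold selection is not reproduced:

```python
import numpy as np

def soft_threshold(coeffs, thresh):
    """Soft thresholding: shrink wavelet detail coefficients toward
    zero by `thresh`, zeroing anything smaller in magnitude."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - thresh, 0.0)

def haar_denoise_1d(signal, thresh):
    """One-level orthonormal Haar decomposition, soft-threshold the
    detail band, then reconstruct. Assumes an even-length signal."""
    x = np.asarray(signal, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    detail = soft_threshold(detail, thresh)
    out = np.empty_like(x)
    out[0::2] = (approx + detail) / np.sqrt(2.0)
    out[1::2] = (approx - detail) / np.sqrt(2.0)
    return out
```

A smooth signal has near-zero detail coefficients and passes through unchanged, while small noisy details are shrunk away.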
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology is pushing into IT I was wondering myself, as an “infrastructure container kubernetes guy”, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefit’s both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Elevating Tactical DDD Patterns Through Object Calisthenics
Introduction to Point Based Global Illumination (PBGI)
1. Point Based Global Illumination
Karsten Daemen
KU Leuven
June 30, 2014
Karsten Daemen (KU Leuven) Point Based Global Illumination June 30, 2014 1 / 41
2. Table of Contents
1 Global Illumination
2 Point Based Global Illumination
The algorithm
Step I: Generation of the point cloud
Step II: Calculating the Global Illumination
Rendering Equation PBGI
Approximated Point Based Global Illumination
Clustering of the surfels
Projection on the hemisphere
Applications
3 Non-Diffuse Point Based Global Illumination
Non-diffuse materials and PBGI
The Non-Diffuse PBGI algorithm
4 References
3. Section I: Global Illumination
Figure: Image property of Disney/Pixar © [Seymour, 2012]
5. Figure: Direct lighting only, image property of Dreamworks © [Krivanek et al., 2010]
6. Figure: With indirect lighting, image property of Dreamworks © [Krivanek et al., 2010]
7. Section II: Point Based Global Illumination
Figure: Property of Disney/Pixar © [Christensen, 2010b]
8. What is Point Based Global Illumination?
A two-step rendering algorithm to calculate the indirect lighting in a diffuse scene.
1 Step I: The first step generates a diffuse point cloud from the scene, consisting of surfels.
2 Step II: The second step calculates the global illumination with the help of the diffuse point cloud.
10. Step I: Generation of the diffuse point cloud
The diffuse point cloud is a point-based representation of the reflected direct light in the scene. This is achieved by discretizing the surfaces of the scene and approximating the reflected diffuse light of these surfaces through surfels.
Figure: Diffuse point cloud of the demo scene.
11. Step I: The diffuse surfels
A surfel is a tiny colored disk representing the outgoing radiance of a tiny surface area. A surfel therefore consists of a normal, a radius, a position, and a radiance value.
Figure: Surfel representation of the diffuse teapot in the demo scene.
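Such a surfel can be sketched as a small data structure. The class below is an illustrative Python sketch (the field and method names are mine, not from the talk); the solid-angle estimate is included because later slides use it for clustering decisions.

```python
import math
from dataclasses import dataclass

@dataclass
class Surfel:
    # A tiny oriented disk approximating the outgoing radiance of a small surface patch.
    position: tuple   # disk centre (x, y, z)
    normal: tuple     # unit surface normal
    radius: float     # disk radius
    radiance: float   # outgoing radiance (a single scalar for a diffuse surfel)

    def area(self) -> float:
        return math.pi * self.radius ** 2

    def solid_angle_from(self, p: tuple) -> float:
        # Solid angle the disk subtends as seen from point p,
        # approximated as (projected area) / distance^2.
        d = [self.position[i] - p[i] for i in range(3)]
        dist2 = sum(c * c for c in d)
        dist = math.sqrt(dist2)
        # cosine between the disk normal and the direction towards p
        cos_t = abs(sum(self.normal[i] * (-d[i] / dist) for i in range(3)))
        return self.area() * cos_t / dist2
```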
12. Step II: Calculating the incoming indirect light
Since the surfels in the diffuse point cloud represent the reflected radiance from all surfaces in the scene, the incoming indirect lighting at a certain point can be generated by projecting all the individual surfels onto the hemisphere of that point.
Figure: Point Based Global Illumination
13. Step II: Projection of the surfels
During this projection process, overlapping of surfels has to be taken into account: a surfel will block the radiance from other, more distant overlapping surfels.
Figure: Projection of overlapping surfels.
Figure: 2D representation of the hemisphere of Figure 8 with the projected surfels.
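The depth-tested projection can be sketched as follows. This is a deliberately crude Python illustration under stated assumptions (surfels reduced to (position, radiance) tuples, a coarse direction grid standing in for the hemisphere); the point is only that, per direction bin, the nearest surfel wins and occludes the rest.

```python
import math

def gather_hemisphere(point, normal, surfels, n_bins=8):
    """Project surfels onto a crude direction-binned hemisphere.
    surfels: list of (position, radiance) tuples.
    Overlap is resolved with a depth test: per direction bin, only the
    nearest surfel's radiance is kept, so it blocks more distant ones."""
    bins = {}  # bin key -> (distance, radiance)
    for pos, radiance in surfels:
        d = [pos[i] - point[i] for i in range(3)]
        dist = math.sqrt(sum(c * c for c in d))
        dirn = [c / dist for c in d]
        # discard surfels below the horizon of the shading point
        if sum(dirn[i] * normal[i] for i in range(3)) <= 0.0:
            continue
        # quantize the direction into a (very coarse) 2D bin index
        key = (int((dirn[0] * 0.5 + 0.5) * n_bins),
               int((dirn[1] * 0.5 + 0.5) * n_bins))
        if key not in bins or dist < bins[key][0]:
            bins[key] = (dist, radiance)
    return bins
```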
14. Step II: Convolution with the BRDF
Once all the incoming indirect (and direct) radiance at a point is known, one only needs to convolve the incoming radiance with the local BRDF to get the outgoing radiance towards the camera.
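For a diffuse BRDF (albedo/π) this convolution reduces to a cosine-weighted sum over the hemisphere. A minimal sketch, assuming the hemisphere has already been discretized into samples:

```python
import math

def diffuse_outgoing(albedo, samples):
    """samples: list of (incoming_radiance, cos_theta, solid_angle) tuples
    covering the hemisphere.  For a diffuse BRDF the convolution with the
    incoming radiance is just (albedo / pi) times the irradiance."""
    irradiance = sum(L * cos_t * dw for (L, cos_t, dw) in samples)
    return (albedo / math.pi) * irradiance
```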
15. Rendering Equation PBGI
More formally, the PBGI algorithm calculates the outgoing radiance L(p, s_o) at point p in direction s_o according to the following formula [Daemen, 2014]:

L(p, s_o) = L_e(p, s_o) + \sum_{q \in P_d} f(p, s_o \to s_q) \, l_q^b \, V(p, q) \, G(p, q) + l_{\mathrm{direct}}(p, s_o). \quad (1)

with the per-bounce surfel radiance defined recursively as

l_p^b = l_p^{b-1} + \sum_{q \in P_d} \rho_p \, l_q^{b-1} \, V(p, q) \, G(p, q), \qquad l_p^0 = l_{\mathrm{direct}}(p, \ldots)
16. It can be shown [Daemen, 2014] that equation (1) can be derived from the rendering equation [Kajiya, 1986]:

L_i^{\mathrm{tot}}(p) = \int_S L(q, s_q) \, G(p, q) \, d\omega_{s_q}. \quad (2)

This means that PBGI, as described here, is an unbiased algorithm for diffuse scenes: the more surfels are generated, the more the approximate indirect lighting will converge to the real indirect lighting of the scene.
17. Approximated Point Based Global Illumination
The unbiased PBGI algorithm is, however, very impractical: the projection process on the hemispheres is complicated and slow. Therefore, in practice, the Approximated PBGI algorithm is used. This algorithm adds the following alterations to the unbiased PBGI algorithm:
Clustering of the surfels
Projection on the hemisphere
These alterations considerably speed up the algorithm, but Approximated PBGI is no longer unbiased.
18. Clustering of the surfels
Projecting every individual surfel on every hemisphere would take too much time. Distant surfels are therefore approximated by a clustered surfel: a surfel that combines different individual surfels or other, smaller clustered surfels. These clustered surfels are generated in Step I and stored with the individual surfels in an octree hierarchy.
Figure: Individual surfels. Figure: Single clustered surfel.
19. During the projection process of Step II, a cutting algorithm traverses the octree hierarchy of the point cloud and decides which clustered or individual surfel to use. This decision is made based on the solid angle occupied by the surfel.
Figure: Clustering point cloud in octree
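The cut can be sketched as a recursive traversal. The Node layout and the disk-based solid-angle heuristic below are illustrative assumptions, not the exact production criterion: a cluster is accepted when it subtends a small enough solid angle from the shading point, otherwise we descend into its children.

```python
import math

class Node:
    """Octree node: a clustered surfel with optional children."""
    def __init__(self, position, radius, radiance, children=()):
        self.position, self.radius = position, radius
        self.radiance, self.children = radiance, children

def cut(node, p, max_solid_angle):
    """Return the list of (clustered or individual) surfels to use when
    shading point p.  A cluster is used as-is when the solid angle it
    subtends from p is below the threshold; leaves are always used."""
    d2 = sum((node.position[i] - p[i]) ** 2 for i in range(3))
    omega = math.pi * node.radius ** 2 / d2   # crude disk solid angle
    if omega <= max_solid_angle or not node.children:
        return [node]
    out = []
    for c in node.children:
        out.extend(cut(c, p, max_solid_angle))
    return out
```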
20. In the case that the radiance values of certain surfels are too different to be averaged into a single radiance value, the radiance of the clustered surfel is instead represented through a spherical harmonic function. The radiance value of that clustered surfel is then dependent on the direction it is viewed from, which increases accuracy.
Figure: Approximate nodes in the octree as a new surfel or a spherical harmonic.
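As a toy illustration of such direction-dependent radiance, the sketch below projects a radiance function onto the first two real spherical-harmonic bands (4 coefficients) and evaluates it back. The basis constants are standard; the crude 6-direction sampling scheme is my own simplification, not what production implementations use.

```python
import math

def sh_basis(d):
    """Real spherical-harmonic basis, bands 0 and 1, for unit direction d."""
    x, y, z = d
    k1 = math.sqrt(3.0 / (4.0 * math.pi))
    return [0.5 * math.sqrt(1.0 / math.pi), k1 * y, k1 * z, k1 * x]

def project(radiance_fn, dirs, weight):
    """Numerically project a directional radiance function onto the basis.
    dirs: unit directions; weight: solid angle carried by each direction."""
    coeffs = [0.0] * 4
    for d in dirs:
        b = sh_basis(d)
        for i in range(4):
            coeffs[i] += radiance_fn(d) * b[i] * weight
    return coeffs

def evaluate(coeffs, d):
    """Reconstruct the radiance in direction d from the SH coefficients."""
    return sum(c * b for c, b in zip(coeffs, sh_basis(d)))
```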
21. Projection on the hemisphere
Projecting surfels on a perfect hemisphere is a mathematically complex operation that requires the use of several computationally heavy functions (e.g. sin(), sqrt(), ...). This operation becomes even more impractical when taking into account that overlapping surfels should be detected.
The approximate PBGI algorithm of [Christensen, 2010a] therefore uses a microbuffer as an approximation of the hemisphere. A microbuffer can be viewed as 6 framebuffers placed on the faces of an axis-aligned cube. The surfels are projected onto the pixels of these buffers.
23. The final outgoing radiance at the points of the microbuffer can be calculated by convolving the local BRDF with the pixels of the microbuffer.
Figure: Microbuffers displaying the surfel projections and their location in the scene.
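A microbuffer can be sketched as six depth-buffered cube faces. The direction-to-face mapping below is the standard cube-map major-axis construction; the resolution, data layout, and function names are illustrative assumptions rather than the production design.

```python
import math

RES = 8  # pixels per microbuffer face side (illustrative choice)

def face_and_pixel(d):
    """Map a unit direction to one of the 6 cube faces and a pixel on it."""
    ax = max(range(3), key=lambda i: abs(d[i]))          # dominant axis
    face = ax * 2 + (0 if d[ax] >= 0 else 1)             # faces 0..5
    u, v = [d[i] / abs(d[ax]) for i in range(3) if i != ax]
    px = min(RES - 1, int((u * 0.5 + 0.5) * RES))
    py = min(RES - 1, int((v * 0.5 + 0.5) * RES))
    return face, px, py

def empty_buffers():
    """6 faces of RES x RES pixels, each holding (depth, radiance)."""
    return [[[(math.inf, 0.0)] * RES for _ in range(RES)] for _ in range(6)]

def splat(buffers, point, surfels):
    """Depth-tested splat of surfels (position, radiance) seen from point:
    per pixel the nearest surfel wins, so it occludes more distant ones."""
    for pos, radiance in surfels:
        d = [pos[i] - point[i] for i in range(3)]
        dist = math.sqrt(sum(c * c for c in d))
        f, px, py = face_and_pixel([c / dist for c in d])
        depth, _ = buffers[f][py][px]
        if dist < depth:
            buffers[f][py][px] = (dist, radiance)
```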
25. The benefits of approximate PBGI are:
Fast
Relatively low memory
No noise
Handles complex geometry (hair, explosions, displacements, ...)
In the following slides, we'll highlight the most common applications of Approximate PBGI.
27. Application: the first bounce of diffuse indirect light (Color Bleeding).
Figure: Direct lighting. Figure: 1st bounce indirect diffuse lighting. Figure: Demo scene with color bleeding.
28. Application: Multiple bounces
PBGI is not limited to single-bounce indirect lighting; multiple bounces can be generated by projecting the point cloud onto itself.
Figure: Direct lighting. Figure: Direct + 1st bounce indirect lighting. Figure: Direct + 1st and 2nd bounce indirect lighting. Figure: Direct + 1st, 2nd and 3rd bounce indirect lighting.
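Projecting the point cloud onto itself can be sketched as an iterative update: each pass scatters the previous bounce's radiance between surfels. For illustration the visibility and geometry terms V(p,q)·G(p,q) are assumed precomputed into a transfer matrix F, which is a simplification of the real projection step.

```python
def add_bounces(l_direct, albedo, F, n_bounces):
    """Accumulate multiple indirect bounces over the point cloud.
    l_direct: per-surfel direct radiance (bounce 0).
    albedo:   per-surfel diffuse reflectivity.
    F[p][q]:  precomputed transfer term standing in for V(p,q)*G(p,q).
    Returns the total per-surfel radiance after n_bounces extra bounces."""
    n = len(l_direct)
    total = list(l_direct)          # direct light = bounce 0
    prev = list(l_direct)
    for _ in range(n_bounces):
        # scatter the previous bounce through the transfer terms
        nxt = [albedo[p] * sum(F[p][q] * prev[q] for q in range(n))
               for p in range(n)]
        total = [total[p] + nxt[p] for p in range(n)]
        prev = nxt
    return total
```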
29. Application: Simulating area lights
The surfels do not necessarily have to represent only the reflected radiance of surfaces; they can also represent emitted radiance. PBGI is therefore able to simulate area lights.
Figure: Simulating area lights through the Stanford Bunny.
31. The final gather step of the photon mapping algorithm can also be processed through PBGI.
Figure: [Christensen, 2008], property of Pixar Inc.®
32. Movie example: Bolt (2008)
Figure: [Christensen, 2010b], property of Disney®
33. Movie example: Up (2009)
Figure: Point cloud from Up [Christensen, 2010b], property of Pixar Inc.®
34. Movie example: Up (2009)
Figure: Direct Illumination, [Christensen, 2010b], property of Pixar Inc.®
35. Movie example: Up (2009)
Figure: Global Illumination, [Christensen, 2010b], property of Pixar Inc.®
36. Section III: Non-Diffuse Point Based Global Illumination
37. Disclaimer: This section gives a quick overview of some of my personal research and is not used in the industry at the moment.
38. Non-diffuse materials and PBGI
What happens when there is a non-diffuse material in the scene? How will the surfels model the reflected radiance?
Figure: Reflected radiance of a surfel on a diffuse surface. Figure: Reflected radiance of a surfel on a non-diffuse surface.
39. The PBGI algorithm will approximate these surfaces with black surfels. These are surfels without any outgoing radiance; they can only block other surfels. The PBGI algorithm therefore ignores indirect light coming from non-diffuse surfaces.
Figure: Non-diffuse surfels are approximated with black surfels.
40. Non-diffuse indirect light
Figure: Direct lighting. Figure: Diffuse indirect lighting. Figure: Non-diffuse indirect lighting. Figure: Total lighting.
41. The Non-Diffuse PBGI algorithm
The proposed Non-Diffuse Point Based Global Illumination (NDPBGI) algorithm does not ignore the indirect light coming from non-diffuse surfaces; it handles it through the help of non-diffuse surfels.
Non-diffuse surfels are surfels that are able to express outgoing radiance that is dependent on the viewing direction. Their outgoing radiance can no longer be expressed through a single value, but instead through a Hemispherical Radiance Approximation (HRA).
An HRA is a function that returns the outgoing radiance of a surfel for a certain hemispherical direction.
42. Since each non-diffuse surfel has an HRA and there are several thousands of non-diffuse surfels in a point cloud, an HRA must be:
Compact: it may not take up much memory on the system.
Fast: it has to be evaluated quickly.
Accurate: it has to give an accurate approximation of the outgoing radiance of the surfel.
These three conditions are conflicting, so trade-offs have to be made. We developed an HRA based on von Mises-Fisher distributions and used this in our experiments.
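A normalized von Mises-Fisher lobe, and an HRA built as a weighted sum of such lobes, can be sketched as follows. The talk does not specify the exact fitted form used in the research, so this only shows the general shape of the representation.

```python
import math

def vmf(d, mu, kappa):
    """von Mises-Fisher lobe on the sphere: peaks along the mean direction
    mu, with sharpness controlled by kappa; normalized so its integral over
    all directions is 1."""
    dot = sum(d[i] * mu[i] for i in range(3))
    return kappa / (4.0 * math.pi * math.sinh(kappa)) * math.exp(kappa * dot)

def hra(d, lobes):
    """Hemispherical Radiance Approximation as a small weighted sum of vMF
    lobes (illustrative form): lobes is a list of (weight, mu, kappa)."""
    return sum(w * vmf(d, mu, kappa) for (w, mu, kappa) in lobes)
```

A sharper lobe (larger kappa) concentrates the radiance around mu, which is exactly the direction dependence a single scalar radiance value cannot express.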
43. Calculating the non-diffuse indirect light
A few examples of the calculated indirect light coming from non-diffuse surfaces by use of the Non-Diffuse PBGI algorithm:
Figure: Indirect non-diffuse lighting. Figure: Total lighting.
44. Figure: Indirect non-diffuse lighting. Figure: Diffuse indirect lighting.
45. Figure: Indirect non-diffuse lighting. Figure: Diffuse indirect lighting.
46. References
P. Christensen. Point-based approximate color bleeding. Pixar Technical Notes, 2(5):6, 2008.
Per H. Christensen. Point-based global illumination for movie production. SIGGRAPH Comput. Graph., July 2010a.
Per H. Christensen. Point-based global illumination for movie production, August 2010b. URL http://graphics.pixar.com/library/PointBasedGlobalIlluminationForMovieProduction/Slides.pdf.
K. Daemen. Punt-gebaseerde globale belichting voor niet-diffuse scenes [Point-based global illumination for non-diffuse scenes]. June 2014.
James T. Kajiya. The rendering equation. SIGGRAPH Comput. Graph., 20(4):143-150, August 1986. ISSN 0097-8930. doi: 10.1145/15886.15902.
Jaroslav Krivanek, Marcos Fajardo, Per H. Christensen, Eric Tabellion, Michael Bunnell, David Larsson, Anton Kaplanyan, B. Levy, and R. H. Zhang. Global illumination across industries. SIGGRAPH Courses, 2010.
Mike Seymour. The art of rendering (updated), 2012. URL http://www.fxguide.com/featured/the-art-of-rendering/.