Multi-Resolution Screen-Space Ambient Occlusion (MSSAO) computes ambient occlusion (AO) at multiple screen-space resolutions to improve quality. It downsamples the depth and normal buffers and computes AO at low resolution, then upsamples and blurs that AO, recomputes AO at higher resolutions, and combines the results. This captures both low- and high-frequency occlusion better than single-resolution AO. Tests show MSSAO produces higher-quality AO than other screen-space methods, with less noise and blur, at the cost of increased memory usage and potential temporal artifacts on thin geometry.
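The coarse-to-fine combine described above can be sketched roughly as follows. This is a hypothetical Python/NumPy illustration with a toy per-level occlusion kernel, not the actual MSSAO estimator; the function names and the min-combine rule are assumptions made for clarity.

```python
import numpy as np

def downsample(depth):
    """Halve resolution by taking the max depth in each 2x2 block
    (a common choice to preserve occluders)."""
    h, w = depth.shape
    d = depth[:h - h % 2, :w - w % 2]
    return np.maximum.reduce([d[0::2, 0::2], d[1::2, 0::2],
                              d[0::2, 1::2], d[1::2, 1::2]])

def upsample(ao, shape):
    """Nearest-neighbour upsample back to a target resolution."""
    return np.repeat(np.repeat(ao, 2, axis=0), 2, axis=1)[:shape[0], :shape[1]]

def simple_ao(depth):
    """Toy per-level AO: occlusion proportional to how far a pixel sits
    behind its local neighbourhood average (not a real SSAO/HBAO kernel)."""
    blurred = (np.roll(depth, 1, 0) + np.roll(depth, -1, 0) +
               np.roll(depth, 1, 1) + np.roll(depth, -1, 1)) / 4.0
    return np.clip(1.0 - np.maximum(depth - blurred, 0.0), 0.0, 1.0)

def mssao(depth, levels=3):
    """Compute AO at several resolutions, coarse to fine, combining by
    keeping the darkest (minimum) occlusion at each pixel."""
    # Build the depth pyramid.
    pyramid = [depth]
    for _ in range(levels - 1):
        pyramid.append(downsample(pyramid[-1]))
    # Coarse-to-fine accumulation.
    ao = simple_ao(pyramid[-1])
    for level in reversed(pyramid[:-1]):
        ao = np.minimum(upsample(ao, level.shape), simple_ao(level))
    return ao
```

On a completely flat depth buffer this produces no occlusion at any level, which is the expected degenerate case.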
This talk provides additional details around the hybrid real-time rendering pipeline we developed at SEED for Project PICA PICA.
At Digital Dragons 2018, we presented how leveraging Microsoft's DirectX Raytracing enables intuitive implementations of advanced lighting effects, including soft shadows, reflections, refractions, and global illumination. We also dove into the unique challenges posed by each of those domains, discussed the tradeoffs, and evaluated where raytracing fits in the spectrum of solutions.
With the highest-quality video options, Battlefield 3 renders its Screen-Space Ambient Occlusion (SSAO) using the Horizon-Based Ambient Occlusion (HBAO) algorithm. For performance reasons, HBAO is rendered at half resolution using half-resolution input depths, and is then blurred at full resolution using a depth-aware blur. The main issue with such low-resolution SSAO rendering is that it produces objectionable flickering on thin objects (such as alpha-tested foliage) when the camera and/or the geometry are moving. After a brief recap of the original HBAO pipeline, this talk describes a novel temporal filtering algorithm that fixed the HBAO flickering problem in Battlefield 3 with a 1-2% performance hit at 1920x1200 on PC (DX10 or DX11). The talk includes algorithm and implementation details on the temporal filtering, as well as generic optimizations for SSAO blur pixel shaders. This is joint work between Louis Bavoil (NVIDIA) and Johan Andersson (DICE).
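Temporal filtering of half-resolution AO generally blends the current frame with a reprojected history buffer, rejecting history where the scene changed. The Python/NumPy sketch below shows that general idea only; it is not the actual Battlefield 3 algorithm, the parameter names are invented for illustration, and the history buffer is assumed to be already reprojected into the current frame.

```python
import numpy as np

def temporal_filter_ao(ao_current, ao_history, depth, depth_history,
                       alpha=0.1, depth_eps=0.01):
    """Blend this frame's noisy AO with accumulated history, rejecting
    history where the (reprojected) depth changed too much, which is
    where ghosting artifacts would otherwise appear."""
    # Relative depth test: history is trusted only where depth is stable.
    stable = np.abs(depth - depth_history) < depth_eps * np.abs(depth)
    blended = alpha * ao_current + (1.0 - alpha) * ao_history
    # Fall back to the current frame where history was rejected.
    return np.where(stable, blended, ao_current)
```

With a stable depth the filter converges slowly toward the new AO (reducing flicker); on a depth discontinuity it discards history immediately.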
Past, Present and Future Challenges of Global Illumination in Games - Colin Barré-Brisebois
Global illumination (GI) has been an ongoing quest in games. The perpetual tug-of-war between visual quality and performance often forces developers to take the latest and greatest from academia and tailor it to push the boundaries of what has been realized in a game product. Many elements need to align for success, including image quality, performance, scalability, interactivity, ease of use, as well as game-specific and production challenges.
First we will paint a picture of the current state of global illumination in games, addressing how the state of the union compares to the latest and greatest research. We will then explore various GI challenges that game teams face from the art, engineering, pipelines and production perspective. The games industry lacks an ideal solution, so the goal here is to raise awareness by being transparent about the real problems in the field. Finally, we will talk about the future. This will be a call to arms, with the objective of uniting game developers and researchers on the same quest to evolve global illumination in games from being mostly static, or sometimes perceptually real-time, to fully real-time.
For this year's keynote at High Performance Graphics 2018, Colin Barré-Brisebois from SEED discussed the state of the art in real-time game ray tracing. He explored some of the connections between offline and real-time ray tracing, presented some of the open problems, outlined a few potential solutions, and proposed a call to arms on topics where the ray tracing research community and the games industry should unite to solve them.
SIGGRAPH 2018 - Full Rays Ahead! From Raster to Real-Time Raytracing - Electronic Arts / DICE
In this presentation, part of the "Introduction to DirectX Raytracing" course, Colin Barré-Brisebois of SEED discusses some of the challenges the team faced when going from raster to real-time raytracing for Project PICA PICA.
A technical deep dive into the DX11 rendering in Battlefield 3, the first title to use the new Frostbite 2 Engine. Topics covered include DX11 optimization techniques, efficient deferred shading, high-quality rendering and resource streaming for creating large and highly-detailed dynamic environments on modern PCs.
CEDEC 2018 - Towards Effortless Photorealism Through Real-Time Raytracing - Electronic Arts / DICE
Real-time raytracing holds the promise of simplifying rendering pipelines, eliminating artist-intensive workflows, and ultimately delivering photorealistic images. This talk by Tomasz Stachowiak provides a glimpse of the future through the lens of SEED's PICA PICA demo: a game made for artificial intelligence agents, with procedural level assembly, and no precomputation. We dive into technical details of several advanced rendering algorithms, and discuss how Microsoft's DirectX Raytracing technology allows for their intuitive implementation. Several challenges remain -- we will take a look at some of them, discuss how real-time raytracing fits in the spectrum of solutions, and start to plot the course towards robust and artist-friendly image synthesis.
Talk by Fabien Christin from DICE at GDC 2016.
Designing a big city that players can explore by day and by night, while improving on the unique visuals of the first Mirror's Edge game, isn't an easy task.
In this talk, the tools and technology used to render Mirror's Edge: Catalyst will be discussed. From the physical sky to the reflection tech, the speakers will show how they tamed the new Frostbite 3 PBR engine to deliver realistic images with stylized visuals.
They will talk about the artistic and technical challenges they faced and how they tried to overcome them, from the simple light settings and Enlighten workflow to character shading and color grading.
Takeaway
Attendees will gain insight into the technical and artistic techniques used to create a dynamic time-of-day system with updating radiosity and reflections.
Intended Audience
This session is targeted to game artists, technical artists and graphics programmers who want to know more about Mirror's Edge: Catalyst rendering technology, lighting tools and shading tricks.
The presentation describes the physically based lighting pipeline of Killzone: Shadow Fall, a PlayStation 4 launch title. The talk covers the studio's transition to a new asset creation pipeline based on physical properties. It also describes the light rendering systems used in the new 3D engine, built from the ground up for the PlayStation 4 hardware. A novel real-time lighting model simulating physically accurate area lights is introduced, as well as a hybrid ray-traced / image-based reflection system.
We believe that physically based rendering is a viable way to optimize asset creation pipeline efficiency and quality. It also enables the rendering quality to reach a new level that is highly flexible depending on art direction requirements.
Talk by Graham Wihlidal (Frostbite Labs) at GDC 2017.
Checkerboard rendering is a relatively new technique, popularized recently by the introduction of the PlayStation 4 Pro. Many modern game engines are adding support for it right now, and in this talk, Graham will present an in-depth look at the new implementation in Frostbite, which is used in shipping titles like 'Battlefield 1' and 'Mass Effect Andromeda'. Despite being conceptually simple, checkerboard rendering requires a deep integration into the post-processing chain, in particular temporal anti-aliasing, dynamic resolution scaling, and poses various challenges to existing effects. This presentation will cover the basics of checkerboard rendering, explain the impact on a game engine that powers a wide range of titles, and provide a detailed look at how the current implementation in Frostbite works, including topics like object id, alpha unrolling, gradient adjust, and a highly efficient depth resolve.
SIGGRAPH 2016 - The Devil is in the Details: idTech 666 - Tiago Sousa
A behind-the-scenes look at the latest renderer technology powering the critically acclaimed DOOM. The lecture covers how the technology was designed to balance visual quality against performance. Numerous topics are covered, among them details of the lighting solution, techniques for decoupling shading costs by frequency, and GCN-specific approaches.
Optimizing the Graphics Pipeline with Compute, GDC 2016 - Graham Wihlidal
With further advancement in the current console cycle, new tricks are being learned to squeeze the maximum performance out of the hardware. This talk will present how the compute power of the console and PC GPUs can be used to improve the triangle throughput beyond the limits of the fixed function hardware. The discussed method shows a way to perform efficient "just-in-time" optimization of geometry, and opens the way for per-primitive filtering kernels and procedural geometry processing.
Takeaway:
Attendees will learn how to preprocess geometry on-the-fly per frame to improve rendering performance and efficiency.
Intended Audience:
This presentation is targeting seasoned graphics developers. Experience with DirectX 12 and GCN is recommended, but not required.
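As a rough illustration of the kind of per-primitive filtering the talk describes, the sketch below (Python/NumPy standing in for a compute shader) drops back-facing and degenerate triangles before they would reach the rasterizer. The screen-space assumption and the counter-clockwise winding convention are simplifications; the production implementation runs on the GPU and applies several more cull tests (small-triangle, frustum, etc.).

```python
import numpy as np

def filter_triangles(positions, indices):
    """Batch per-primitive filtering: drop triangles that are
    back-facing or degenerate (zero area). `positions` is an (n, 2)
    array of 2D screen-space vertices; `indices` is an (m, 3) array
    of triangle vertex indices."""
    tris = positions[indices]                      # (m, 3, 2)
    e1 = tris[:, 1] - tris[:, 0]
    e2 = tris[:, 2] - tris[:, 0]
    # Signed area via the 2D cross product; <= 0 means back-facing
    # or degenerate under a counter-clockwise front-face convention.
    signed_area = e1[:, 0] * e2[:, 1] - e1[:, 1] * e2[:, 0]
    return indices[signed_area > 0.0]
```

Running this over an index buffer each frame is the "just-in-time" idea: the fixed-function hardware only ever sees triangles that can contribute pixels.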
Course presentation at SIGGRAPH 2014 by Charles de Rousiers and Sébastien Lagarde of Electronic Arts about transitioning the Frostbite game engine to physically based rendering.
Make sure to check out the 118-page course notes at: http://www.frostbite.com/2014/11/moving-frostbite-to-pbr/
During the last few months, we have revisited the concept of image quality in Frostbite. The core of our approach was to get as close as possible to a cinematic look, using reference images to evaluate the accuracy of the images we produce. Physically based rendering (PBR) was the natural way to achieve this. This talk covers all the steps needed to switch a production engine to PBR, including the small details often bypassed in the literature.
The state of the art in real-time PBR techniques allowed us to achieve good overall results, but not without production issues. We present techniques for improving convolution time for image-based reflections, proper ambient occlusion handling, and coherent lighting units, which are mandatory for level editing.
Moreover, we have managed to reduce the quality gap, highlighted by our systematic reference comparison, in particular related to rough material handling, glossy screen space reflection, and area lighting.
The technical part of PBR is crucial for achieving good results, but represents only the tip of the iceberg. Frostbite has become the de facto high-end game engine within Electronic Arts and is now used by a large number of game teams. Moving all these teams from old-fashioned lighting to PBR has required a lot of education, which was done in parallel with the technical development. We have provided editing and validation tools to help art production through the transition. In addition, we have built a flexible material parametrisation framework to adapt to the various authoring tools and game teams' requirements.
Volumetric Lighting for Many Lights in Lords of the Fallen - Benjamin Glatzel
In this session I’m going to give you in-depth insight into the design and implementation of the volumetric lighting system we developed for ‘Lords of the Fallen’. The system allows countless volumetric lighting effects to be simulated in parallel while remaining a feasible solution on next-gen consoles.
This presentation was held at the Digital Dragons 2014 conference.
Videos shown during the talk are available here: http://bglatzel.movingblocks.net/publications
Visual Impression Localization of Autonomous Robots - #CASE2015 - Soma Boubou
This paper proposes a novel localization approach based on visual impressions. We define a visual impression as a representation of the HSV color distribution of a place. The representation uses a clustering feature (CF) tree to manage the color distribution, and we propose to weight each CF entry to indicate its importance. The method compares the navigating tree, which the robot creates from its observations, with the available reference trees of the environment. In addition, we propose a new similarity measure for comparing two CF trees representing the visual impressions of two places. The method is tested on two data sets collected in different environments, and the results of the experiments show its effectiveness.
A Framework for Robust Control of Uncertainty in Self-Adaptive Software Conn... - Pooyan Jamshidi
We enable reliable and dependable self-adaptations of component connectors in unreliable environments with imperfect monitoring facilities and conflicting user opinions about adaptation policies by developing a framework which comprises: (a) mechanisms for robust model evolution, (b) a method for adaptation reasoning, and (c) tool support that allows end-to-end application of the developed techniques in real-world domains.
This slide deck introduces Transformer-XL, the base paper for XLNet. It explains the paper's major contribution, and also covers the original Transformer in order to compare the differences between the Transformer and Transformer-XL. Happy NLP!
Evaluating the triggering of a landslide through the limit equilibrium approach: methods of slices (Fellenius, Bishop, Janbu, Morgenstern and Price, Spencer). Structural intervention measures for hazard mitigation: hybrid methods for designing active and passive protective structures (anchored retaining walls, slope-stabilizing piles, earth-reinforced embankments). Advanced numerical approaches for evaluating the propagation of a landslide: DEM and SPH methods. Analysis and design of structures interacting with soil: ground anchors, sheet piles, retaining walls, advanced retaining devices. The design of a slope-stabilizing system by means of GeoSlope: designing active and passive stabilizing systems for the critical case with rigid square bearing plates and a deep ground anchor.
Restricting the Flow: Information Bottlenecks for Attribution - taeseon ryu
Video #101:
a review of the paper
"Restricting the Flow: Information Bottlenecks for Attribution"
by Junho Kim of the Fundamental Team.
This paper is about explainable AI (XAI)! We hope it is helpful to anyone interested in the topic. The method uses an attribution map to directly trace the network gradients that influenced the output, producing a visual explanation. Junho Kim of the Fundamental Team walks through a detailed review from the ground up!
Thank you, as always, for your interest and support!
This document presents an example of slab analysis and design using ETABS. The example examines a simple single-story building that is regular in plan and elevation, and compares the ultimate moments calculated by CSI ETABS and SAFE with hand calculations. Moment coefficients were used to calculate the ultimate moment. It is good practice to use such hand-analysis methods to verify the output of more sophisticated methods.
Also, this document contains a simple step-by-step procedure for designing a solid slab according to Eurocode 2. The process of designing elements will not be revolutionised as a result of using Eurocode 2. Due to time and knowledge constraints, I may not be able to address every issue.
Go, Go with Graphics Optimization! (Subtitle: Batch! Let's take a look at Batching!) - ozlael ozlael
An in-depth look at draw calls and batching, essential knowledge for graphics optimization. It covers everything from the basics to practical depth: the concept of a draw call, the difference between a Batch and a SetPass Call, and ways to reduce draw calls. No background knowledge is required, and the material will be especially helpful for artists and programmers.
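To make the batching idea concrete, here is a minimal, engine-agnostic sketch in Python: renderables that share a material are grouped so the engine can submit one draw call per group instead of one per mesh. This is a conceptual illustration, not Unity's (or any engine's) actual batching API; the data shapes are assumptions.

```python
from collections import defaultdict

def build_batches(draw_items):
    """Group renderables that share a material so they can be issued
    in a single draw call, reducing per-draw CPU overhead and
    state changes. `draw_items` is a list of (material_id, mesh) pairs."""
    batches = defaultdict(list)
    for material_id, mesh in draw_items:
        batches[material_id].append(mesh)
    # One draw call per material instead of one per mesh.
    return dict(batches)
```

For a scene with many meshes but few materials, the number of draw calls collapses to the number of materials, which is exactly the win that batching buys.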
UiPath Test Automation using UiPath Test Suite series, part 3 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation introduction
UI automation sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to market, combined with traditionally slow, manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface of their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for technology and making things work, along with a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms, and is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... - DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... - UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
12. MSSAO Overview: pipeline. Render geometry to a g-buffer (1024x1024); downsample to g-buffers at 512x512 and 256x256; compute AO at 256x256 into an AO buffer (256x256); blur & upsample; compute AO at 512x512 into an AO buffer (512x512); blur & upsample; compute AO at 1024x1024 into the final AO buffer (1024x1024).
14. MSSAO Sampling: cap r_i(p) to some value r_max (a typical value is 5). At 256x256 and 512x512, the sampling pattern works well with a 3x3 Gaussian filter; at 1024x1024, a 16-point Poisson disk is used.
15. MSSAO: Computing AO. Properties: low-variance, cheap, biased. With samples q_i at angle θ_i and distance d_i from p (surface normal n):

AO(p) = (1/N) Σ_{i=1}^{N} ρ(p, d_i) (n · (q_i − p)),

modeled after the Monte Carlo approximation of

AO(p) = (1/π) ∫_Ω ρ(p, d_i) cos θ_i dω_i.
16. MSSAO Overview: the same pipeline diagram as slide 12, shown again as a recap.
18. MSSAO: Combining AO Values.

AO_final = 1 − (1 − max_i AO_i)(1 − avg_i AO_i)

max_i AO_i and avg_i AO_i are computed by "propagating" appropriate values across resolutions. This avoids underestimating AO by ensuring AO_final ≥ max_i AO_i, together with a plausible heuristic: AO_final ∝ avg_i AO_i.
29. MSSAO: Conclusions.
SSAO limitations addressed: inaccurate, local AO; over/underestimated AO; low quality (noise, blur).
Drawbacks: uses more memory; poor temporal coherence on very thin geometry (not too noticeable); errors due to the use of coarse resolutions (not too noticeable unless compared with ground truths).
Strengths: simple, fast, general, easy to integrate, captures multiple shadow frequencies.
AO is the lighting phenomenon under the direct illumination of a diffuse, uniform, spherical light source surrounding the scene, which is this yellow ring in the figure. Under such a lighting condition, less exposed areas, such as the point q here, receive less light and become darker than more exposed areas, such as the point p. Note that AO is not a real-world phenomenon, since we are assuming the light is uniform in every direction, which is rarely true in the real world. Nevertheless, it can make a scene look more realistic, which is why it is widely adopted in, for example, CG movies.
Formally, for each point p, AO is defined by this integral over a hemisphere above p. Intuitively, it can be understood as the cosine-weighted fraction of the tangent hemisphere that is occluded. It is important to note that we only look for occluders inside a hemisphere of a certain radius d_max, because otherwise AO does not work for enclosed environments such as this room: everything inside it would be totally dark. Now, it is natural to evaluate AO using Monte Carlo ray casting. But do we really need to do that all the time if the only thing we want is to darken some creases or holes?
It turns out there is a much more efficient approach to approximate AO that only uses information available in screen space, namely per-pixel depths and normals. As an example, for this point p here, we can generate a few samples inside a hemisphere above it and project the samples back to the eye. If a sample's depth is larger than the depth value stored in the depth buffer, the sample is considered an occluder that blocks light from p. This is efficient because sampling is much faster than ray casting. But this approach is also inaccurate: for example, as you can see, some samples can be identified as occluders yet do not block light from reaching p in that direction.
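The depth-comparison test just described can be sketched as follows. This is a minimal illustrative sketch, not the talk's code: the tiny linear depth buffer and the pre-projected sample list are made up for the example.

```python
def ssao_estimate(depth, samples):
    """Fraction of hemisphere samples that fall behind the depth buffer."""
    occluded = 0
    for x, y, sample_depth in samples:
        # A sample counts as an occluder when it is farther from the eye
        # than the surface already recorded at that pixel.
        if sample_depth > depth[y][x]:
            occluded += 1
    return occluded / len(samples)

# Tiny 2x2 linear depth buffer and three samples already projected
# to screen space as (x, y, depth) triples.
depth = [[1.0, 1.0],
         [1.0, 0.2]]
samples = [(0, 0, 0.5), (1, 1, 0.5), (1, 0, 2.0)]
ao = ssao_estimate(depth, samples)
```

Note that, exactly as the talk points out, a sample flagged this way may still not actually block light from p, which is where the inaccuracy comes from.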
It turns out SSAO has other disadvantages as well. On the other hand, it has a number of important benefits. That's why it's useful in applications that care less about accuracy but more about performance, such as games. We want to overcome some of these limitations, particularly the local AO, noise, and blur problems.
Our algorithm is based on this intuition: the hemisphere above p can be partitioned into several sub-hemispheres, smaller ones contained in bigger ones. For each hemisphere, we compute the AO caused by occluders inside that hemisphere. The final AO value is naturally the maximum of all the partial AO values computed for all the hemispheres.
Another observation is that occlusion caused by distant occluders is often low-frequency and can be computed at coarser resolutions to save computation time. So we associate each hemisphere with a resolution: the bigger the hemisphere, the coarser the resolution. Now the problem is that AO computed at coarser resolutions may miss occlusion caused by nearby occluders, so the final AO is no longer the maximum value, but larger than it. We use a heuristic to compensate for this: we modulate the maximum value by the average AO value across all resolutions. So the final AO is a function of these two values, and we must design f so that the final AO value is always larger than or equal to the maximum value, and proportional to the average value.
Our algorithm works like this: first we render the scene at the finest resolution to a g-buffer which contains per-pixel eye-space coordinates and normals, then downsample it multiple times to get low-resolution g-buffers. Then we start from the coarsest g-buffer, compute AO for all pixels at that resolution, and store the results in an AO buffer. The AO buffer is then blurred and upsampled, and the result is fed into the next rendering pass, where we compute AO at the next finer resolution and combine it with the upsampled results, outputting another AO buffer, which is again blurred and upsampled and used as input to the next rendering pass, and so on, until we reach the finest resolution. At this point, we output the highest-resolution AO buffer as the final result.
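The pass ordering just described can be sketched as a schedule. This is an illustrative sketch only; the function name, default sizes, and level count are assumptions, not from the talk.

```python
def mssao_schedule(finest=1024, levels=3):
    """List the MSSAO passes in order: downsamples first, then AO passes
    from coarsest to finest, with a blur & upsample after every AO pass
    except the one at the finest resolution."""
    sizes = [finest >> i for i in range(levels)]        # e.g. [1024, 512, 256]
    passes = [("downsample", s) for s in sizes[1:]]     # build g-buffer pyramid
    for s in reversed(sizes):                           # coarsest -> finest
        passes.append(("compute_ao", s))
        if s != finest:
            passes.append(("blur_upsample", s))         # feed the next level
    return passes

schedule = mssao_schedule(1024, 3)
```

With three levels this yields two downsample passes, then compute/blur-upsample pairs at 256 and 512, and a final AO pass at 1024 whose buffer is the result.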
Here is an example of combining AO values from different resolutions. In this example we use 5 levels of resolution. Each resolution captures different AO frequencies, and they are all finally combined to produce the image in the lower right.
Now let's get to the details. First we're gonna look at the downsampling pass.
For each low-resolution pixel p, we look for the four nearest pixels to p at the finer resolution, p_1 to p_4, and sort them according to their linear z values. Then we take the eye-space coordinates of the two "middle" pixels and average them to get the eye-space coordinates of p. This is similar to taking the median "value" of the four pixels. Now, taking the median does not always make sense if the four pixels are very different in depth. In fact, it creates artifacts where geometry beyond the AO radius of influence can occlude a point, since by taking the median we are in effect changing the relative distance between surfaces. So if the maximum depth difference among the four pixels is large enough, we choose to keep the eye-space coordinates of just one of the four pixels. Now, the reason we do not do this all the time is that the median method does help reduce self-occlusion artifacts and also provides better temporal coherence. You may have noticed that the normals are treated in the same way as the eye-space coordinates. The reason we do not normalize the sum but instead divide it by 2, which may not seem to make sense, is that it is not necessary to normalize the normals. In fact, we have found that dividing by 2 gives better-looking results and fewer over-occlusion artifacts.
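A minimal sketch of that per-pixel downsampling rule for the eye-space positions. The concrete depth threshold and the choice of which single pixel to keep in the fallback case are assumptions; the talk only says "large enough" and "just one of the four pixels".

```python
def downsample_pixel(four_pixels, depth_threshold):
    """Pick the eye-space position of one low-res pixel from its 4 parents.

    four_pixels: the four finer-resolution (x, y, z) eye-space positions.
    """
    px = sorted(four_pixels, key=lambda q: q[2])       # sort by linear z
    if px[3][2] - px[0][2] > depth_threshold:
        # Depths differ too much: keep a single pixel so that geometry
        # beyond the AO radius of influence cannot create false occlusion.
        return px[0]
    # Otherwise average the two "middle" pixels, i.e. a median of the four.
    a, b = px[1], px[2]
    return tuple((a[i] + b[i]) / 2 for i in range(3))
```

The normals would go through the same median-style averaging, divided by 2 without renormalization, as described above.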
Now let's move on to the AO computation part.
First we will look at how samples are taken in screen space. For each pixel p at some particular resolution Res_i, we project the AO radius of influence to screen space to get a "ring" of radius r_i(p) around p. Now remember we do not want to sample this whole ring, but usually a smaller one corresponding to some inner hemisphere, so we cap the screen-space sampling radius to some value, typically 5, and sample an 11x11 region in screen space. We have found that for coarser resolutions, using the sampling scheme on the left gives the best results without degrading performance. It also works well with the 3x3 Gaussian blur which is applied later. For pixels at the finest resolution, we don't want to use the sampling scheme on the left because there is no blur pass at the final resolution, so we use a 16-point Poisson disk pattern instead. Notice we do not jitter this pattern but use the same pattern for every pixel. It produces some aliasing, but it is not too noticeable given the small kernel size.
Now that we have collected, let's say, N samples, we put them through the first formula to compute the AO caused by these N samples. This formula is modeled after the Monte Carlo approximation of the original formula for AO. You can see the correspondence highlighted by the colors. The nice thing about our formula is that it gives low-variance results, in the sense that nearby pixels have quite similar AO values. One of the reasons is that our samples are distributed in two dimensions instead of three: we do not distribute samples in the direction dimension like some other methods. We pick samples directly from screen space, then use texture lookups to fetch their 3D coordinates and normals, and plug the values right into the formula. It is also cheap to compute, requiring only a dot product operation per sample. The falloff function rho is just a simple quadratic one in terms of the distance d_i. But sampling and computing AO this way is certainly biased with regard to true AO, because neighboring pixels in screen space often do not correspond to uniform directions in object space. The result is that the shapes and intensities of the shadows look incorrect compared to ground truths. This is a common problem in SSAO, not just in our method.
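That per-pixel sum can be sketched as below. Two caveats: normalizing the direction for the cosine term and the exact quadratic shape of the falloff ρ are assumptions on my part; the talk only says ρ is quadratic in d_i.

```python
def ao_at_pixel(p, n, samples, d_max):
    """AO(p) ~ (1/N) * sum over i of rho(p, d_i) * (n . dir_i).

    p, n: eye-space position and unit normal of the shaded pixel.
    samples: eye-space positions q_i fetched from the g-buffer.
    """
    total = 0.0
    for q in samples:
        d = [q[i] - p[i] for i in range(3)]
        dist = sum(c * c for c in d) ** 0.5            # d_i
        if dist == 0.0 or dist > d_max:
            continue                                   # outside the hemisphere of influence
        # Cosine term, clamped so samples below the tangent plane contribute 0.
        cos_term = max(0.0, sum(n[i] * d[i] for i in range(3)) / dist)
        falloff = max(0.0, 1.0 - (dist / d_max) ** 2)  # assumed quadratic rho
        total += falloff * cos_term
    return total / len(samples)
```

Note there is no per-sample ray cast anywhere: each q_i comes from a single texture lookup, which is what makes the formula cheap.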
Next we turn to the blurring and upsampling pass.
Let me talk about the upsampling method first. We use a typical bilateral upsampling here. For each high-resolution pixel p, we weight and blend the AO values from its 4 nearest low-resolution pixels, p_1 to p_4. The weight for each pixel is a product of 3 weights: w_z is the weight due to depth differences, w_n is the weight due to normal differences, and w_b is a bilinear weight due to differences in screen-space coordinates. Using bilateral upsampling prevents occlusion from leaking across large depth and normal differences and at the same time smoothly blends the AO values from the four low-res pixels, which prevents blocky artifacts in the results. The blur which is done before we upsample is very similar: it is a bilateral filter which uses a very small 3x3 kernel and a Gaussian weight instead of a bilinear one.
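A sketch of that weight product. The Gaussian shapes of w_z and w_n (and the sigma values) are my assumptions; the talk only names the three factors.

```python
import math

def bilateral_upsample(p, low_px, sigma_z=1.0, sigma_n=1.0):
    """Blend AO from the 4 nearest low-res pixels using w_z * w_n * w_b.

    p: {"z": linear depth, "n": unit normal} for the high-res pixel.
    low_px: four dicts with "z", "n", "ao", and a precomputed bilinear
    weight "w_b".
    """
    num = den = 0.0
    for q in low_px:
        w_z = math.exp(-((p["z"] - q["z"]) / sigma_z) ** 2)   # depth weight
        dot = sum(a * b for a, b in zip(p["n"], q["n"]))
        w_n = math.exp(-((1.0 - dot) / sigma_n) ** 2)         # normal weight
        w = w_z * w_n * q["w_b"]                              # full weight
        num += w * q["ao"]
        den += w
    return num / den if den > 0.0 else 0.0
```

Large depth or normal differences drive w_z or w_n toward zero, which is exactly what stops occlusion from leaking across edges.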
Now we talk about how the AO values are combined. At the finest resolution, we use this formula to compute the final AO value for each pixel. The maximum and average AO values are obtained by propagating certain values up from coarser resolutions; I'm not going to get into the details here, which are pretty hairy. Notice that the formula we use here satisfies both conditions we set in the beginning.
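In code, the combination rule from the slide reads as follows (a direct transcription of the formula, computing max and avg from a plain list rather than by the cross-resolution propagation the talk uses):

```python
def combine_ao(ao_levels):
    """AO_final = 1 - (1 - max_i AO_i) * (1 - avg_i AO_i).

    Since (1 - avg) <= 1, AO_final >= max_i AO_i, and AO_final grows
    with the average, so both conditions stated earlier hold.
    """
    ao_max = max(ao_levels)
    ao_avg = sum(ao_levels) / len(ao_levels)
    return 1.0 - (1.0 - ao_max) * (1.0 - ao_avg)
```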
Finally, we employ a temporal filter to reduce the shimmering or flickering effects due to the use of multiple resolutions. For each pixel in screen space, we project it back to object space, undo whatever transformation happened since the previous frame, and project it into last frame's screen space. We then fetch an AO value from the last frame at those screen-space coordinates and assign a weight to it: if the pixel was occluded or outside the screen in the last frame, the weight is 0; otherwise it is 0.5. Then we linearly blend the previous-frame AO value with the AO computed in the previous slide to obtain the AO value for the current frame.
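A minimal sketch of that blend, with the reprojection itself omitted; the 0 and 0.5 weights are the ones stated above.

```python
def temporal_filter(ao_current, ao_history, history_valid):
    """Blend this frame's AO with last frame's reprojected AO.

    history_valid is False when the reprojected pixel was occluded or
    off-screen last frame (history weight 0); otherwise the history
    weight is 0.5.
    """
    w = 0.5 if history_valid else 0.0
    return (1.0 - w) * ao_current + w * ao_history
```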
So we can look at some results. On the left is the method by Blizzard Entertainment, which is noisy; on the right is ours. Note that the computation time is roughly the same; more on that later.
Here we compare our result with Horizon-Based AO by NVIDIA. Their result is quite blurry. Our image looks sharper since we do not blur at the final resolution.
Here is another comparison, with a method called Volumetric AO, which is more recent than the other two. Again, our result looks nicer.
Here we compare our result with a ray-traced result using Blender. There is over-occlusion here and there, but since accuracy is not our main concern, it is not much of a problem.
Here is a closer look to show that our result is free of both noise and blur and preserves high-frequency details better than other methods.
Here is another advantage of our method. In existing SSAO methods, when the AO radius of influence is large, small details are lost. Our method picks up multiple AO scales at different frequencies and retains them in the final image.
Here we compare the computation time among methods. The parameters for each method are the same as those used to produce the comparison images shown earlier. These three scenes are the ones shown in previous slides as well. As you can see, our method is the fastest in all three scenes, due to the fact that our sampling kernel at each resolution is small and we do not need to filter the final result, which is a relatively costly operation.
In conclusion, I believe we have overcome some quality limitations of SSAO while retaining all the positives. Our method has a few drawbacks as well. First, it uses more memory than the others. Secondly, for certain kinds of small and thin geometry such as leaves or chair legs, some flickering artifacts can be seen when the camera moves, but since the objects are small anyway, this is not too noticeable. Finally, the use of multiple resolutions introduces some errors in certain cases, but again, utmost accuracy is not our aim here, so that's not really a problem.