Part of series on integrating space assets in airport management and operation.
Made as part of Ascend XYZ, Ammo https://artes-apps.esa.int/projects/ammo
Pleiades - satellite imagery - very high resolution (Spot Image)
With the Pleiades constellation, comprising the Pleiades-1 and Pleiades-2 satellites, Spot Image is set to bring you satellite imagery at a resolution of 50 cm and with a footprint of 20 km x 20 km.
More information on http://www.spotimage.com/pleiades
Pillow, the Python image-processing library, provides a histogram() method in the Image class to get a histogram of the colors/bands present in an image.
The histogram() method returns a list of pixel counts for each band value, e.g., Red, Green, Blue for an image of mode "RGB".
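A minimal sketch of the call described above; the image is created in memory rather than loaded from disk so the snippet runs standalone (requires Pillow, `pip install Pillow`):

```python
from PIL import Image

# A small RGB test image filled with pure red
img = Image.new("RGB", (4, 4), color=(255, 0, 0))

# histogram() returns one flat list: 256 counts per band, concatenated.
# For mode "RGB" that is 768 values: [R0..R255, G0..G255, B0..B255].
hist = img.histogram()
print(len(hist))    # 768
print(hist[255])    # 16 (all 16 pixels have R == 255)
print(hist[256])    # 16 (all 16 pixels have G == 0)
```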
Outlier detection of point clouds generating from low-cost UAVs for bridge i... (TRUSS ITN)
Using an Unmanned Aerial Vehicle (UAV) for documentation and inspection of civil infrastructure has become increasingly popular. One area of interest is bridge inspection, as it holds the potential of being safer, more economical, and less disruptive with respect to traffic flow. With 3D reconstruction methods, structural deficiencies and 3D models can be obtained from a 3D point cloud generated from UAV imagery. However, shadows and water reflectivity may affect the quality of the point cloud generated from the images, which makes data processing difficult. This paper presents a detailed workflow for removing outlier data points using a statistical filter and a geometry-based filter. The experimental results showed that the statistical filter gives the best performance.
Build 2017 - B8037 - Explore the next generation of innovative UI in the Visu... (Windows Developer)
Experience a new wave of UI design with the animations, effects, and transitions that are the platform building blocks in the Visual Layer. See how physics, depth, lighting, and unique materials allow you to create immersive and personalized experiences, optimized for the range of Windows devices.
Atmospheric Correction of Remotely Sensed Images in Spatial and Transform Domain (CSCJournals)
Remotely sensed data is an effective source of information for monitoring changes in land use and land cover. However, remotely sensed images are often degraded by atmospheric effects or physical limitations. Atmospheric correction minimizes or removes the atmospheric influences added to the pure target signal, allowing more accurate information to be extracted. It is often considered a critical pre-processing step for recovering full spectral information from every pixel, especially with hyperspectral and multispectral data. In this paper, multispectral atmospheric correction approaches that require no ancillary data are presented in the spatial domain and the transform domain. We propose atmospheric correction using a linear regression model based on the wavelet transform and the Fourier transform. The methods are tested on a Landsat image consisting of 7 multispectral bands, and their performance is evaluated using visual and statistical measures. The application of the atmospheric correction methods to vegetation analysis using the Normalized Difference Vegetation Index is also presented.
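The NDVI mentioned above is a per-pixel ratio of the near-infrared and red bands; a minimal sketch (the reflectance values below are illustrative):

```python
def ndvi(nir, red, eps=1e-9):
    # NDVI = (NIR - Red) / (NIR + Red); eps guards against division by zero
    return (nir - red) / (nir + red + eps)

# Dense vegetation reflects strongly in NIR and absorbs red light.
print(round(ndvi(0.50, 0.08), 3))  # 0.724 -> healthy vegetation
print(round(ndvi(0.20, 0.18), 3))  # 0.053 -> bare soil / sparse cover
```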
Detecting solar farms with deep learning (Jason Brown)
Talk delivered at Free and Open Source Software for Geo North America 2019 (FOSS4GNA)
Large scale solar arrays or farms have been installed globally faster than can be reliably tracked by interested stakeholders. We have built a deep learning model with Sentinel 2 satellite imagery that allows us to create accurate, timely global maps of solar farms.
Tutorial presentation given as part of an ESRC UK Data Service user workshop providing an introduction to mapping 2011 UK census data as choropleth maps using online web applications and a discussion of the merits of choropleth maps compared with cartograms and dasymetric mapping.
Explanation of very simple methods for atmospheric corrections and an example adapted from a paper of the Dept. of Thermodynamics, University of Valencia, Spain.
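One of the simplest such corrections is dark-object subtraction: assume the darkest pixel in a band should be near zero, and subtract its value as an estimate of the atmospheric path radiance. (This is my illustration of a typical simple method; the cited paper may use a different formulation.)

```python
def dark_object_subtraction(band):
    # Estimate path radiance as the darkest pixel value in the band,
    # then subtract it everywhere, clamping at zero.
    dark = min(band)
    return [max(v - dark, 0) for v in band]

print(dark_object_subtraction([12, 57, 80, 12, 130]))  # [0, 45, 68, 0, 118]
```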
The Open Backscatter Toolchain (OpenBST) project: towards an open-source and ... (Giuseppe Masetti)
Authors: G. Masetti, J.-M. Augustin, M. Malik, C. Poncelet, X. Lurton, L. Mayer, G. Rice, M. Smith
The presentation was given at the U.S. Hydro 2019 Conference.
Abstract:
Most ocean mapping surveys collect seafloor reflectivity (backscatter) along with bathymetry. While the consistency of bathymetry processed by commonly adopted algorithms is well established, surprisingly large variability is observed between the backscatter mosaics generated by different software packages when processing the same dataset. Such a situation severely limits the use of acoustic backscatter for quantitative analysis (e.g., monitoring seafloor change over time, or remote characterization of seafloor characteristics) and other commonly attempted tasks (e.g., merging mosaics from different origins).
Acoustic backscatter processing involves a complex sequence of steps, but inasmuch as commercial software packages mainly provide end-results, comparisons between those results offer little insight into where in the workflow the differences are generated. In addition, preliminary results of a software-inter-comparison working group have shown that each processing algorithm tends to adopt a distinct, unique workflow; this causes large disagreements even in the initial per-beam reflectivity values resulting from differences in basic operations such as snippet averaging and evaluation of flagged beams.
Far from ideal, this situation requires a clear shift from the past closed-source approach that has caused it. As such, the Open Backscatter Toolchain (OpenBST) project aims to provide the community with an open-source and metadata-rich modular implementation of a toolchain dedicated to acoustic backscatter processing. The long-term goal is not to create processing tools that would compete with available commercial solutions, but rather a set of open-source, community-vetted, reference algorithms usable by both developers and users for benchmarking their processing algorithms.
As a proof-of-concept, we present a prototype implementation with the key elements of the OpenBST approach:
• The data conversion from a native acquisition format (i.e., Kongsberg EM Series) to NetCDF-based data structures (components of the eXtensible Sounder Format) better suited to data exploration, processing and metadata coupling.
• A processing pipeline constituted by a set of interlocking, task-oriented tools simplifying their substitution with alternative approaches.
• The creation of final products (i.e., angular response curves and backscatter mosaics) capturing relevant acquisition and processing metadata.
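The modular pipeline described in the bullets above can be sketched as a chain of small, swappable steps; the step names and the toy corrections below are illustrative, not OpenBST's actual API:

```python
def radiometric_correction(samples):
    # Toy step: remove a fixed transmit-level offset from raw backscatter (dB)
    return [s - 10.0 for s in samples]

def angle_compensation(samples):
    # Toy step: flat angular-response normalization
    return [s * 1.05 for s in samples]

def build_pipeline(*steps):
    # Each step maps samples -> samples, so any step can be swapped
    # for an alternative implementation without touching the others.
    def run(samples):
        for step in steps:
            samples = step(samples)
        return samples
    return run

pipeline = build_pipeline(radiometric_correction, angle_compensation)
print([round(s, 2) for s in pipeline([100.0, 120.0])])  # [94.5, 115.5]
```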
Review of Use of Nonlocal Spectral – Spatial Structured Sparse Representation... (IJERA Editor)
Noise reduction is an active research area in image processing due to its importance in improving image quality for object detection and classification. In this paper, we develop a sparse-representation-based noise reduction method for hyperspectral imagery (HSI), which relies on the assumption that the non-noise component of an observed signal can be sparsely decomposed over a redundant dictionary, whereas the noise component does not have this property. The main contribution of the paper is the introduction of nonlocal similarity and the spectral-spatial structure of HSI into sparse representation. Non-locality refers to the self-similarity of an image, by which a whole image is partitioned into groups containing similar patches. The similar patches in each group are sparsely represented with a shared set of dictionary atoms, making true signal and noise more easily separated. Sparse representation with spectral-spatial structure can exploit the joint spectral and spatial correlations of HSI by using 3-D blocks rather than 2-D patches for sparse coding, which also makes true signal and noise more distinguishable. Moreover, HSI contains both signal-independent and signal-dependent noise, so a mixed Poisson and Gaussian noise model is used. In order to make sparse representation insensitive to the varying noise distribution across blocks, a variance-fitting transformation (VFT) is used to make their variances comparable. The advantages of the proposed method are validated on both synthetic and real hyperspectral remote sensing data sets.
Coastal erosion management using image processing and Node Oriented Programming (AbdAllah Aly)
Presentation of my thesis, titled "Coastal erosion management using image processing and Node Oriented Programming", for my Master's degree in Computer and Automation Engineering at Siena University, Italy.
I will describe a sparse image reconstruction method for Poisson-distributed polychromatic X-ray computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. To obtain a parsimonious mean measurement-model parameterization, we first rewrite the measurement equation by changing the integral variable from photon energy to mass attenuation, which allows us to combine the variations brought by the unknown incident spectrum and mass attenuation into a single unknown mass-attenuation spectrum function; the resulting measurement equation has the Laplace integral form. The mass-attenuation spectrum is then expanded into basis functions using a B-spline basis of order one. We develop a block coordinate-descent algorithm for minimization of a penalized Poisson negative log-likelihood (NLL) cost function, where penalty terms ensure nonnegativity of the spline coefficients and nonnegativity and sparsity of the density map image. This algorithm alternates between a Nesterov’s proximal-gradient (NPG) step for estimating the density map image and a limited-memory Broyden–Fletcher–Goldfarb–Shanno with box constraints (L- BFGS-B) step for estimating the incident-spectrum parameters. To accelerate convergence of the density-map NPG steps, we apply a step-size selection scheme that accounts for varying local Lipschitz constants of the objective function. I will discuss the biconvexity of the penalized NLL function and outline preliminary results on convergence of PG-BFGS schemes. Finally, I will present real X-ray CT reconstruction examples that demonstrate the performance of the proposed scheme.
Real-time path tracing using a hybrid deferred approach, GTC EUR 2017 (Thomas Willberger)
We present our real-time global illumination approach used in our architecture visualization software Enscape (www.enscape3d.com). It is a hybrid approach that uses screen-space information where possible (via raymarching) and global BVH ray traversal where the screen-space information is not sufficient.
GTC Europe 2017, Thomas Schander and Clemens Musterle
https://telecombcn-dl.github.io/2017-dlcv/
Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of large-scale annotated datasets and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which were previously addressed with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks and Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles and applications of deep learning to computer vision problems, such as image classification, object detection or image captioning.
Robust Digital Watermarking Scheme of Anaglyphic 3D for RGB Color Images (CSCJournals)
In this paper, a digital watermarking technique using spread spectrum (SS) technology and adaptive DM (dither modulation) with an improved Watson perception model is applied to the copyright protection of anaglyphic 3D images. The improved Watson perception model solves the problem that the slack does not change linearly with the amplitude scale. Experimental results show that the watermarking schemes resist Gaussian noise, salt-and-pepper noise, JPEG compression, constant luminance change, and valumetric scaling; the scheme employing the improved Watson perception model outperforms the one using the unimproved model. Comparative experiments with the works [4] and [19] were also carried out. On the other hand, the approach is not sensitive to JPEG compression, while the QIM-based alternative is not sensitive to constant luminance change and valumetric scaling.
Sander Dieleman - Generating music in the raw audio domain - Creative AI meetup (Luba Elliott)
This talk by Sander Dieleman from DeepMind on "Generating music in the raw audio domain" was presented on 10th September 2018 at IDEA London as part of the Creative AI meetup.
Noise Removal in SAR Images using Orthonormal Ridgelet Transform (IJERA Editor)
Development in the field of image processing for reducing speckle noise in digital and satellite images is a challenging task for image-processing applications. Many algorithms have previously been proposed to de-speckle noise in digital images. In this article we present experimental results on de-speckling of Synthetic Aperture Radar (SAR) images. SAR images have wide applications in remote sensing and in mapping the surfaces of planets. SAR can also be implemented as "inverse SAR" by observing a moving target over a substantial time with a stationary antenna. Denoising of SAR images is therefore an essential task for viewing the information. Here we introduce a transformation technique called the "ridgelet", an extension of the wavelet. Ridgelet analysis can be done the same way wavelet analysis is done in the Radon domain, as it translates singularities along lines into point singularities at different frequencies. Simulation results show that the proposed work is more reliable than other de-speckling processes, and the quality of the de-speckled image is measured in terms of Peak Signal-to-Noise Ratio and Mean Square Error.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Enchancing adoption of Open Source Libraries. A case study on Albumentations.AIVladimir Iglovikov, Ph.D.
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster, ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides introduction to UiPath Communication Mining, importance and platform overview. You will acquire a good understand of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
UiPath Test Automation using UiPath Test Suite series, part 6
Multiple volumetric datasets
1. Ray Casting of Multiple Volumetric
Datasets with Polyhedral Boundaries on
Manycore GPUs
SIGGRAPH ASIA 2009
2010/03/31
ked
2. Authors
Bernhard Kainz
PhD student
Markus Grabner
Assistant professor
Alexander Bornik
Senior researcher
Stefan Hauswiesner
Research assistant
Judith Muehl
Senior researcher
Dieter Schmalstieg
Full professor
4. Properties
Real-time frame-rates
Many volumes
Arbitrary polyhedral geometry
CUDA implementation
A 20k polygons dragon with a 2563
brain volume
and nine 643
smoke clouds
20. Triangle rasterization
y=0, n1(y)=1
y=1, n1(y)=1
y=2, n1(y)=1
y=3, n1(y)=2
Use r0
y=0, r(y)=0x0001
y=1, r(y)=0x0010
y=2, r(y)=0x0100
y=3, r(y)=0x3000
For a >=0 For a < 0
21. Triangle rasterization
y=0, n1(y)=1
y=1, n1(y)=1
y=2, n1(y)=1
y=3, n1(y)=2
Use r0
y=0, r(y)=0x0001
y=1, r(y)=0x0010
y=2, r(y)=0x0100
y=3, r(y)=0x3000
For a >=0 For a < 0
22. Depth sorting
Use 63 entries to keep depth order
A entry contains
Z-value
Triangle ID
23. Depth sorting + evaluation
Use 63 entries to keep depth order
A entry contains
Z-value
Triangle ID
81 depth data Their result Ground truth Errors > 5%
我們重複一下他們的重點,他們做出了一個 real-time 的成像系統,這個系統可以一次繪製多個 volume data ,也可以繪製多邊形資料,下面這個圖就是他們提供的一個範例,這隻龍是多邊形資料,這個腦袋是一個 volume data ,這些煙則有九個 volume data 。然後這個系統的核心是用 CUDA 來實做的,以前 volume rendering 大多是用 shader 來寫,但是 volume data 多的時候, shader code 的 unrolling 會變得太大,而 CUDA 則不會有這個問題。
這是我的 outline ,首先我會先解釋 volume rendering 跟 ray casting ,然後我會說明他們根據距離遠近,來繪製多邊形結構的方法,最後我會介紹他們的結果還有結論。
首先是 volume rendering 。
正常情況下當我們看向一團 volume data 的時候,得到的顏色是光線在 volume 中不斷反射之後的累積。各位可以想想前一陣子沙塵暴最嚴重的時候看出去的那一團霧茫茫的景象。
那如果 volume data 的粒子不互相反射,而只有發光和吸收的效應,就像下圖所表示的, volume data 的成像就是粒子的顏色沿著視線累加的結果,在這種情況下 volume rendering 就會像 x 光照射的成像結果,就像 video 展示的,他們的 volume rendering 就是用下圖的方法繪製,這個方式就叫做 ray casting 。
對於發光和吸收的效應,我們做一個簡單的公式推導,一個光強度為 c 的粒子,他到達眼睛的光強度是這段距離內吸收係數的指數積分。
所以要計算沿著視線光強度的累加,就是把所有距離的光強度積分起來。
他的離散式可以寫成右邊這個式子, c 是粒子的光強度, a 是粒子的不透明度,所以 1 減 aj 的乘積就是這段距離的透光率, ci 成以透光率就是每個例子貢獻的光強度,這些光強度的總合就是最後的結果。
這條式子又可以化簡成根據粒子距離由近到遠的疊代式, c’ 跟 a’ 是顏色和不透明度的累加結果, c 跟 a 是新加入粒子的資料,這條公式告訴我們,光強度的累積,就是由近到遠將之前強度的累加結果加上新粒子的強度乘以先前的透光度;而不透明度的累積,則是先前的不透明度加上先前的透光度乘以新的不透明度。
我們再用圖示做一個 ray casting 的總結,從 view plane 的每個 pixel 出發,沿著視線的方向進行取樣,然後由近到遠依照上述的公式做累加就可以了,右下角的圖是兩個 volume data 繪製的結果。