COSC 426 Lecture 5 on Mathematical Principles Behind AR Registration. Given by Adrian Clark from the HIT Lab NZ at the University of Canterbury, August 8, 2012.
Build Your Own 3D Scanner: Surface Reconstruction
http://mesh.brown.edu/byo3d/
SIGGRAPH 2009 Courses
Douglas Lanman and Gabriel Taubin
This course provides a beginner with the necessary mathematics, software, and practical details to leverage projector-camera systems in their own 3D scanning projects. An example-driven approach is used throughout; each new concept is illustrated using a practical scanner implemented with off-the-shelf parts. The course concludes by detailing how these new approaches are used in rapid prototyping, entertainment, cultural heritage, and web-based applications.
Build Your Own 3D Scanner: The Mathematics of 3D Triangulation
http://mesh.brown.edu/byo3d/
SIGGRAPH 2009 Courses
Douglas Lanman and Gabriel Taubin
Build Your Own 3D Scanner: 3D Scanning with Swept-Planes
http://mesh.brown.edu/byo3d/
SIGGRAPH 2009 Courses
Douglas Lanman and Gabriel Taubin
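Swept-plane scanning reduces each depth measurement to intersecting a back-projected camera ray with the known plane of light. A minimal sketch of that triangulation step (the camera pose, ray, and plane values here are illustrative assumptions, not the course's own code):

```python
import numpy as np

def intersect_ray_plane(origin, direction, plane):
    """Intersect the ray origin + t*direction (t >= 0) with the plane
    given as (n, d), where points X on the plane satisfy n.X + d = 0."""
    n, d = plane
    denom = np.dot(n, direction)
    if abs(denom) < 1e-12:
        return None                       # ray parallel to the plane
    t = -(np.dot(n, origin) + d) / denom
    return origin + t * direction if t >= 0 else None

# Camera at the origin; a pixel back-projects to a unit-length viewing ray.
ray_o = np.zeros(3)
ray_d = np.array([0.1, 0.0, 1.0]); ray_d /= np.linalg.norm(ray_d)
# Illustrative laser plane z = 2: normal (0, 0, 1), offset d = -2.
plane = (np.array([0.0, 0.0, 1.0]), -2.0)
point = intersect_ray_plane(ray_o, ray_d, plane)   # the reconstructed 3D point
```

Repeating this intersection for every illuminated pixel, as the plane sweeps across the scene, yields the full point cloud.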
Moving Vehicle Detection from a Video, CCTV Footage, etc. by Image Processing. The algorithm and the steps to be followed for detection are described in the presentation.
Algorithmic Techniques for Parametric Model Recovery (CurvSurf)
A complete description of algorithmic techniques for automatic feature extraction from point clouds. Orthogonal distance fitting, a form of maximum likelihood estimation, plays the main role. Differential geometry determines the type of object surface.
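As a concrete instance of orthogonal distance fitting, a plane can be fit to a point cloud in closed form: the sum of squared orthogonal distances is minimized by the plane through the centroid whose normal is the direction of least variance. A generic SVD-based sketch (not CurvSurf's implementation):

```python
import numpy as np

def fit_plane_odf(points):
    """Orthogonal-distance plane fit: the plane passes through the centroid,
    and its normal is the right singular vector with the smallest singular
    value (the direction of least variance of the centered points)."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    n = vt[-1]                 # singular values are sorted descending
    return n, c                # plane: n . (x - c) = 0

# Four coplanar points in the z = 0 plane.
pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0.0]])
n, c = fit_plane_odf(pts)
```

With exact inputs the orthogonal residuals vanish; with noisy scans the same routine gives the maximum-likelihood plane under isotropic Gaussian noise, which is the sense in which the abstract calls orthogonal distance fitting a form of maximum likelihood estimation.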
Build Your Own 3D Scanner: Conclusion
http://mesh.brown.edu/byo3d/
SIGGRAPH 2009 Courses
Douglas Lanman and Gabriel Taubin
Build Your Own 3D Scanner: 3D Scanning with Structured Lighting
http://mesh.brown.edu/byo3d/
SIGGRAPH 2009 Courses
Douglas Lanman and Gabriel Taubin
New geometric interpretation and analytic solution for quadrilateral reconstruction (Joo-Haeng Lee)
Accepted as poster presentation for ICPR 2014, Stockholm, Sweden, August 24-28, 2014.
[Revised Version]
Title: New geometric interpretation and analytic solution for quadrilateral reconstruction
Author: Joo-Haeng Lee
Affiliation: Human-Robot Interaction Research Team, ETRI, KOREA
Abstract:
A new geometric framework, called the generalized coupled line camera (GCLC), is proposed to derive an analytic solution that reconstructs an unknown scene quadrilateral and the relevant projective structure from a single or multiple image quadrilaterals. We extend the previous approach, developed for rectangles, to handle arbitrary scene quadrilaterals. First, we generalize a single line camera by removing the centering constraint that the principal axis should bisect a scene line. Then, we couple a pair of generalized line cameras to model a frustum with a quadrilateral base. Finally, we show that the scene quadrilateral and the center of projection can be analytically reconstructed from a single view when prior knowledge of the quadrilateral is available. A completely unknown quadrilateral can be reconstructed from four views through non-linear optimization. We also describe an improved method to handle an off-centered case by geometrically inferring a centered proxy quadrilateral, which accelerates the reconstruction process without relying on homography. The proposed method is easy to implement since each step is expressed as a simple analytic equation. We present experimental results on real and synthetic examples.
[Submitted Version]
Title: Generalized Coupled Line Cameras and Application in Quadrilateral Reconstruction
Abstract:
Coupled line camera (CLC) provides a geometric framework to derive an analytic solution to reconstruct an unknown scene rectangle and the relevant projective structure from a single image quadrilateral. We extend this approach as generalized coupled line camera (GCLC) to handle a scene quadrilateral. First, we generalize a single line camera by removing the centering constraint that the principal axis should bisect a scene line. Then, we couple a pair of generalized line cameras to model a frustum with a quadrilateral base. Finally, we show that the scene quadrilateral and the center of projection can be analytically reconstructed from a single view when prior knowledge on the quadrilateral is available. ...
Build Your Own 3D Scanner: Course Notes
http://mesh.brown.edu/byo3d/
SIGGRAPH 2009 Courses
Douglas Lanman and Gabriel Taubin
Camera Calibration from a Single Image Based on Coupled Line Cameras and Rectangle Constraint (Joo-Haeng Lee)
ICPR 2012 Paper Abstract
Title: Camera Calibration from a Single Image Based on Coupled Line Cameras and Rectangle Constraint
Author: Lee, Joo-Haeng (ETRI)
Scheduled for presentation during the Regular Session "Poster Shotgun (04): CV" (TuPSAT2), Tuesday, November 13, 2012, 08:30-09:00, Multi-Purpose Hall
21st International Conference on Pattern Recognition, November 11-15, 2012, Tsukuba International Congress Center, Tsukuba, Japan
This information is tentative and subject to change. Compiled on February 13, 2013
This project finds real objects' dimensions from their images using OpenCV (Java), with two methods: the Reference Method and the Stereo Method.
Code for this project is available on my GitHub:
https://github.com/shyamabhuvanendran/VirtualRuler_CV
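The Reference Method described above boils down to a pixels-per-metric ratio: a reference object of known real size fixes the image scale, and other objects lying in the same plane are then measured with it. A hypothetical sketch of the idea in Python (the actual project uses OpenCV in Java; the helper names and numbers here are made up for illustration):

```python
def pixels_per_metric(ref_pixel_width, ref_real_width):
    """Scale factor derived from a reference object of known size (e.g. a coin)."""
    return ref_pixel_width / ref_real_width

def real_size(object_pixel_width, ppm):
    """Real-world size of an object lying in the same plane as the reference."""
    return object_pixel_width / ppm

# Reference: a 25 mm coin spans 100 px in the image, so 4 px per mm.
ppm = pixels_per_metric(100.0, 25.0)
width_mm = real_size(220.0, ppm)   # an object spanning 220 px is 55 mm wide
```

The pixel widths themselves would come from contour detection and bounding boxes in OpenCV; the ratio only holds for objects at the same distance and orientation as the reference, which is the Reference Method's main limitation and the reason the project also offers a Stereo Method.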
Improved Characters Feature Extraction and Matching Algorithm Based on SIFT (Nooria Sukmaningtyas)
Because the SIFT algorithm lacks affine invariance and has high time and space complexity, it is difficult to apply to real-time processing of batch image sequences, so an improved SIFT feature extraction algorithm is proposed in this paper. First, the MSER algorithm detects maximally stable extremal regions in place of the extrema detected by the DoG operator, increasing the stability of the features and reducing the number of feature descriptors. Second, the circular feature region is divided into eight fan-shaped sub-regions instead of the 16 square sub-regions of traditional SIFT, and a Gaussian-weighted gradient information field is used to construct the new SIFT feature descriptor. Experimental results show that, compared with the traditional SIFT algorithm, the improved algorithm not only has translational, scale, and rotational invariance, but also affine invariance and faster speed, meeting the requirements of real-time image processing applications.
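The descriptor construction sketched in the abstract (fan-shaped angular sub-regions with Gaussian-weighted gradient-orientation histograms, in place of SIFT's square grid) can be illustrated with NumPy; this is a rough sketch under assumed parameters, not the paper's implementation:

```python
import numpy as np

def fan_descriptor(patch, n_sectors=8, n_bins=8):
    """Descriptor for a square grayscale patch: split the inscribed circular
    region into n_sectors angular (fan-shaped) sub-regions and accumulate a
    Gaussian-weighted gradient-orientation histogram in each."""
    h, w = patch.shape
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ori = np.arctan2(gy, gx) % (2 * np.pi)           # gradient orientation in [0, 2pi)
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.hypot(yy - cy, xx - cx)
    inside = r <= min(cy, cx)                        # keep the inscribed circle only
    sigma = 0.5 * min(cy, cx)
    weight = mag * np.exp(-(r ** 2) / (2 * sigma ** 2))   # Gaussian-weighted gradients
    angle = np.arctan2(yy - cy, xx - cx) % (2 * np.pi)
    sector = (angle / (2 * np.pi / n_sectors)).astype(int) % n_sectors
    bins = (ori / (2 * np.pi / n_bins)).astype(int) % n_bins
    desc = np.zeros((n_sectors, n_bins))
    np.add.at(desc, (sector[inside], bins[inside]), weight[inside])
    desc = desc.ravel()
    n = np.linalg.norm(desc)
    return desc / n if n > 0 else desc               # L2-normalize, as SIFT does
```

With 8 sectors of 8 orientation bins the descriptor has 64 dimensions, half the 128 of standard SIFT's 4x4 grid, which is one source of the claimed speedup.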
This algorithm is better than Canny by 0.7%, but lacks the speed and optimization capability; this could be addressed by adding a neural network and PSO search to the same.
This work used a dual-FIS optimization technique to find the high frequencies, i.e. the edges, in the images and neglect the lower frequencies.
Study and Comparison of Various Image Edge Detection Techniques (CSCJournals)
Edges characterize boundaries and are therefore a problem of fundamental importance in image processing. Edge detection significantly reduces the amount of data and filters out useless information while preserving the important structural properties of an image. Since edge detection is at the forefront of image processing for object detection, it is crucial to have a good understanding of edge detection algorithms. In this paper, a comparative analysis of various image edge detection techniques is presented. The software was developed using MATLAB 7.0. It is shown that Canny's edge detection algorithm performs better than all the other operators under almost all scenarios. Evaluation of the images showed that under noisy conditions, Canny, LoG (Laplacian of Gaussian), Roberts, Prewitt, and Sobel exhibit better performance, respectively. It has also been observed that Canny's edge detection algorithm is computationally more expensive than the LoG (Laplacian of Gaussian), Sobel, Prewitt, and Roberts operators.
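The operators compared in the paper are small convolution kernels whose responses peak at intensity boundaries. A self-contained sketch contrasting the Sobel and Roberts operators on a synthetic step edge (the kernel values are the standard ones; the helper and test image are illustrative):

```python
import numpy as np

def conv2(img, k):
    """Plain 2D 'valid'-mode convolution-style correlation (illustrative, slow)."""
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

# Standard derivative kernels.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
SOBEL_Y = SOBEL_X.T
ROBERTS_X = np.array([[1, 0], [0, -1]], float)
ROBERTS_Y = np.array([[0, 1], [-1, 0]], float)

def edge_magnitude(img, kx, ky):
    gx, gy = conv2(img, kx), conv2(img, ky)
    return np.hypot(gx, gy)     # gradient magnitude; large at boundaries

# Synthetic image with a vertical step edge at column 8.
img = np.zeros((16, 16)); img[:, 8:] = 1.0
sobel = edge_magnitude(img, SOBEL_X, SOBEL_Y)
roberts = edge_magnitude(img, ROBERTS_X, ROBERTS_Y)
```

Both operators respond only near the step and are silent in the flat regions; Sobel's larger 3x3 support gives a stronger, smoother response than the 2x2 Roberts cross, which is one reason for their different noise behavior noted in the paper.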
DTAM: Dense Tracking and Mapping in Real-Time, Robot Vision Group (Lihang Li)
These are the slides about DTAM from my group meeting report; I hope they help anyone who wants to implement DTAM and needs to understand it deeply.
Determination of System Geometrical Parameters and Consistency between Scans ... (David Scaduto)
Digital breast tomosynthesis (DBT) requires precise knowledge of acquisition geometry for accurate image reconstruction. Further, image subtraction techniques employed in dual-energy contrast-enhanced tomosynthesis require that scans be performed under nearly identical geometrical conditions. A geometrical calibration algorithm is developed to investigate system geometry and geometrical consistency of image acquisition between consecutive digital breast tomosynthesis scans, according to requirements for dual-energy contrast-enhanced tomosynthesis. Investigation of geometrical accuracy and consistency on a prototype DBT unit reveals accurate angular measurement, but potentially clinically significant differences in acquisition angles between scans. Further, a slight gantry wobble is observed, suggesting the need for incorporation of gantry wobble into image reconstruction, or improvements to system hardware.
IAA-LA2-10-01 Spectral and Radiometric Calibration Procedure for a SWIR Hyper... (Christian Gabriel Gomez)
Presentation for the 2nd IAA Latin American Symposium on Small Satellites.
Spectral and radiometric calibration procedure for a SWIR hyperspectral camera.
Keynote talk by Mark Billinghurst at the 9th XR-Metaverse conference in Busan, South Korea. The talk was given on May 20th, 2024. It talks about progress on achieving the Metaverse vision laid out in Neal Stephenson's book, Snow Crash.
These are slides from the Defence Industry event organized by the Australian Research Centre for Interactive and Virtual Environments (IVE). This was held on April 18th 2024, and showcased IVE research capabilities to the South Australian Defence industry.
This is a guest lecture given by Mark Billinghurst at the University of Sydney on March 27th 2024. It discusses some future research directions for Augmented Reality.
Presentation given by Mark Billinghurst at the 2024 XR Spring Summer School on March 7 2024. This lecture talks about different evaluation methods that can be used for Social XR/AR/VR experiences.
Empathic Computing: Delivering the Potential of the Metaverse (Mark Billinghurst)
Invited guest lecture by Mark Billinghurst given at the MIT Media Laboratory on November 21st 2023. This was given as part of Professor Hiroshi Ishii's class on Tangible Media.
Talk to Me: Using Virtual Avatars to Improve Remote Collaboration (Mark Billinghurst)
A talk given by Mark Billinghurst at the CLIPE workshop in Tübingen, Germany on April 27th 2023. This talk describes how virtual avatars can be used to support remote collaboration.
Empathic Computing: Designing for the Broader Metaverse (Mark Billinghurst)
Keynote talk given by Mark Billinghurst at the CHI 2023 workshop Towards an Inclusive and Accessible Metaverse. The talk was given on April 23rd 2023.
Lecture 6 of the COMP 4010 course on AR/VR. This lecture is about designing AR systems. This was taught by Mark Billinghurst at the University of South Australia on September 1st 2022.
Keynote speech given by Mark Billinghurst at the ISS 2022 conference. Presented on November 22nd, 2022. This keynote outlines some research opportunities in the Metaverse.
Lecture 5 in the 2022 COMP 4010 lecture series. This lecture is about AR prototyping tools and techniques. The lecture was given by Mark Billinghurst from University of South Australia in 2022.
Lecture 4 in the 2022 COMP 4010 lecture series on AR/VR. This lecture is about AR Interaction techniques. This was taught by Mark Billinghurst at the University of South Australia in 2022.
Lecture 3 in the 2022 COMP 4010 lecture series on AR/VR. This lecture provides an introduction for AR Technology. This was taught by Mark Billinghurst at the University of South Australia in 2022.
Lecture 2 in the 2022 COMP 4010 Lecture series on AR/VR and XR. This lecture is about human perception for AR/VR/XR experiences. This was taught by Mark Billinghurst at the University of South Australia in 2022.
Lecture 1 for the 2022 COMP 4010 course on AR and VR. This course was taught by Mark Billinghurst at the University of South Australia in 2022. This lecture provides an introduction to AR, VR and XR.
Empathic Computing and Collaborative Immersive Analytics (Mark Billinghurst)
Short talk by Mark Billinghurst on Empathic Computing and Collaborative Immersive Analytics, presented on July 28th 2022 at the SIGGRAPH 2022 conference.
Lecture given by Mark Billinghurst on June 18th 2022 about how the Metaverse can be used for corporate training. In particular how combining AR, VR and other Metaverse elements can be used to provide new types of learning experiences.
Elevating Tactical DDD Patterns Through Object Calisthenics (Dorra BARTAGUIZ)
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Kubernetes & AI - Beauty and the Beast!?! @ KCD Istanbul 2024 (Tobias Schneck)
As AI technology pushes into IT, I was wondering, as an "infrastructure container Kubernetes guy", how does this fancy AI technology get managed from an infrastructure operations view? Is it possible to apply our lovely cloud native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and provide a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply it to our own infrastructure and get it to work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already got working for real.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview (Prayukth K V)
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on countries – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Smart TV Buyer Insights Survey 2024 (91mobiles)
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... (DanBrown980551)
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
DevOps and Testing slides at DASA Connect (Kari Kakkonen)
Slides by me and Rik Marselis at the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what Testing in DevOps is. Finally, we had a lovely workshop with the participants, trying to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
GraphRAG is All You Need? LLM & Knowledge Graph (Guy Korland)
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Solutions Apricot) (Jeffrey Haguewood)
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
2. Registration
• We wish to calculate the transformation from the camera to the object (extrinsic parameters). In order to do this, we must find the transformation from the camera to the image plane (camera intrinsics), and combine that with the transformation from known points in the object to their locations in the image plane.
3. Object to Image Plane
• The calculation for the point on the image plane (px, py) is related to the ray passing from the object point (Px, Py, Pz) through the camera focal point and intersecting the image plane at focal length f, such that:
4. Object to Image Plane
• The previous formulas can be represented in matrix form as: (equation is non-linear – s is a scale factor)
• The previous equations have been assuming a perfect pinhole aperture. Instead we have a lens, which has a principal point (up, vp) – the transformation from the camera origin to the image plane origin – and scale factors (sx, sy) converting pixel distance to real-world units (mm).
6. Camera Calibration
• Knowing the camera intrinsics we can calculate the transformation from an object P to a pixel (u, v).
• During the process of calibration we calculate the intrinsics.
• This is done by taking multiple images of a planar chessboard where each square is a known size.
7. Camera Calibration
• If we assume the z value of each point on the chessboard to be 0, then the transformation is found as: For each point there is a homography mapping P to (u, v):
8. Camera Calibration
• Through some derivation and substitution, we find: With the homography represented as: The matrix: multiplies with the H vector to:
9. Camera Calibration
• With at least four pairs of point correspondences, we can solve: using Singular Value Decomposition for total least squares minimization. From the homography of these four points, the values of (u, v), s, (sx, sy) can be estimated with a bit more maths.
(Zhang, Z.: 2000, A flexible new technique for camera calibration, IEEE Transactions on Pattern Analysis and Machine Intelligence 22, 1330–1334.)
10. Camera Calibration
• Once we have the camera calibration, we can go ahead and compute the extrinsic parameters (transformation) as: Now that we know the complete transformation, we can optimise our intrinsic parameters using the Levenberg-Marquardt algorithm on: We can also calculate radial distortions of the lens and remove them if we feel so inclined, and further optimise.
12. Camera Calibration
• Camera Parameters
1. Perspective Projection Matrix
2. Image Distortion Parameters
• Two camera calibration methods
1. Accurate 2-step method
2. Easy 1-step method
13. Easy 1-step method: 'calib_camera2.exe'
• Finds all camera parameters including distortion and perspective projection matrix.
• Doesn't require careful setup.
• Accuracy is good enough for image overlay. (Not good enough for 3D measurement.)
14. Using 'calib_dist2.exe'
Selecting dots with mouse; getting distortion parameters by automatic line-fitting
• Take pattern pictures as large as possible.
• Slant in various directions with a big angle.
• 4 times or more
15. Accurate 2-step method
• Using dot pattern and grid pattern
• 2-step method
– 1) Getting distortion parameters – calib_dist.exe
– 2) Getting perspective projection parameters
22. Registration
• We now have a reliable model of the camera's intrinsic parameters, and have removed any radial distortion. Now it's just a matter of learning some points in a marker, and then searching for them in each frame, calculating the extrinsic parameters as:
25. Question: Getting TCM
• Known Parameters
– Camera Parameter: C
– Image Distortion Parameters: x0, y0, f, s
– Coordinates of 4 Vertices in Marker Coordinate Frame
• Parameters Obtained by Image Processing
– Coordinates of 4 Vertices in Observed Screen Coordinates
• Goal
– Getting the Transformation Matrix from Marker to Camera
29. Estimation of Transformation Matrix
1st step: Geometrical calculation – Rotation & Translation
2nd step: Optimization – Iterative processing
• Optimization of Rotation Component
• Optimization of Translation Component
30. Optimization of Rotation Component
• Observed positions of 4 vertices
• Calculated positions of 4 vertices
– Positions in marker coordinates → (estimated transformation matrix & perspective matrix) → ideal screen coordinates → (distortion function) → positions in observed screen coordinates
• Minimizing the distance between observed and calculated positions by changing the rotation component in the estimated transformation matrix
31. Search Tcm by Minimizing Error
• Optimization – Iterative process
32. (2) Use of estimation accuracy
• arGetTransMat() minimizes the 'err' and returns this minimized 'err'.
• If 'err' is still big:
– Mis-detected marker.
– Use of camera parameters from bad calibration.
33. How to set the initial condition for the Optimization Process
• Geometrical calculation based on 4 vertices coordinates
– Independent in each image frame: good feature.
– Unstable result (jitter occurs): bad feature.
• Use of information from the previous image frame
– Needs previous frame information.
– Cannot be used for the first frame.
– Stable results. (This does not mean accurate results.)
• ARToolKit supports both
34. Two types of initial condition
1. Geometrical calculation based on 4 vertices in screen coordinates
double arGetTransMat( ARMarkerInfo *marker_info,
double center[2], double width,
double conv[3][4] );
2. Use of information from the previous image frame
double arGetTransMatCont( ARMarkerInfo *marker_info,
double prev_conv[3][4],
double center[2], double width,
double conv[3][4] );
35. Use of Inside pattern
• Why?
– A square has symmetries in 90-degree rotation
• 4 templates are needed for each pattern
– Enables the use of multiple markers
• How?
– Template matching
– Normalizing the shape of the inside pattern
– Normalized correlation
36. Accuracy vs. Speed on pattern identification
• Pattern normalization takes much time.
• This is a problem when using many markers.
• Normalization process: normalization, then resolution conversion.
39. In 'config.h'
– #define AR_PATT_SAMPLE_NUM 64
– #define AR_PATT_SIZE_X 16
– #define AR_PATT_SIZE_Y 16

Identification  Accuracy  Speed
Large size      Good      Slow
Small size      Bad       Fast
41. Natural Feature Registration
• There are three steps to natural feature registration: find reliable points, describe points uniquely, match points.
• There are heaps of existing natural feature registration algorithms (SIFT, SURF, GLOH, Ferns…) with their own intricacies, so we will just look at a high-level approach
42. How NFR Works
1. Find feature points in the image.
2. In order to differentiate each feature point, create a descriptor of a local window using a function.
3. Repeat 1 and 2 for both the source, or "marker" image, as well as the current frame.
4. Compare all features in the marker to all features in the current frame to find the closest matches.
5. Use matches to calculate the transformation
43. Feature Detection
• Feature detection involves finding areas of an image which are unique amongst their surroundings, and can easily be identified regardless of changes in viewpoint.
• Good feature candidates are corners and points.
45. Feature Description
• A feature point has 0 dimensions, and as such, there is no way of telling them apart.
• To resolve this, a window surrounding the point is transformed into a 1-dimensional array.
• The window is examined at the scale the point was found at, and the transformation needs to allow for distortion/deformation, but still be able to distinguish between every feature.
47. Feature Matching
• A marker is trained when the features and descriptors present have all been found.
• During runtime, this process is performed for each frame of video.
• The descriptors of each feature are compared between the marker and the current frame. If the descriptors of two features are similar within a threshold, they are assumed to be a match.
49. Registration
• From here, we can optionally run RANSAC over the homography calculation:
1. Pick 4 random points, find homography
2. Test homography by evaluating other points
3. If |p - HP| < e, recompute homography with all inliers, else goto 1
• From there we just take the homography, combine it with the camera intrinsics and get the transformation matrix.
51. NFR Applications
• Any application using marker-based registration can also be achieved using NFR, but there are a number of additional possibilities.
• As NFR does not require special markings, any existing media can be used without modification, e.g. paintings in museums, print media advertisements, etc.
52. NFR Applications
• NFR is especially suited to applications where there is another "layer" of data relevant to an existing surface, e.g. three-dimensional overlays of map data, "MagicBooks", proposed building sites, manufacturing blueprints, etc.
57. Mobile NFR
Mobile Augmented Reality is becoming extremely popular due to the ubiquitous nature of devices with cameras and displays. The processing capabilities of these devices are improving, and natural feature registration is becoming increasingly feasible with the design of NFR algorithms for high performance.
Wagner, D.; Reitmayr, G.; Mulloni, A.; Drummond, T.; Schmalstieg, D., "Pose tracking from natural features on mobile phones," Mixed and Augmented Reality, 2008. ISMAR 2008. 7th IEEE/ACM International Symposium on, pp. 125-134, 15-18 Sept. 2008
58. Non-Rigid NFR
Using deformation models, non-rigid planar surfaces can be registered, and their shape recovered. Not only does this improve registration robustness, but it also allows for more realistic rendering of augmented content.
J. Pilet, V. Lepetit, and P. Fua, Fast Non-Rigid Surface Detection, Registration and Realistic Augmentation, International Journal of Computer Vision, Vol. 76, Nr. 2, February 2008.
M. Salzmann, J. Pilet, S. Ilic, P. Fua, Surface Deformation Models for Non-Rigid 3-D Shape Recovery, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 29, Nr. 8, pp. 1481-1487, August 2007.
59. Model Based Tracking
Using a known three-dimensional model in conjunction with edge/texture information, three-dimensional objects can be tracked regardless of viewpoint. Model based tracking also improves robustness to self-occlusion.
Reitmayr, G.; Drummond, T.W., "Going out: robust model-based tracking for outdoor augmented reality," Mixed and Augmented Reality, 2006. ISMAR 2006. IEEE/ACM International Symposium on, pp. 109-118, 22-25 Oct. 2006
L. Vacchetti, V. Lepetit and P. Fua, Stable Real-Time 3D Tracking Using Online and Offline Information, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 26, Nr. 10, pp. 1385-1391, 2004.
61. What makes good NFR?
• In order for a natural feature registration algorithm to work well, it must be robust to common image transformations and distortions:
62. Feature descriptor robustness
• Feature descriptors are vulnerable to transformations and distortions, with the exception of translation and scale, which are handled by modifying the descriptor window to match the scale and position the feature was detected at.
64. OPIRA
• The Optical-flow Perspective Invariant Registration Augmentation (OPIRA) is an algorithm which adds perspective invariance to existing registration algorithms by tracking the object over multiple frames using optical flow, and using perspective correction to eliminate the effect of perspective distortions.
Clark, A., Green, R. and Grant, R.: 2008, Perspective correction for improved visual registration using natural features. Image and Vision Computing New Zealand, 2008. IVCNZ 2008. 23rd International Conference, pp. 1-6
66. OPIRA Process
• Once an initial frame of registration occurs, all correct points used for registration are tracked from frame t-1 to t using sparse optical flow.
• The transformation is calculated for frame t based on the tracked points and their marker positions as matched in frame t-1
67. OPIRA Process (Cont.)
• Using the inverse of the transformation computed using Optical Flow, frame t is warped to match the position and orientation of the marker.
• The registration algorithm is performed on the newly aligned frame. Matches are found, and the transformation is multiplied by the Optical Flow transform to realign the transformation with the original image.
69. Additional Benefits
• OPIRA is able to add some degree of scale and rotation invariance to existing algorithms, by transforming the object to match its marker representation.
• Using the undistorted image, we can perform background subtraction to isolate occluding objects for pixel-scale occlusion in Augmented Reality.