We demonstrate real-time tracking of a fast-moving object in a 3D volume while obtaining its precise XYZ coordinates. Two separate scanning MEMS micromirror sub-systems track the object in a 20 kHz closed loop. A demonstration system capable of tracking full-speed human hand motion provides position information at up to 5 m distance with 16-bit precision, or ≤20 μm precision on the X and Y axes (up/down, left/right), and precision on the depth (Z) axis from 10 μm to 1.5 mm, depending on distance.
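The XYZ readout described above is, in essence, angular triangulation: each scanner reports a pointing angle, and two angles plus a known baseline fix the target position. A minimal sketch of that geometry (the baseline value and angle conventions here are illustrative, not the demonstration system's actual parameters):

```python
import math

def triangulate(theta1, theta2, baseline):
    """Recover the (x, z) position of a target from the horizontal
    pointing angles (radians) of two scanners separated by `baseline`
    metres along the x axis. theta1 is measured at x = 0, theta2 at
    x = baseline; both are angles from the z (depth) axis."""
    denom = math.tan(theta1) - math.tan(theta2)
    if abs(denom) < 1e-12:
        raise ValueError("rays are parallel; target effectively at infinity")
    z = baseline / denom      # depth
    x = z * math.tan(theta1)  # lateral position
    return x, z

# Target at x = 0.5 m, z = 2.0 m seen over a 0.1 m baseline:
x, z = triangulate(math.atan2(0.5, 2.0), math.atan2(0.4, 2.0), 0.1)
```

Because depth enters each angle only through x/z, a fixed angular resolution yields a depth uncertainty that grows roughly as z²/baseline, consistent with the quoted Z precision degrading from 10 μm to 1.5 mm with distance.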
The flow of baseline estimation using a single omnidirectional camera (TELKOMNIKA JOURNAL)
The baseline is the distance between two cameras; with a single camera it cannot be measured directly. The baseline is one of the important parameters for finding the depth of objects by stereo triangulation. Here, the flow of the baseline is produced by moving the camera horizontally from its original location. Using baseline estimation, the depth of an object can be determined with only an omnidirectional camera. This research focuses on determining the flow of the baseline before calculating the disparity map. To estimate the flow and track the object, three and four points on the surface of an object are chosen in two different panoramic images. By moving the camera horizontally, we obtain their tracks, which are visually similar; each track represents the coordinates of one tracking point. Two of the four tracks have graphs resembling a second-order polynomial.
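The depth computation this abstract builds toward is standard stereo triangulation, depth = f·B/d; with a single moving camera, B is the estimated translation between capture positions. A minimal sketch (variable names and the example numbers are illustrative):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Stereo triangulation: depth = f * B / d. With a single moving
    camera, B is the translation between the two capture positions,
    i.e. the estimated baseline this work is concerned with."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# e.g. a 700 px focal length, 0.1 m of camera motion, 35 px disparity:
depth = depth_from_disparity(700.0, 0.1, 35.0)
```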
The aim of this paper is to present the essential elements of an electro-optical imaging system (EOIS) for space applications and how these elements affect its function. When a spacecraft is designed for daytime low-orbit missions, the electro-optical imaging system becomes a key part of the satellite, enabling it to take images of regions of interest. An example electro-optical satellite imaging system is presented, along with the restrictions that have to be considered during the design process. Based on optics principles and ray-tracing techniques, the dimensions of the lenses and the CCD (Charge-Coupled Device) detector are changed to match the physical satellite requirements. Laboratory experiments were carried out to verify that resizing the electro-optical elements of the imaging system does not affect the imaging mission configuration. The procedures used to measure the field of view and ground resolution are discussed, and example satellite images are shown to illustrate the ground-resolution effects.
Real-time Moving Object Detection using SURF (iosrjce)
IOSR Journal of Computer Engineering (IOSR-JCE) is a double-blind, peer-reviewed international journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes high-quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
Massive Sensors Array for Precision Sensing (oblu.io)
With more than a billion smartphones sold annually, and growing at a CAGR of 16%, the smartphone industry has become a driving force in the development of ultra-low-cost inertial sensors. Unfortunately, these ultra-low-cost sensors do not yet meet the needs of more demanding applications such as inertial navigation and biomedical motion-tracking systems. However, by adopting wisdom-of-the-crowd thinking and designing arrays of hundreds of sensing elements, one can capitalize on the decreasing cost, size, and power consumption of the sensors to construct virtual high-performance, low-cost inertial sensors. Teams at KTH, Sweden, and WUSTL, USA, share their findings and challenges.
Despite being around for almost two decades, foot-mounted inertial navigation has seen only limited adoption. Contributing factors are the lack of suitable hardware platforms and difficult system integration. As a solution, we present an open-source wireless foot-mounted inertial navigation module with an intuitive and significantly simplified dead-reckoning interface. The interface is motivated by statistical properties of the underlying aided inertial navigation and is argued to give negligible information loss. The module consists of both a hardware platform and embedded software. Details of the platform and the software are described, and a summarizing description of how to reproduce the module is given. System integration of the module is outlined and, finally, we provide a basic performance assessment. In summary, the module modularizes foot-mounted inertial navigation and makes the technology significantly easier to use.
Inertial Sensor Array Calibration Made Easy! (oblu.io)
Ultra-low-cost single-chip inertial measurement units (IMUs) combined into IMU arrays are opening up new possibilities for inertial sensing. To make these systems practical, however, calibration and misalignment compensation of low-cost IMU arrays are necessary, and a simple calibration procedure that aligns the sensitivity axes of the sensors in the array is needed. A team at KTH proposes a novel calibration procedure that requires no mechanical rotation rig, based on blind system identification and a platonic solid (an icosahedron) printable on a contemporary 3D printer. MATLAB scripts for the parameter estimation and production files for the calibration device are made available.
An Experimental Study on a Pedestrian Tracking Device (oblu.io)
The implemented navigation algorithm of an inertial navigation system (INS), along with the hardware configuration, determines its tracking performance; operating conditions also influence it. The aim of this study is to demonstrate the robust performance of a foot-mounted INS based on multiple Inertial Measurement Units (IMUs), the Osmium MIMU22BTP, under varying operating conditions. The device, which performs zero-velocity-update (ZUPT) aided navigation, is subjected to different conditions that could influence the gait of its wearer, its hardware configuration, etc. The gait-influencing factors chosen for study are shoe type, walking surface, path profile, and walking speed. The tracking performance of the device is also studied for different numbers of on-board IMUs and different ambient temperatures, and is benchmarked using identified performance metrics. We observe very robust tracking performance from the MIMU22BTP: the average relative errors with respect to drift, distance, and height are less than 3 to 4% under all conditions, indicating potential for a variety of location-based services built on foot-mounted inertial sensing and dead reckoning.
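The ZUPT aiding mentioned above hinges on detecting the stance phase of each step, when the foot is momentarily stationary. A minimal stance-detector sketch (the thresholds are illustrative, not the MIMU22BTP's tuned values):

```python
import numpy as np

GRAVITY = 9.81  # m/s^2

def detect_stance(accel, gyro, acc_tol=0.5, gyro_tol=0.3):
    """Flag samples where the foot is (nearly) stationary: the
    specific-force magnitude is close to gravity AND the angular
    rate is low. accel: (N, 3) in m/s^2, gyro: (N, 3) in rad/s.
    Returns a boolean (N,) stance mask; a ZUPT-aided filter feeds
    zero-velocity pseudo-measurements at the flagged samples."""
    acc_norm = np.linalg.norm(accel, axis=1)
    gyro_norm = np.linalg.norm(gyro, axis=1)
    return (np.abs(acc_norm - GRAVITY) < acc_tol) & (gyro_norm < gyro_tol)
```

The zero-velocity updates reset the velocity error of the free-running INS once per step, which is what keeps the position drift bounded to a few percent of distance travelled.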
An Enhanced Computer Vision Based Hand Movement Capturing System with Stereo ... (CSCJournals)
This framework captures hand movement at three different depth levels. The algorithm can capture and identify when the hand is moving up, down, right, or left, generating four signals from these movements. Moreover, when the movements are performed at 15-75 cm, 75-100 cm, or 100-200 cm from the camera (three depth levels), twelve distinct signals can be generated. These signals could be used for applications such as game control. The existing method uses an object-area-based method for depth analysis; the results show that the proposed work has higher accuracy than the existing method when tested for depth analysis.
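The 4 directions × 3 depth levels = 12 signals can be sketched as a simple lookup. The area thresholds below are hypothetical: the paper bins depth from the segmented object's area, but the actual cut-off values are not given here.

```python
def classify_depth(area_px):
    """Bin hand depth by segmented-object area in pixels: a nearer
    hand covers more of the frame. Thresholds are hypothetical."""
    if area_px > 20000:
        return 0  # roughly 15-75 cm
    if area_px > 8000:
        return 1  # roughly 75-100 cm
    return 2      # roughly 100-200 cm

def signal_id(direction, depth_bin):
    """4 directions x 3 depth levels -> 12 distinct control signals."""
    directions = ["up", "down", "left", "right"]
    return depth_bin * len(directions) + directions.index(direction)
```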
Primal-Dual Coding to Probe Light Transport
Matthew O'Toole, Ramesh Raskar, and Kiriakos N. Kutulakos. ACM SIGGRAPH, 2012.
Abstract:
We present primal-dual coding, a photography technique that enables direct fine-grain control over which light paths contribute to a photo. We achieve this by projecting a sequence of patterns onto the scene while the sensor is exposed to light. At the same time, a second sequence of patterns, derived from the first and applied in lockstep, modulates the light received at individual sensor pixels. We show that photography in this regime is equivalent to a matrix probing operation in which the elements of the scene's transport matrix are individually re-scaled and then mapped to the photo. This makes it possible to directly acquire photos in which specific light transport paths have been blocked, attenuated or enhanced. We show captured photos for several scenes with challenging light transport effects, including specular inter-reflections, caustics, diffuse inter-reflections and volumetric scattering. A key feature of primal-dual coding is that it operates almost exclusively in the optical domain: our results consist of directly-acquired, unprocessed RAW photos or differences between them.
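The equivalence between the coded capture and matrix probing can be checked numerically: accumulating m_k^T T p_k over the pattern sequence equals summing T weighted elementwise by W = Σ_k m_k p_k^T. A small NumPy sketch of this relationship (a toy simulation, not the authors' implementation):

```python
import numpy as np

def primal_dual_photo(T, proj_patterns, mask_patterns):
    """Simulate a primal-dual coded exposure: at step k the projector
    emits pattern p_k while the sensor mask is m_k, and the exposures
    accumulate. The result equals probing the transport matrix T with
    W = sum_k outer(m_k, p_k): every element T[i, j] is rescaled by
    W[i, j] and summed into the photo."""
    photo = 0.0
    W = np.zeros_like(T)
    for p, m in zip(proj_patterns, mask_patterns):
        photo += m @ T @ p   # light integrated during step k
        W += np.outer(m, p)  # accumulated probing matrix
    return photo, W

T = np.array([[1.0, 2.0],
              [3.0, 4.0]])   # toy 2-pixel x 2-source transport matrix
photo, W = primal_dual_photo(T, [np.array([1.0, 0.0])],
                                [np.array([1.0, 1.0])])
# photo equals np.sum(W * T) by construction
```

Choosing the pattern pairs so that W is zero on unwanted entries of T is exactly what lets specific light transport paths be blocked, attenuated, or enhanced, directly in the optical domain.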
Evolution of a shoe-mounted multi-IMU pedestrian dead reckoning (PDR) sensor (oblu.io)
Shoe-mounted inertial navigation systems, also known as pedestrian dead reckoning (PDR) sensors, are preferred for pedestrian navigation because of the accuracy they offer; such shoe sensors are, for example, the obvious choice for real-time location systems for first responders. The open-source platform OpenShoe has reported using multiple IMUs in shoe-mounted PDR sensors to enhance noise performance. In this paper, we present an experimental study of the noise performance and the operating-clock-dependent power consumption of multi-IMU platforms. The noise performance of a multi-IMU system with different combinations of IMUs is studied, and a four-IMU system is found to be best optimized for cost, area, and power. Experiments with varying operating clock frequencies are performed on an in-house four-IMU shoe-mounted inertial navigation module (the Oblu module), from which power-optimized operating clock frequencies are obtained. The overall study suggests that, by selecting a well-designed operating point, a multi-IMU system can be made cost-, size-, and power-efficient without practically affecting its superior positioning performance.
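The noise benefit of combining IMUs is the familiar averaging effect: for independent white noise, averaging N sensors cuts the standard deviation by √N, so a four-IMU array roughly halves it. A quick simulated check (the noise level is illustrative, not a measured sensor specification):

```python
import numpy as np

def fuse_imus(samples):
    """Average the synchronized outputs of N collocated IMUs
    (an (N, T) array). For independent white noise the fused
    standard deviation drops by a factor of sqrt(N)."""
    return np.mean(samples, axis=0)

rng = np.random.default_rng(0)
single = rng.normal(0.0, 0.02, size=10000)     # one noisy gyro channel
four = rng.normal(0.0, 0.02, size=(4, 10000))  # four-IMU array
fused = fuse_imus(four)
# np.std(fused) comes out roughly half of np.std(single)
```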
Abstract - Positioning is a fundamental component of human life, enabling meaningful interpretation of the environment. Without knowledge of position, human beings are like machines, with very limited ability to interact with their surroundings; even machines can be made smarter if positioning information is made available to them. Indoor positioning of pedestrians is the broad area considered in this thesis, for which a foot-mounted pedestrian tracking device has been studied. Systems that utilize foot-mounted inertial navigation have been in the literature for more than two decades, yet very few real-time implementations have been possible. The purpose of this thesis is to benchmark and improve the performance of one such implementation.
Multi Inertial Measurement Units (MIMU) Platforms: Designs & Applications (oblu.io)
There are typically three categories of multi-sensor systems. First, classical sensor systems with different types of collocated sensors, e.g. a positioning system using a collocated inertial sensor, a pressure sensor, and a GPS. Second, sensor-joint systems in which multiple sensors of the same type coordinate to predict the state of a system, e.g. estimating the motion of a robotic or human arm using multiple sensors attached at different positions to capture versatile motion. The third kind consists of collocated sensors with the same properties. The redundancy from multiple sensors not only enhances the noise performance of the system but also lets the multi-sensor system achieve what a single sensor cannot: e.g. a two-dimensional array of accelerometers on a rigid circuit board can produce rotational information. On the one hand, the growing capabilities, shrinking size, and falling cost of MEMS sensors favor redundancy; on the other hand, data communication, processing, and calibration compensation pose system-level challenges.
The talk focused on the technical merits of such multi-sensor systems. It covered the architecture of massive multi-IMU arrays with up to 288 measurement channels at 1 kHz, the engineering challenges associated with them, including the requirements on on-node data processing, their merits, and some applications.
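The claim that an accelerometer array yields rotational information can be illustrated in one dimension: two tangential accelerometers at different radii on a rigid board differ by α·Δr, so angular acceleration is recoverable without a gyroscope. A simplified sketch (hypothetical readings; the board's common linear acceleration cancels in the difference, and centripetal terms appear only on the radial axes):

```python
def angular_accel_1d(a_outer, a_inner, r_outer, r_inner):
    """Angular acceleration (rad/s^2) of a rigid board from two
    tangential accelerometer readings (m/s^2) taken at radii
    r_outer > r_inner (m) from the rotation centre: the shared
    linear acceleration cancels, leaving alpha * (r_outer - r_inner)."""
    return (a_outer - a_inner) / (r_outer - r_inner)

# readings of 2.0 and 1.0 m/s^2 at radii 10 cm and 5 cm:
alpha = angular_accel_1d(2.0, 1.0, 0.10, 0.05)
```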
Compressive Light Field Photography using Overcomplete Dictionaries and Optim... (Ankit Thiranh)
In this paper, a design is proposed for a compressive light-field camera that recovers higher-resolution light fields from a single image. Various other useful applications of light-field atoms are also discussed, including 4D light-field compression and denoising.
Digital 3D imaging can benefit from advances in VLSI technology to accelerate its deployment in fields such as visual communication and industrial automation. High-resolution 3D images can be acquired using laser-based vision systems, with which the 3D information becomes relatively insensitive to background illumination and surface texture. Complete images can be generated of visible surfaces that are rather featureless to the human eye or a video camera. Intelligent digitizers will be capable of measuring color and 3D shape accurately and simultaneously.
A Fast Single-Pixel Laser Imager for VR/AR Headset Tracking (Ping Hsu)
In this work we demonstrate a highly flexible laser imaging system for 3D sensing applications such as in tracking of VR/AR headsets, hands and gestures. The system uses a MEMS mirror scan module to transmit low power laser pulses over programmable areas within a field of view and uses a single photodiode to measure the reflected light...
MEMS based optical coherence tomography imaging (Gayathri Pv)
A MEMS-based OCT technique can be used to image cancerous tissue at an early stage because of its high-resolution capability. The OCT principle can also be used in endoscopes to image internal organs.
Presentation made by Prof. Adriano Camps (Universitat Politècnica de Catalunya) at ICMARS 2010 (India, 16-December-2010) on the MIRAS instrument aboard ESA's SMOS mission.
A MEMS BASED OPTICAL COHERENCE TOMOGRAPHY IMAGING SYSTEM AND OPTICAL BIOPSY P... (Ping Hsu)
A fully-functional, real-time optical coherence tomography (OCT) system based on a high-speed, gimbal-less micromachined scanning mirror is presented. The designed MEMS control architecture allows the MEMS-based imaging probes to be connected to a time-domain, a Fourier-domain, or a spectral-domain OCT system. Furthermore, a variety of probes optimized for specific laboratory or clinical applications, including various minimally invasive endoscopic, handheld, or lab-bench-mounted probes, may be switched between effortlessly, and important driving parameters adjusted in real time. In addition, artifact-free imaging speeds of 33 μs per voxel have been achieved while imaging a 1.4 mm × 1.4 mm × 1.4 mm region with 5 μm × 5 μm × 5 μm sampling resolution (SD-OCT system).
Similar to Fast and High-Precision 3D Tracking and Position Measurement with MEMS Micromirrors (20)
Closed Loop Control of Gimbal-less MEMS Mirrors for Increased Bandwidth in Li... (Ping Hsu)
We presented a low-SWaP, wirelessly controlled MEMS-mirror-based LiDAR prototype which utilized an OEM laser rangefinder for distance measurement [1]. The MEMS mirror was run in open loop based on its exceptionally fast design and high repeatability. However, to further extend the bandwidth and incorporate necessary eye-safety features, we have recently focused on providing mirror position feedback and running the system under closed-loop control.
Two-Axis Scanning Mirror for Free-Space Optical Communication between UAVs (Ping Hsu)
We have developed a SOI/SOI wafer bonding process to design and fabricate two-axis scanning mirrors with excellent performance. These mirrors are used to steer laser beams in free-space optical communication between UAVs.
A fully-functional 2×2-element array of tip-tilt-piston micromirrors with large deflection angles and large piston range is presented. A control system has also been developed, including an extensive software package and a multi-channel high-voltage amplifier that allows the user to independently control all available degrees of freedom (i.e. tip, tilt, or piston) for each individual device in the array.
MEMS Mirror Based Dynamic Solid State Lighting Module (Ping Hsu)
Lighting in Homes will be programmable and designable
Commercial and home security systems will have steerable spotlights
UAVs and helicopter spotlights for search & rescue, security, etc.
UAV-Borne LiDAR with MEMS Mirror Based Scanning Capability (Ping Hsu)
We demonstrated a wirelessly controlled MEMS scan module with imaging and laser-tracking capability which can be mounted and flown on a small UAV quadcopter. The MEMS scan module was reduced to a small form factor (under 90 mm × 70 mm, weighing under 50 g) and is powered by the UAV's battery. The MEMS-mirror-based LiDAR system allows on-demand ranging of points or areas within the field of regard (FoR) without altering the UAV's position. Increasing the LRF ranging frequency and stabilizing the pointing of the laser beam using the onboard inertial sensors and camera are additional goals of the next design. Keywords: MEMS mirrors, laser tracking, laser imaging, laser range finder, UAV, drone, LiDAR.
Fast and High-Precision 3D Tracking and Position Measurement with MEMS Micromirrors
Veljko Milanović and Wing Kin Lo
Mirrorcle Technologies, Inc.
828 San Pablo Ave., Ste. 109, Albany, CA 94706
veljko@mirrorcletech.com
Abstract - We demonstrate real-time fast-motion tracking of an object in a 3D volume while obtaining its precise XYZ coordinates.
Two separate scanning MEMS micromirror sub-systems track the object in a 20 kHz closed loop. A demonstration system capable
of tracking full-speed human hand motion provides position information at up to 5 m distance with 16-bit precision: ≤20 µm
precision on the X and Y axes (up/down, left/right) and precision on the depth (Z-axis) from 10 µm to 1.5 mm, depending on distance.
INTRODUCTION
Obtaining real-time 3D coordinates of a moving object has
many applications, such as gaming [1], robotics and human-
computer interaction [2-4], and industrial applications.
Various technologies have been investigated for and used in
these applications, including sensing via wire interfaces [2],
ultrasound, and laser interferometry. However, a simple and
low-cost solution that provides sufficient precision and
flexibility has not been available. The recent proliferation of
low-cost inertial sensors has not addressed the problem of
position tracking. Cassinelli et al. demonstrated a scanning-
mirror-based tracking solution [3-4]; however, their system
does not solve the problem of object searching/selecting and
does not provide adequate depth (Z-axis) measurements.
The objective of this work was to develop and demonstrate
an optical-MEMS-based, very low-cost, and versatile platform
for tracking and position measurement in a variety of situations.
The use of MEMS mirrors [5], with the potential for wide-angle
lenses, makes tracking possible in a very large volume and at
very far distances. For example, the use of remote-control IR
source-detector modules can provide a range of 50 m or more.
MULTIPLE TRACKING OPTIONS
We have developed several beam-steering based techniques
to track an object inside a conic volume, as depicted in Fig. 1a.
A. Tracking a photo-detector or a retro-reflector
As depicted in Fig. 1b, two laser beams are scanned
by two MEMS mirrors into a common volume. Both systems
are pointed in a parallel direction but are spaced a known
distance d apart (Fig. 2a). The devices run a spiral search
pattern from the origin to the maximum angles until they
encounter a photo-detector, which synchronously relays its
readings to the control FPGA. From this point forward the
devices renew the search, but with an updated origin at the last
known position of the photo-detector. The system is therefore in
a perpetual search mode, although only in a very small
neighborhood of the photo-detector. Full motion tracking
(Fig. 2b) was achieved with fast MEMS devices giving at least
2 kHz of motion bandwidth. Since only one device can
illuminate the target at a time, we time-multiplex the
sub-systems by laser modulation.
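The search pattern described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the 16-bit signed DAC range stated later in the paper (-K..+K with K = 2^15 - 1), and the function name and parameters are hypothetical.

```python
import math

K = 2**15 - 1  # full-scale DAC code, mapped to the +/- theta_max mirror angle

def spiral_search(x0, y0, r_max, turns=20, points_per_turn=64):
    """Generate (x, y) DAC codes spiraling out from (x0, y0) to radius r_max."""
    codes = []
    n = turns * points_per_turn
    for i in range(n):
        t = i / n
        r = t * r_max                  # radius grows linearly with time
        phi = 2 * math.pi * turns * t  # phase advances turn by turn
        x = int(round(x0 + r * math.cos(phi)))
        y = int(round(y0 + r * math.sin(phi)))
        # clamp to the DAC's signed 16-bit range
        codes.append((max(-K, min(K, x)), max(-K, min(K, y))))
    return codes

# Initial search sweeps the full field of regard; after a detector hit, the
# search restarts with a small radius around the last known position.
full_scan = spiral_search(0, 0, r_max=K)
local_scan = spiral_search(12000, -3000, r_max=K // 50)
```

The key property mirrored here is that the same routine serves both modes: a full-range spiral for acquisition and a tight spiral around the last hit for the "perpetual search" in a small neighborhood.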
In our best-performing setup we use a quadrant photo-
detector, which provides additional information for tracking,
specifically the adjustments needed in X and Y to center on
the target. Here there are two clearly distinct modes: search
(spiraling) and tracking. Tracking is a proportional-control
closed loop that uses the quad-detector X and Y inputs as loop
errors. We also implemented a small beam-motion dither on the
MEMS scanners that allows us to measure the tilt orientation of
the quad-detector (Fig. 2c), giving us 4 DoF of the detector
and allowing us to use it at any rotational position.
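A proportional loop of this kind can be sketched as below. This is an illustrative reconstruction, assuming the quad-detector reports four quadrant intensities; the quadrant ordering, the gain value, and the function names are assumptions, not taken from the paper.

```python
def quad_errors(a, b, c, d):
    """X/Y centering errors from quadrant intensities (a=TL, b=TR, c=BL, d=BR)."""
    total = a + b + c + d
    if total == 0:
        return None  # target lost: caller falls back to the spiral search
    ex = ((b + d) - (a + c)) / total  # positive when the spot is right of center
    ey = ((a + b) - (c + d)) / total  # positive when the spot is above center
    return ex, ey

def track_step(ox, oy, quad, kp=500):
    """One proportional-control update of the open-loop DAC codes (ox, oy)."""
    err = quad_errors(*quad)
    if err is None:
        return None  # resume search mode at the last known (ox, oy)
    ex, ey = err
    return ox + int(kp * ex), oy + int(kp * ey)
```

Normalizing by the total intensity makes the error signal insensitive to overall brightness, so the same gain works over the system's full distance range.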
In a nearly identical setup, we placed two photo-detectors in
close proximity to the MEMS mirrors. The object searched for
in the 3D volume is a retro-reflector ("cat's eye") or a
corner-cube reflector (both were used in our experiments). In
this manner both devices can simultaneously illuminate the
target and operate independently.
B. Tracking an LED
As depicted in Fig. 1c, there is a photo-detector near each
of the MEMS scanning units. An optical source such as a
near-IR LED is the target object, and it illuminates the
micromirrors. When the mirrors are properly pointed, that
illumination is reflected onto each detector. Therefore no time-
multiplexing or communication with the target is necessary.
3D POSITION MEASUREMENT
Both devices' X and Y axes are driven by separate channels
of a 16-bit FPGA system. They reach their negative and
positive angle maxima (–θmax, +θmax) when the system sends –K to
+K to its output DAC, where K = 2^15 – 1. In most of our
experiments we calibrate our devices to provide θmax = 10°, giving
a total scan angle of 20°. When device 1 successfully tracks the
target, the FPGA system records the angle of the device's x-axis
and y-axis in terms of the open-loop output values OX1 and OY1.
The second device provides knowledge of its open-loop angles OX2
and OY2. The devices are level in y but spaced a known distance
d in x. Therefore, when both devices are tracking the object, they
see nearly identical Y readings OY1 and OY2, but due to motion
parallax the X readings differ and depend on the distance of the
object. We use the X readings to obtain the true distance of the
object from the origin (a point directly between the two
micromirrors) as:
Z = d · K / (tan(θmax) · (OX1 – OX2)).
With Z known, X and Y are found from known parameters
and by averaging the two devices' readings:
X = (OX1 + OX2) · tan(θmax) · Z / (2K) = d · (OX1 + OX2) / (2 · (OX1 – OX2)),
Y = (OY1 + OY2) · tan(θmax) · Z / (2K) = d · (OY1 + OY2) / (2 · (OX1 – OX2)).
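The triangulation above can be worked through numerically as in the sketch below. Symbols follow the paper (d, θmax, K, and the open-loop readings OX1, OY1, OX2, OY2); the function itself and the example spacing constant are illustrative, and the reconstruction assumes the depth derives from the X-axis parallax term (OX1 – OX2) as in the equations above.

```python
import math

K = 2**15 - 1                   # full-scale 16-bit DAC code
THETA_MAX = math.radians(10.0)  # theta_max = 10 degrees, as calibrated in the paper
D = 0.075                       # mirror spacing in meters (d = 75 mm, as in Fig. 2a)

def triangulate(ox1, oy1, ox2, oy2, d=D, theta_max=THETA_MAX, k=K):
    """Return (X, Y, Z) in the units of d from the two devices' open-loop codes.

    Requires ox1 != ox2: with zero parallax the depth is undefined.
    """
    t = math.tan(theta_max)
    z = d * k / (t * (ox1 - ox2))      # depth from the X-axis parallax
    x = (ox1 + ox2) * t * z / (2 * k)  # lateral position, averaged over devices
    y = (oy1 + oy2) * t * z / (2 * k)
    return x, y, z
```

For a target on the system's center axis the two X readings are symmetric (ox1 = –ox2), so X evaluates to zero, as expected from the second form of the equations.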
RESULTS
Our MEMS devices provided pointing precision at or better
than the DAC's 16-bit resolution, so our overall system results
all demonstrated this 16-bit limitation. When the target object
was not moving, no digit of X, Y, or Z changed. Movements
of 1 mm on an optical-bench micrometer were easily recorded at
5 m distance. With the loop gain and bandwidth capable of
tracking full-speed human hand motion, the system provides
position information at up to 5 m distance with ≤20 µm
precision on the X and Y axes (up, down, left, right) and
precision on the depth (Z-axis) from 10 µm to 1.5 mm, depending
on the distance. Precision can be greatly increased with slower
tracking settings and lower loop gain in different applications.
REFERENCES
[1] J. Brophy-Warren, "Magic Wand: How Hackers Make Use of Their Wii-motes," The Wall Street Journal, Apr. 28, 2007.
[2] P. Arcara, et al., "Perception of Depth Information by Means of a Wire-Actuated Haptic Interface," Proc. of 2000 IEEE Int. Conf. on Robotics and Automation, Apr. 2000.
[3] A. Cassinelli, et al., "Smart Laser-Scanner for 3D Human-Machine Interface," Int. Conf. on Human Factors in Computing Systems, Portland, OR, Apr. 2-7, 2005, pp. 1138-1139.
[4] S. Perrin, et al., "Laser-Based Finger Tracking System Suitable for MOEMS Integration," Image and Vision Computing, New Zealand, Nov. 26-28, 2003, pp. 131-136.
[5] V. Milanović, et al., "Gimbal-less Monolithic Silicon Actuators for Tip-Tilt-Piston Micromirror Applications," IEEE J. of Selected Topics in Quantum Electronics, vol. 10(3), Jun. 2004.
Figure 1. (a) Schematic diagram of 3D tracking of a hand-held object in a 3D volume. (b) Schematic of a 3D Tracking setup with
two beam-steering MEMS mirrors aiming their laser sources onto the target. (c) Schematic diagram of 3D tracking and
measurement setup with two MEMS devices steering incident light from a (near-IR) source onto their respective photo-detector.
Figure 2. (a) Photograph of the two MEMS scanners and amplifiers. The devices are d = 75 mm apart and aimed in the same
direction. Each amplifier in the background is driven by the FPGA closed-loop controller. (b) A 2 s long-exposure photograph of
quad-detector tracking. Both laser spots are on the detector, and both devices successfully track the target. (c) GUI screen capture
showing the measured 4 DoF of the detector: position X [mm], position Y [mm], position Z [mm], and tilt of the quad-detector [deg].
Figure 3. Gimbal-less dual-axis 4-quadrant devices used in this work: (a) a typical device, which reaches mechanical tilt from –8° to
+8° on both axes. The device has a 2 mm mirror; this larger aperture is more suitable for the setup of Fig. 1c. (b) Voltage vs.
mechanical tilt angle measurements of a typical 4-quadrant device, linearized by our 4-channel amplifier driving scheme. (c) Small-
signal characteristics of fast devices with a 0.8 mm mirror, used in the setup of Fig. 1b, where a larger aperture size is not required.