This document discusses two innovative hardware prototypes for spacecraft attitude determination, the Astrometric Alignment Sensor and the SSNano, and investigates a common core of algorithms usable with both sensors: the Pyramid star identification algorithm and the Singular Value Decomposition (SVD) method. The SVD method was implemented and found to offer simpler calculations, lower computation time, and more accurate results than alternatives such as TRIAD, enabling efficient attitude determination for applications like formation flying and more accurate spacecraft pointing. Future work includes maturing these sensors and algorithms to provide continuous, high-accuracy attitude information for new spacecraft missions.
NASA_OSSI_NandaS_ShockleyL_Final
1. MISSION ENGINEERING AND SYSTEM ANALYSIS
Code 596 GN&C Components and Hardware Systems Branch
Presenters: Siddhant Nanda (Cornell University), Liberty Shockley (University of Cincinnati)
Mentors: Alvin Yew, Ph.D. (Mechanical Engineering), Sean Semper, Ph.D. (Aerospace Engineering)
ABSTRACT
Star trackers are used in spacecraft missions because they provide higher-accuracy attitude measurements than most other sensors. However, exorbitant costs and their closed, proprietary nature have drastically limited their potential for more exotic uses and configurations, especially as components in a spacecraft's attitude control system (ACS). This project investigates core algorithms applicable to two innovative hardware prototypes currently under development.

(1) Astrometric Alignment Sensor: a unique stellar sensor that captures sky and spacecraft imagery and quickly processes the images to produce attitude vectors on orbit.

(2) SSNano: a compact, novel star scanner for a "lost in space," tumbling spacecraft that uses brightness transit signatures, rather than traditional images, to calculate attitude.

The core algorithms selected were Pyramid star identification and Singular Value Decomposition (SVD), used in conjunction for attitude determination. Several attitude determination methods were explored, including the well-known TRIAD algorithm, Markley's fast quaternion method, and the SVD method. Ultimately, the SVD method was chosen and implemented because of its drastically simpler implementation, low computation time, and the accuracy of its results.
How does this work together?
SSNANO AND THE PYRAMID ALGORITHM
The SSNano is a compact star scanner for spacecraft and instruments that need sub-arcminute attitude information. It replaces the traditional star tracker with a sensor that uses star transit detection to provide accurate attitude information.

Because the SSNano is still in a development phase, the gathering of sensory information was simulated. Using a catalog of the 200 brightest stars, a lost-in-space scenario was randomly generated and visualized by placing the coordinates of these stars on a virtual sphere, as shown in Figure 1. To generate the transit signature for a particular field, we perform a sweep across the 0-degree latitude line and record the corresponding brightness. The key distinction of this methodology from a normal star tracker is that these measurements are a function of time rather than of space. We detect the centroid of each spike in the signature using temporal rather than spatial measurements; the resulting signature is shown in Figure 4. The brightness corresponding to each centroid is then passed to our implementation of the Pyramid algorithm, the heart of star pattern recognition.
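The simulated sweep and temporal centroiding described above can be sketched as follows. This is a minimal Python stand-in for the project's simulation, not its actual code: the random catalog, slit half-width, scan rate, and Gaussian pulse shape are illustrative assumptions rather than SSNano parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the 200-brightest-star catalog: random unit
# directions on the sphere with random brightnesses.
n_stars = 200
ra = rng.uniform(0.0, 2 * np.pi, n_stars)        # right ascension (rad)
dec = np.arcsin(rng.uniform(-1.0, 1.0, n_stars)) # declination, uniform on sphere
brightness = rng.uniform(0.2, 1.0, n_stars)

slit_halfwidth = np.radians(1.0)  # assumed detector slit half-width
scan_rate = np.radians(6.0)       # assumed scan rate, rad/s

# Sweep along the 0-degree latitude line: each star inside the slit band
# contributes a brightness pulse at the time its RA is crossed.
t = np.linspace(0.0, 2 * np.pi / scan_rate, 20000)
signal = np.zeros_like(t)
in_band = np.abs(dec) < slit_halfwidth
for a, b in zip(ra[in_band], brightness[in_band]):
    signal += b * np.exp(-0.5 * ((t - a / scan_rate) / 0.02) ** 2)

def pulse_centroids(t, signal, thresh):
    """Intensity-weighted centroid *time* of each above-threshold pulse —
    a temporal centroid, in contrast to a star tracker's spatial one."""
    above = signal > thresh
    centroids, i, n = [], 0, len(t)
    while i < n:
        if above[i]:
            j = i
            while j < n and above[j]:
                j += 1
            w = signal[i:j]
            centroids.append(float(np.sum(t[i:j] * w) / np.sum(w)))
            i = j
        else:
            i += 1
    return centroids

transits = pulse_centroids(t, signal, 0.1)
```

Each entry of `transits` is the time at which a star crossed the scan line; with a known scan rate these times map back to angular positions for identification.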
The Pyramid algorithm is a highly robust method for identifying the stars observed by traditional star trackers in the lost-in-space scenario [5]. Its k-vector approach provides an efficient, search-less way to retrieve the cataloged star pairs that could correspond to a measured pair, given the angle between two stars and a precision [5]. Pyramid builds on the identification of a four-star structure and uses a smart triangle-scanning technique that avoids unnecessary computation while identifying and discarding "false stars." The algorithm as implemented is outlined in Figure 5. By taking the cross product of the vectors to two identified stars, the vector orthogonal to the spacecraft's plane can be determined, as shown in Figure 6.
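The pair-angle lookup at the heart of this step can be illustrated with a simplified sketch. Note the substitution: the true k-vector technique replaces the binary searches below with an O(1) linear-index lookup; this version returns the same candidate set but is only an illustrative stand-in, and the four-star catalog in the test is invented for the example. The cross-product step of Figure 6 appears as `plane_normal`.

```python
import itertools
from bisect import bisect_left, bisect_right
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def build_pair_table(catalog):
    """Precompute the inter-star angle of every catalog pair, sorted by angle.

    catalog: (n, 3) array of unit vectors. Returns (angles, pairs) with
    angles ascending and pairs[i] the index pair producing angles[i].
    """
    pairs = list(itertools.combinations(range(len(catalog)), 2))
    angles = np.array([np.arccos(np.clip(catalog[i] @ catalog[j], -1.0, 1.0))
                       for i, j in pairs])
    order = np.argsort(angles)
    return angles[order], [pairs[k] for k in order]

def candidate_pairs(angles, pairs, measured, precision):
    """All catalog pairs whose angle lies within +/- precision of `measured`.

    The real k-vector replaces these two binary searches with a direct
    index computation; the resulting candidate set is identical.
    """
    lo = bisect_left(angles, measured - precision)
    hi = bisect_right(angles, measured + precision)
    return pairs[lo:hi]

def plane_normal(v1, v2):
    """Unit vector orthogonal to two identified star vectors (cf. Figure 6)."""
    return unit(np.cross(v1, v2))
```

Once two observed stars are matched to catalog entries, `plane_normal` applied to the observed vectors gives the body-frame normal used in the attitude solution.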
Advanced Star Tracker Development for Next Generation Attitude Determination
CONCLUSIONS and FUTURE RESEARCH
By implementing star identification, attitude determination, and control simulations, we lay the foundation for developing advanced architectures that work seamlessly with the in-house advanced star-tracking hardware and software needed for future innovative science missions.

Now that there is a functional routine for using the Astrometric Alignment Sensor for attitude determination, it can be incorporated into formation flying missions and developed further for even more accurate results. Further investigation can make the sensor faster and more accurate than it is now, with the goal of a continuous answer for the spacecraft's attitude.

With its compact, credit-card-sized design, the SSNano has the potential to be incredibly convenient for an assortment of smallsat missions. The SSNano star identification and the simulations developed here helped to further the development of this prototype; optimizing the attitude solution will complete the star scanning process.

Another venture under development is a VR simulation environment for a star scanner to perform hardware-in-the-loop simulation. Star sensors will be placed behind Oculus Rift optics to read star field patterns while we induce attitude perturbations on a low-friction hemispherical air-bearing system. A closed-loop control system will restore stability based on sensor feedback and reaction wheel spin-up.
SINGULAR VALUE DECOMPOSITION (SVD) ALGORITHM
o Learn the linear algebra and geometry behind computations involving space vehicles
o Familiarize yourself with Wahba’s problem and ways to solve it
o Code the TRIAD algorithm, Markley's Fast Quaternion Attitude Determination method, and the SVD method in MATLAB to determine the best method
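Of the methods listed above, TRIAD admits the shortest implementation. A minimal sketch is shown below in Python rather than the MATLAB used in the project; it builds an orthonormal triad from each pair of observations and trusts the first measurement exactly.

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def triad(b1, b2, r1, r2):
    """TRIAD attitude estimate: the rotation A with b ~= A r.

    b1, b2: two observed unit directions in the spacecraft body frame.
    r1, r2: the same directions in the reference (inertial) frame.
    The (b1, r1) pair is trusted exactly; b2 only fixes the rotation
    about b1, which is TRIAD's well-known asymmetry.
    """
    # Orthonormal triad from the body-frame observations
    tb1 = unit(b1)
    tb2 = unit(np.cross(b1, b2))
    tb3 = np.cross(tb1, tb2)
    # Matching triad from the reference-frame directions
    tr1 = unit(r1)
    tr2 = unit(np.cross(r1, r2))
    tr3 = np.cross(tr1, tr2)
    # A maps the reference triad onto the body triad
    return np.column_stack([tb1, tb2, tb3]) @ np.column_stack([tr1, tr2, tr3]).T
```

With noise-free measurements TRIAD recovers the true rotation exactly; with noise, its accuracy depends on which vector is chosen as the trusted one.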
While the original attitude determination routine took 1 hour and 20 minutes to run through all of the flight data and produce results, the SVD algorithm takes 11 minutes, reducing runtime by roughly 86%. This is due to the simple calculations of the SVD algorithm [3], as opposed to the long original routine, which used a weighted initial guess and an optimization of Wahba's loss function [4]. The new routine also searches every 5th row and column for bright spots (instead of every one) to analyze each image faster without sacrificing the number of bright spots found. Beyond efficiency, the SVD algorithm is also more accurate with respect to the truth values of the stars' positions: across 19,101 data points representing images of the night sky taken by an ICESat-2 Laser Reference System (LRS) every 0.1 seconds, it shows less error in the spacecraft's computed pointing right ascension and declination. This is because, unlike the optimization routine, it does not start from an initial weighted guess of where the spacecraft is looking.
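The calculations the paragraph above calls "simple" really are short. The sketch below is a standard implementation of Markley's SVD solution to Wahba's problem in Python (the project's version was written in MATLAB; the function name and the default unit weights are illustrative choices):

```python
import numpy as np

def svd_attitude(body_vecs, ref_vecs, weights=None):
    """Markley's SVD solution of Wahba's problem.

    Finds the rotation matrix A minimizing sum_i w_i * ||b_i - A r_i||^2,
    with no initial guess required. body_vecs, ref_vecs: (n, 3) arrays of
    unit vectors observed in the body frame and known in the reference frame.
    """
    b = np.asarray(body_vecs, float)
    r = np.asarray(ref_vecs, float)
    w = np.ones(len(b)) if weights is None else np.asarray(weights, float)
    # Attitude profile matrix B = sum_i w_i * b_i r_i^T
    B = (w[:, None] * b).T @ r
    U, _, Vt = np.linalg.svd(B)
    # Force det(A) = +1 so the result is a proper rotation, not a reflection
    d = np.linalg.det(U) * np.linalg.det(Vt)
    return U @ np.diag([1.0, 1.0, d]) @ Vt
```

Because the optimum is obtained in closed form from one 3x3 SVD, there is no iteration and no weighted initial guess, which is exactly why it runs so much faster than the original optimization routine.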
This plot shows how the script finds the stars (bright spots) in an image from the ICESat-2 LRS. It looks for a certain pixel value (for white) and then puts a box around the spot. From there, each star can be centroided and its location determined.
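The bright-spot search described in this caption can be sketched as follows. This is illustrative Python, not the project's MATLAB script: the flood-fill grouping and the `step` parameter are assumptions standing in for the script's pixel-value test and boxing logic.

```python
import numpy as np

def find_bright_spots(img, thresh, step=1):
    """Threshold an image, group adjacent bright pixels, and centroid each group.

    `step` mimics the routine's coarser search (e.g. step=5 visits every 5th
    row and column); a group found from a sampled pixel is still refined at
    full resolution. Returns a list of (row, col) intensity-weighted centroids.
    """
    mask = img > thresh
    seen = np.zeros_like(mask)
    centroids = []
    rows, cols = img.shape
    for r0 in range(0, rows, step):
        for c0 in range(0, cols, step):
            if mask[r0, c0] and not seen[r0, c0]:
                # Flood-fill the connected bright region (4-connectivity)
                stack, region = [(r0, c0)], []
                seen[r0, c0] = True
                while stack:
                    r, c = stack.pop()
                    region.append((r, c))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < rows and 0 <= cc < cols \
                                and mask[rr, cc] and not seen[rr, cc]:
                            seen[rr, cc] = True
                            stack.append((rr, cc))
                # Intensity-weighted centroid of the region
                w = np.array([img[r, c] for r, c in region])
                rc = np.array(region, float)
                centroids.append(tuple((w @ rc) / w.sum()))
    return centroids
```

With `step > 1` a spot smaller than the stride can be missed entirely, which is the trade-off the routine accepts for speed; the poster reports no loss in the number of bright spots found at a stride of 5.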
This is the output of the code with the SVD algorithm. It shows the movement of the stars tracked across the FOV, which is quite accurate to the real movement.
The SVD method of attitude determination can also be used in formation flying missions, as pictured below: the routine checks that spacecraft 2 is in the FOV of spacecraft 1, then scans the surrounding stars. (Panels: Spacecraft 1's FOV; side view of the formation of both spacecraft and Spacecraft 1's FOV.)
[Flow diagram; box labels: Detect Star Transit; Signal Processing; Take a Picture; Image Processing; Bright Object Search; Star Search; Star Catalog; Star ID: Pyramid Algorithm; Attitude Solution: SVD Algorithm; Attitude Solution: TRIAD Algorithm*; branch labels: Astrometric Alignment Sensor, SSNano]

This graphic demonstrates how the two sensors start and end with different information but employ a common core routine. This core, shown with red boxes, is what both of our projects worked to optimize.
[Figure labels: Image Analysis; Star Movement; ω vector; Star 1; Star 2]
Fig. 1 (adapted from Mackison et al., 1973): Notional Operation of a Star Scanner
Fig. 2 (adapted from Mackison et al., 1973): Star Pulses Recorded from Instrument
Fig. 3: Simulated Star Scanner Swaths
Fig. 4: Star Pulses from Simulated Swath
Fig. 5 (adapted from M. A. Samaan, 2003): Star Identification with the Pyramid Approach
Fig. 6: Attitude Determination Using the Cross Product