PositionIt Android App
1. INDEX
1. Why mobile media?
2. Special characteristics and requirements
3. Demands for remote target positioning
4. PositionIt
5. Major components of PositionIt:
   i. Single-image-based remote target localization
   ii. Two-image-based remote target localization
   iii. Video-based remote-moving-target tracking
6. Performance
7. Challenges: mobile media
8. Conclusion
9. References
2. A remote target-localization and tracking system
that uses networked smartphones to localize a
remote target of interest.
The system can switch between standalone and
networked modes.
To showcase the power of mobile media, we
present PositionIt, a mobile media-based remote
target-localization and tracking system.
3. Sensors embedded in smartphones have greatly
enhanced mobile technology.
GPS -> measures the phone's position
Digital compass -> measures the direction to the remote target
Digital camera -> used for distance estimation
High-resolution cameras have expanded the
opportunities for mobile multimedia research.
4. Lightweight computing: optimized use of
battery power.
Performance tradeoffs: it is critical to strike a good
balance between accuracy and complexity.
Interactive user interfaces: smartphones provide
an interactive, user-friendly interface that can
be leveraged to improve performance.
5. Military and commercial applications
Examples:
1. On the battlefield, soldiers can locate a colleague.
2. A golfer can estimate the distance to the hole.
3. A hunter can find the distance to a target.
4. A tourist might wish to attach a remote object's
GPS location to a picture (for use with Google Maps
or Google Earth, for example) so as to share
the location information with others.
6. We recently developed an image- and video-
based remote target localization and
tracking system on Android smartphones.
PositionIt uses networked smartphones to
localize a remote target by fusing data
sensed by the phones' embedded sensors.
An important PositionIt feature: all computing is
done on the phone, so the system runs offline.
7. The PositionIt system collects compass readings
and GPS coordinates to track a remote object.
The user then captures an image or video of the
remote object to compute its distance and position.
The system's three main functional components:
o Single-image-based localization
o Two-image-based localization
o Video-based remote-moving-target tracking
POSITIONIT (cont…)
10. Locates remote objects with regular shapes.
The user is expected to know the object's physical
height and width.
This is a camera-pose-estimation process.
The user's input of a tight bounding box around the
target is an important step for accurate distance estimation.
For an object moving on a level surface, the physical
height alone is sufficient for object-position
estimation.
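The slides do not spell out the estimation formula. Under a simple pinhole-camera model (an illustrative assumption, not necessarily the paper's exact camera-pose-estimation method), the distance follows directly from the known physical height and the height of the tight bounding box in pixels:

```python
def estimate_distance(focal_length_px: float,
                      physical_height_m: float,
                      bbox_height_px: float) -> float:
    """Pinhole-model distance estimate: distance = f * H / h.

    focal_length_px   -- camera focal length expressed in pixels (assumed known
                         from the phone's camera calibration)
    physical_height_m -- known real-world height of the target, in metres
    bbox_height_px    -- height of the user-drawn tight bounding box, in pixels
    """
    return focal_length_px * physical_height_m / bbox_height_px

# A 1.8 m tall target spanning 90 px, with a 1500 px focal length,
# is roughly 1500 * 1.8 / 90 = 30 m away.
print(estimate_distance(1500, 1.8, 90))
```

This also shows why the tight bounding box matters: an error of a few pixels in `bbox_height_px` translates directly into a proportional distance error.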
12. PositionIt can estimate the remote target's
position given two images of it.
The images can be taken with two different phones or
with one phone from two different positions.
If only the remote target's distance is of interest,
it is sufficient to estimate the distance
between the two phones.
13. After collecting the two images, the user
identifies the remote target by drawing a
rectangular box enclosing the target in both
photos.
The application then performs image
feature extraction and matching, camera relative-
position estimation, and remote target-position
estimation based on two-view triangulation.
If the two images are taken on different phones by
two different users, PositionIt uses the Android
phones' Bluetooth or WiFi network capabilities to
send images from one device to the other.
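The paper's pipeline triangulates from matched image features; as a simplified stand-in for that geometry, the sketch below intersects the two sighting rays defined by each phone's GPS position and compass bearing in a local east/north plane. This bearing-only formulation is an assumption for illustration, not the paper's feature-based algorithm:

```python
import math

def triangulate(p1, bearing1_deg, p2, bearing2_deg):
    """Intersect two bearing rays in a local east/north plane (metres).

    p1, p2       -- (east, north) positions of the two camera locations
    bearing*_deg -- compass bearings to the target, degrees clockwise from north
    Returns the (east, north) intersection, or None if the rays are parallel.
    """
    # Direction vectors: compass bearing 0 deg = north, 90 deg = east.
    d1 = (math.sin(math.radians(bearing1_deg)), math.cos(math.radians(bearing1_deg)))
    d2 = (math.sin(math.radians(bearing2_deg)), math.cos(math.radians(bearing2_deg)))
    # Solve p1 + t1*d1 = p2 + t2*d2 for t1 using 2x2 cross products.
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        return None  # parallel rays: no unique intersection
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (dx * d2[1] - dy * d2[0]) / denom
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Two phones 100 m apart on an east-west baseline, sighting the same
# target to the north-east and north-west respectively:
print(triangulate((0, 0), 45, (100, 0), 315))
```

The sketch also illustrates the limitations noted later in the deck: a poor baseline estimate (the distance between `p1` and `p2`) or near-parallel rays both degrade the result.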
TWO-IMAGE-BASED (cont…)
15. The moving-target tracking function is available if the
remote target's physical size is known.
The user takes a short video clip of the moving object and
then draws a rough rectangle around it (with a touch-
and-hold gesture).
The system then derives a tight bounding box around
the target object in all video frames.
With the help of the user-drawn rectangle, the
system achieves better accuracy and higher
efficiency when locating the tight bounding box.
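The slides do not specify how the bounding box is propagated between frames. A minimal sketch of one common approach, template matching by sum of squared differences (SSD) over a small search window, shows how the user's rough rectangle seeds the search; this is a hypothetical stand-in, not PositionIt's actual tracker:

```python
def track_box(prev_frame, next_frame, box, search=5):
    """Propagate a bounding box from one grayscale frame to the next.

    prev_frame, next_frame -- 2D lists of pixel intensities
    box                    -- (top, left, height, width) in prev_frame
    Searches a +/-`search` pixel window in next_frame and returns the
    box position minimizing the sum of squared differences (SSD).
    """
    top, left, h, w = box
    rows, cols = len(next_frame), len(next_frame[0])
    template = [row[left:left + w] for row in prev_frame[top:top + h]]
    best, best_pos = None, (top, left)
    for dt in range(-search, search + 1):
        for dl in range(-search, search + 1):
            t, l = top + dt, left + dl
            if t < 0 or l < 0 or t + h > rows or l + w > cols:
                continue  # candidate box falls outside the frame
            ssd = sum((next_frame[t + i][l + j] - template[i][j]) ** 2
                      for i in range(h) for j in range(w))
            if best is None or ssd < best:
                best, best_pos = ssd, (t, l)
    return (*best_pos, h, w)
```

The small search window is what makes the user's rough rectangle valuable: it bounds the per-frame search, which improves both speed and robustness, matching the efficiency claim above.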
16. The system then projects the moving object’s
position on the image plane to the real-world
coordinates.
While the video is being taken, the smartphone's
GPS coordinates and digital compass readings are
recorded so that the moving object’s trajectory can
be drawn on the map.
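Drawing the trajectory on a map requires turning each estimated distance and compass bearing into GPS coordinates. A minimal sketch using a flat-earth approximation (an assumption that is adequate at the tens-to-hundreds-of-metres ranges involved, though not stated in the slides):

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius, metres

def project_target(lat_deg, lon_deg, bearing_deg, distance_m):
    """Project a target's GPS coordinates from the phone's fix.

    lat_deg, lon_deg -- phone's GPS position, degrees
    bearing_deg      -- compass bearing to the target, degrees from north
    distance_m       -- estimated distance to the target, metres
    Uses a small-distance flat-earth approximation.
    """
    north = distance_m * math.cos(math.radians(bearing_deg))
    east = distance_m * math.sin(math.radians(bearing_deg))
    dlat = math.degrees(north / EARTH_RADIUS_M)
    dlon = math.degrees(east / (EARTH_RADIUS_M * math.cos(math.radians(lat_deg))))
    return lat_deg + dlat, lon_deg + dlon
```

Applying this per video frame yields the sequence of map points that forms the moving object's trajectory.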
VIDEO-BASED (cont…)
19. Average accuracy:
Single-image-based = ~90%
Two-image-based = ~83%
Limitations of the two-image-based approach:
◦ 1. The distance between the two cameras is an
estimated value based on the single-image-
based system, so errors propagate into
the two-image-based estimation.
◦ 2. If the object is far away and its image is small,
the feature detection and matching accuracy
becomes a constraint.
20. The video-based tight-bounding-box extraction is
generally robust, so the video-based moving-target
tracking accuracy is close to that of the
single-image-based system.
PERFORMANCE(cont…)
21. The accuracy of some embedded sensors, such as the
GPS and digital compass, is still limited.
Speed-wise:
◦ Single-image-based localization generates the
result instantly after the tight bounding box is provided.
◦ Two-image-based localization generates the results
within approximately 25 seconds after some initial
algorithm optimization.
◦ Video-based moving-target tracking runs at
approximately 5 to 7 frames per second (fps) after some
initial algorithm optimization.
22. Multimedia computing is computationally
intensive, but mobile devices are limited in
battery and computing power.
Multimedia computing can benefit from mobile
devices' all-in-one embedded sensors.
Mobile devices' user-friendly interfaces can aid
multimedia processing by lowering complexity
and improving estimation accuracy.
Mobile multimedia systems will contribute greatly
to both commercial and military applications.
23. REFERENCES
1. H. Hile et al., ‘‘Landmark-Based Pedestrian Navigation with Enhanced
Spatial Reasoning,’’ Proc. 7th Int’l Conf. Pervasive Computing,
Springer, 2009, pp. 59-76.
2. F.X. Yu, R. Ji, and S.-F. Chang, ‘‘Active Query Sensing for Mobile
Location Search,’’ Proc. ACM Int’l Conf. Multimedia (ACM MM), ACM
Press, 2011, pp. 3-12.
3. J. Roters, X. Jiang, and K. Rothaus, ‘‘Recognition of Traffic Lights in
Live Video Streams on Mobile Devices,’’ IEEE Trans. Circuits and
Systems for Video Technology, vol. 21, no. 10, 2011, pp. 1497-1511.
4. Q. Wang et al., ‘‘PositionIt: An Image-Based Remote Target
Localization System on Smartphones,’’ Proc. ACM Int’l Multimedia
Conf., ACM Press, 2011, pp. 821-822.
5. Q. Wang et al., ‘‘Video Based Real-World Remote Target Tracking on
Smartphones,’’ to be published in Proc. IEEE Int’l Conf. Multimedia
and Expo, IEEE CS Press, 2012.