ABSTRACT
As augmented reality techniques become more advanced, they are increasingly
used to create engaging digital teaching materials and more efficient e-Learning
environments. To date, most relevant applications have been limited to
marker-based augmented reality techniques. Markerless augmented reality
techniques, however, are more flexible because they do not depend on markers,
and thus support a broader range of applications. Visual tracking is a critical
core technique of augmented reality, and its performance is often affected by
four factors: environmental lighting, viewing angle, image resolution, and image
texture. In a “markerless augmented reality e-Learning system”, moreover, the
complexity and moving speed of the tracked object also strongly affect tracking
recognition. Only with high-quality object identification and tracking can
learners recognize objects and move them as they wish while operating the
system, making the system fit for its purpose.
The purpose of this thesis is to provide recommendations on using augmented
reality techniques to create digital learning materials and e-Learning
environments, and to analyze the factors that limit the tracking techniques
currently employed in “markerless augmented reality e-Learning systems”. We
also recommend applying object tracking techniques for real-time tracking, in
which object speed has a reduced effect on tracking ability. The methods
suggested in this study can improve the practicality and adoption of markerless
augmented reality systems on a technical basis. On the application side, they
also help clarify the relevant issues in creating augmented reality digital
learning materials and e-Learning environments. For example, when applied to
military equipment maintenance in the army of Taiwan, such digital learning
schemes can effectively reduce personnel training time and costs, helping to
improve maintenance quality and hence the performance of military equipment.
When applied to the training of military tactics, the contents can be made
dynamic to increase learners’ motivation and interest.
Keywords: Marker-based Augmented Reality, Markerless Augmented Reality,
e-Learning, Virtual Environment, Feature Tracking
\[
q \cong K
\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} R & -Rt \\ 0^{T} & 1 \end{bmatrix}
= K\left[\, R \mid -Rt \,\right] = M
\tag{2.20}
\]
In Eq. (2.20), \(K\) is the camera calibration matrix defined in Eq. (2.21),
where \(c_x\) and \(c_y\) denote the offsets along the x and y directions. In a
real charge-coupled device (CCD) camera, the physical scales along the X and Y
directions are not necessarily equal, and image pixels are not necessarily in a
1:1 ratio, so a scale parameter must be introduced for each of the X and Y
directions; \(f_x\) and \(f_y\) denote these scale parameters. In addition, the
pixels of a CCD camera are not necessarily physically rectangular, so \(s\)
denotes the skew parameter between the X and Y axes.
\[
K =
\begin{bmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}
\tag{2.21}
\]
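As a minimal sketch of how the calibration matrix of Eq. (2.21) acts on image
coordinates, the following Python/NumPy fragment builds \(K\) and maps a point
from normalized camera coordinates to pixel coordinates; all numeric parameter
values here are hypothetical, chosen only for illustration.

    import numpy as np

    # Hypothetical intrinsic parameters, for illustration only.
    fx, fy = 800.0, 780.0   # scale parameters along X and Y (in pixels)
    cx, cy = 320.0, 240.0   # principal-point offsets in x and y
    s = 0.5                 # skew parameter between the X and Y axes

    # Camera calibration matrix K of Eq. (2.21).
    K = np.array([[fx,  s, cx],
                  [0., fy, cy],
                  [0., 0., 1.]])

    # A point in normalized camera coordinates (X/Z, Y/Z, 1).
    p = np.array([0.1, -0.05, 1.0])

    # Because fx != fy, the two axes are scaled independently,
    # and a nonzero s shears x by the y coordinate.
    u, v, w = K @ p
    print(u / w, v / w)     # -> approximately 399.975 201.0

With \(s = 0\) and \(f_x = f_y\) the mapping reduces to a uniform scaling plus
the principal-point offset, which is the 1:1 pixel case described above.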
In Eq. (2.20), the matrix \(\begin{bmatrix} R & -Rt \\ 0^{T} & 1 \end{bmatrix}\)
represents the transformation into the camera coordinate system (\(Q_c\));
expanding it yields Eq. (2.22), in which \((X_m, Y_m, Z_m)\) is a
three-dimensional point in the world coordinate system, \(t\) is the translation
giving the camera's position in the world coordinate system, and \(R\) is the
rotation matrix giving the camera's orientation.
\[
\begin{bmatrix}
R_1 & R_2 & R_3 & t_x \\
R_4 & R_5 & R_6 & t_y \\
R_7 & R_8 & R_9 & t_z \\
0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix} X_m \\ Y_m \\ Z_m \\ 1 \end{bmatrix}
\tag{2.22}
\]
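Combining Eqs. (2.20)–(2.22), a world point \((X_m, Y_m, Z_m)\) is projected
onto the image plane through \(M = K[R \mid -Rt]\). The sketch below assumes
hypothetical values for \(R\), \(t\), and the world point, purely to exercise
the formula.

    import numpy as np

    def project(K, R, t, Q_world):
        # Eq. (2.20): q ~ K [R | -Rt] Q, where t is the camera position
        # in world coordinates and R is the camera orientation.
        Rt = np.hstack([R, (-R @ t).reshape(3, 1)])  # 3x4 extrinsic part [R | -Rt]
        M = K @ Rt                                   # 3x4 projection matrix M
        q = M @ np.append(Q_world, 1.0)              # homogeneous image point
        return q[:2] / q[2]                          # perspective division

    # Hypothetical example: rotation of 30 degrees about the Y axis.
    theta = np.radians(30.0)
    R = np.array([[ np.cos(theta), 0., np.sin(theta)],
                  [ 0.,            1., 0.           ],
                  [-np.sin(theta), 0., np.cos(theta)]])
    t = np.array([0.2, 0.0, -1.0])           # camera position in the world frame
    K = np.array([[800.,   0., 320.],        # calibration matrix of Eq. (2.21)
                  [  0., 780., 240.],
                  [  0.,   0.,   1.]])
    Q = np.array([0.0, 0.1, 2.0])            # world point (Xm, Ym, Zm)

    print(project(K, R, t, Q))               # pixel coordinates (u, v)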