
Nick C. Tang | Curriculum Vitae
✉ nickctang@gmail.com  ☏ +886-928-609387
LinkedIn: https://goo.gl/SHIJ5o , Academia Sinica: http://goo.gl/t45ZzE

EXPERIENCE
2015/04 ~ Present  Visiting Scientist
  · RIKEN Center for Advanced Photonics, RIKEN, Japan
  · Advisor: Dr. Hideo Yokota
  · Focus area: 4D cell data processing and expert knowledge transfer
  · Job description: Algorithm development
2009/03 ~ Present  Postdoctoral Fellow
  · Institute of Information Science, Academia Sinica, Taiwan
  · Advisor: Dr. Hong-Yuan Mark Liao
  · Focus area: Image and video processing, computer vision and machine learning
  · Job description:
    · Team Leader, Video Processing and Forensics Team
    · Conduct research, publish papers and propose new ideas
    · Algorithm development and technology transfer
    · Propose and execute Ministry of Science and Technology projects

EDUCATION
2005/09 ~ 2008/06  Ph.D.
  · Computer Science and Information Engineering, Tamkang University, Taiwan
  · Dissertation: Video Reprocessing via Motion Interpolation and Video Inpainting
  · Rank: 2nd / 10, GPA: 4.0
2003/09 ~ 2005/06  M.S.
  · Computer Science and Information Engineering, Tamkang University, Taiwan
  · Dissertation: Multiple Object Tracking Techniques in Sports
  · Rank: 19th / 36, GPA: 3.9
1999/09 ~ 2003/06  B.S.
  · Computer Science and Information Engineering, Tamkang University, Taiwan
  · Jin-Shen Scholarship: 2 times (in 1999, 2000)
  · Rank: 5th / 81, GPA: 3.6

PROFESSIONAL SKILLS
Research Area
  · Computer vision, image processing, video processing, pattern recognition, machine learning
Programming Languages
  · C/C++, Matlab, Python, PHP, JavaScript

QUALIFICATIONS
TOEIC
  · Reading & Listening: 775 / 990
  · Writing: 140 / 200
  · Speaking: 130 / 200
TECHNOLOGY TRANSFER
Image Inpainting, NT$1,000,000, 10/2010
  · National Archives and Records Administration - Aged Photo Restoration
Video Inpainting, NT$1,000,000, 09/2011
  · National Archives and Records Administration - Aged Film Restoration

PROFESSIONAL ACTIVITIES
Editorship
  · Guest Editor, Journal of Computers: Special Issue on Advances in Multimedia (2010)
Referee for Journals
  · Reviewer, IEEE Transactions on Image Processing (2009-present)
  · Reviewer, Pattern Recognition Letters (2011-present)
  · Reviewer, Journal of Visual Communication and Image Representation (2011-present)
  · Reviewer, IEEE Signal Processing Magazine (2012-present)
  · Reviewer, IEEE Transactions on Multimedia (2012-present)
  · Reviewer, IEEE Transactions on Circuits and Systems for Video Technology (2014-present)
Invited Visits
  · Visiting Scientist, RIKEN Center for Advanced Photonics, RIKEN, Japan (2015/01)
  · Visiting Scholar, School of Computer Engineering, Nanyang Technological University, Singapore (2010/08)

JOINT PROJECTS
Research on Video Forensics (100-2221-E-001-013-MY3), 2011/08/01 ~ 2014/07/31, $3,274,000
Digital Technology R&D and Integration Program - Core Technology and Tool Development Project (100-2631-H-001-013-), 2011/01/01 ~ 2012/03/31, $9,795,000
Digital Technology R&D and Integration Program - Digital Technology R&D and Integration Project (100-2631-H-001-020-), 2010/01/01 ~ 2011/03/31, $4,650,000
Digital Technology R&D and Integration Program - Core Technology and Tool Development Project (99-2631-H-001-020-), 2010/01/01 ~ 2011/03/31, $8,015,000
Digital Technology R&D and Integration Program - Digital Technology R&D and Integration Project (99-2631-H-001-018-), 2010/01/01 ~ 2011/03/31, $4,321,000
Video Annotation and Retrieval by Combining Heterogeneous Features (97-2217-E-001-001-MY3), 2008/08/01 ~ 2011/12/31, $3,165,000
National Digital Archives Program - Revival and Re-creation of Classic Films (97-2631-H-152-001-), 2008/08/01 ~ 2009/07/31, $398,000
Real-time Tracking of 2D Object Trajectories in Video for Converting to 3D Object Information (94-2213-E-032-018-), 2005/08/01 ~ 2006/07/31, $475,000
Research on Image Defect Detection and Inpainting Techniques (94-2213-E-032-017-), 2005/08/01 ~ 2006/07/31, $398,000

PUBLICATIONS
Journal
[1] Pei-Chih Wen, Wei-Chih Cheng, Yu-Shuen Wang, Hung-Kuo Chu, Nick C. Tang, Hong-Yuan Mark Liao, "Court reconstruction for camera calibration in broadcast basketball videos," IEEE Transactions on Visualization and Computer Graphics, 2015. (SCI, 5-Year IF: 2.242)
[2] Nick C. Tang, Yen-Yu Lin, Ju-Hsuan Hua, Shih-En Wei, Ming-Fang Weng and Hong-Yuan Mark Liao, "Robust Action Recognition via Borrowing Information across Image Modalities," IEEE Transactions on Image Processing, Vol. 24, Issue 2, pp. 709-723, 2015. (SCI, 5-Year IF: 3.925)
[3] Nick C. Tang, Yen-Yu Lin, Ming-Fang Weng, and Hong-Yuan Mark Liao, "Cross-Camera Knowledge Transfer for Multiview People Counting," IEEE Transactions on Image Processing, Vol. 24, Issue 1, pp. 80-93, 2015. (SCI, 5-Year IF: 3.925)
[4] Nick C. Tang, Chiou-Ting Hsu, Tsung-Yi Lin, and Hong-Yuan Mark Liao, "Example-based Human Motion Extrapolation based on Manifold Learning," IEEE Transactions on Multimedia, Vol. 16, pp. 47-59, January 2014. (SCI, 5-Year IF: 2.344)
[5] Nick C. Tang, Chiou-Ting Hsu, Chih-Wen Su, Timothy K. Shih, and Hong-Yuan Mark Liao, "Video Inpainting on Digitized Vintage Films via Maintaining Spatiotemporal Continuity," IEEE Transactions on Multimedia, Vol. 13, Issue 4, pp. 602-614, July 2011. (SCI, 5-Year IF: 2.344)
[6] Timothy K. Shih, Nick C. Tang, Joseph C. Tsai and Jenq-Neng Hwang, "Video Motion Interpolation for Special Effect Applications," IEEE Transactions on Systems, Man, and Cybernetics, Part C, Vol. 41, pp. 720-732, August 2011. (SCI, 5-Year IF: 2.428)
[7] Timothy K. Shih, Nick C. Tang, Jenq-Neng Hwang, "Exemplar-based Video Inpainting without Ghost Shadow Artifacts by Maintaining Temporal Continuity," IEEE Transactions on Circuits and Systems for Video Technology, Vol. 19, Issue 2, March 2009. (SCI, 5-Year IF: 2.549)
[8] Y. T. Zhuang, Y. S. Wang, T. K. Shih, Nick C. Tang, "Patch-guided facial image inpainting by shape propagation," Journal of Zhejiang University SCIENCE A, pp. 232-238, Feb. 2009. (SCI, IF: 0.608)
[9] Lawrence Y. Deng, Nick C. Tang, Timothy K. Shih, Dong-Liang Lee, Yu-Hsin Cheng and Kuo-Yen Lo, "The Development of Image-based Distance Measurement System," Journal of Internet Technology (JIT), Vol. 10, No. 1, pp. 65-71, 2008. (SCI-E, EI)

Conference
[1] Shih-En Wei, Nick C. Tang, Yen-Yu Lin, Ming-Fang Weng, Hong-Yuan Mark Liao, "Skeleton-augmented Human Action Understanding by Learning with Progressively Refined Data," International Workshop on Human Centered Event Understanding from Multimedia, in conjunction with the ACM International Conference on Multimedia, pp. 4608-4612, 2014.
[2] Yen-Yu Lin, Ju-Hsuan Hua, Nick C. Tang, Min-Hung Chen and Hong-Yuan Mark Liao, "Depth and Skeleton Associated Action Recognition without Online Accessible RGB-D Cameras," IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2617-2624, 2014.
[3] Nick C. Tang, Yen-Yu Lin, Ju-Hsuan Hua, Ming-Fang Weng, Hong-Yuan Mark Liao, "Human Action Recognition using Associated Depth and Skeleton Information," IEEE International Conference on Acoustics, Speech, and Signal Processing, 2014.
[4] Ming-Fang Weng, Yen-Yu Lin, Nick C. Tang, and Hong-Yuan Mark Liao, "Visual Knowledge Transfer among Multiple Cameras for People Counting with Occlusion Handling," in Proceedings of the 20th ACM International Conference on Multimedia (ACM MM 2012), pp. 1-10, October 2012.
[5] Nick C. Tang, Chiou-Ting Hsu, Tsung-Yi Lin, and Hong-Yuan Mark Liao, "Example-based Human Motion Extrapolation based on Manifold Learning," in Proceedings of the 19th ACM International Conference on Multimedia (ACM MM 2011), pp. 1305-1308, 2011.
[6] Nick C. Tang, Hsiao-Rong Tyan, Chiou-Ting Hsu, and Hong-Yuan Mark Liao, "Narrative Generation by Repurposing Digital Videos," The 17th International Conference on MultiMedia Modeling, Lecture Notes in Computer Science, LNCS 6523, Springer, pp. 503-513, January 2011.
[7] Nick C. Tang, Hsing-Ying Zhong, Joseph C. Tsai, Timothy K. Shih and Hong-Yuan Mark Liao, "Motion Inpainting and Extrapolation for Special Effect Production," in Proceedings of the 17th ACM International Conference on Multimedia (ACM MM 2009), pp. 1037-1040, 2009.
[8] Nick C. Tang, Timothy K. Shih, Hong-Yuan Mark Liao, Joseph C. Tsai and Hsing-Ying Zhong, "Motion Extrapolation for Video Story Planning," in Proceedings of the 16th ACM International Conference on Multimedia (ACM MM 2008), Vancouver, BC, Canada, October 27-31, 2008, pp. 685-688.
[9] Nick C. Tang, Timothy K. Shih, Hsing-Ying Zhong, Joseph C. Tsai and Chin-Yao Tang, "Video Falsifying for Special Effect Production," in Proceedings of the 16th ACM International Conference on Multimedia (ACM MM 2008), Vancouver, BC, Canada, October 27-31, 2008, pp. 1101-1102.
[10] Timothy K. Shih, Nick C. Tang, Joseph C. Tsai and Hsing-Ying Zhong, "Video Falsifying by Motion Interpolation and Inpainting," 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2008), Anchorage, Alaska, June 23-28, 2008, pp. 1-8.
[11] Timothy K. Shih, Nick C. Tang, and Jenq-Neng Hwang, "Ghost Shadow Removal in Multi-Layered Video Inpainting," in Proceedings of the IEEE 2007 International Conference on Multimedia & Expo (ICME 2007), Beijing, China, July 2-5, 2007, pp. 1471-1474.
[12] Timothy K. Shih, Nick C. Tang, Wei-Sung Yeh, Ta-Jen Chen, and Wonjun Lee, "Video Inpainting and Implant via Diversified Temporal Continuations," in Proceedings of the 14th Annual ACM International Conference on Multimedia (ACM MM 2006), Santa Barbara, California, USA, October 23-27, 2006, pp. 133-136.
[13] Timothy K. Shih, Nick C. Tang, Wei-Sung Yeh, and Ta-Jen Chen, "Video Inpainting and Implant via Diversified Temporal Continuations (video demonstration)," in Proceedings of the 14th Annual ACM International Conference on Multimedia (ACM MM 2006), Santa Barbara, California, USA, October 23-27, 2006, pp. 965-966.
AUTOBIOGRAPHY
My name is Nick, and I am from Taipei. I hold a Ph.D. with a computer science background. The research I have done and the courses I have taken convinced me that enabling machines to understand data is the key to turning future networks, with their ever-growing numbers of devices and volumes of data, into beneficial applications. With a career plan of making such applications genuinely improve our daily lives, I have pursued R&D work ever since.

From my senior year as an undergraduate until I received my master's degree, I immersed myself in the world of image processing. This period not only built up my experience in independent research, but also gave me the big picture of the potential impact that computer-aided applications can have on human lives. As an undergraduate, I designed an indoor surveillance system and received funding from the Ministry of Science and Technology. During my master's studies, I joined Professor Timothy K. Shih's MINE Lab at Tamkang University and mainly participated in a project developing human-computer interaction technologies. To enable a computer to understand the semantics hidden in human actions, my topic was to detect the joints of a human body and analyze their movements in a monitored scene. I proposed a robust algorithm showing that an ordinary user can easily communicate with a computer through his or her actions alone.

After receiving my master's degree, I resolved to extend my knowledge to computer vision and video processing. Through the video surveillance and human-computer interaction projects, I learned how to handle continuous video data in the time domain and studied many real-world applications. I was also fascinated by how collected video data can be translated into meaningful, even semantic, information. The invitation from Prof. Shih and my curiosity about video processing therefore motivated me to pursue a Ph.D. degree.

During my Ph.D. studies, I was the team leader of the multimedia team in our lab. My research topic was digital video inpainting and editing, whose major goal is to repair damaged content or remove undesired objects in a digital video. The related results were accepted by IEEE Transactions on Circuits and Systems for Video Technology and IEEE Transactions on Systems, Man, and Cybernetics, Part C, two well-regarded IEEE journals. Thanks to this research performance, I completed the Ph.D. degree in only three years.

After graduating from TKU, because I really enjoy collaboration in which novel ideas emerge from brainstorming and teaching one another, I joined the Multimedia Technology Lab directed by Dr. Hong-Yuan Mark Liao at Academia Sinica as a postdoctoral fellow, where I remain today. In the first two years, I proposed an efficient algorithm that helped the National Archives Administration repair damaged aged films and preserve their historical value. For this work I received a contribution award from the administration of the Taiwan e-Learning and Digital Archives Program. In the last three years, my colleagues and I have proposed several effective systems, such as a cross-camera people counting system and a human action recognition system. All of the above systems were published in IEEE Transactions on Multimedia and IEEE Transactions on Image Processing, two flagship IEEE journals. In recent years I have not only improved my research skills but also learned how to collaborate with other people. Dr. Hong-Yuan Mark Liao told me, "Good experience comes from good judgment; good judgment comes from experience; and experience comes from bad judgment." These words always motivate me to become a better researcher, and the rewarding research experience of my postdoctoral period makes me eager to solve the biggest problems in the world.

I am a reliable, motivated and friendly person. My work experience has given me the chance to train myself and has given me skills that can easily be applied to different situations and tasks. I believe that I can be a productive member of your company and do work I can take pride in.
SELECTED RESEARCH TOPICS

Visual knowledge transfer among multiple cameras for people counting
The goal of people counting is to estimate the number of people, or the density of a crowd, in a monitored environment. Both the long-term and short-term statistics of people counts provide useful information for strategy planning and event detection. However, detecting people or estimating crowd density is always a challenging task because of difficulties such as partial occlusions and cluttered backgrounds. We therefore focus on the framework where multiple cameras with different angles of view are available, treat the visual cues captured by each camera as a knowledge source, and carry out cross-camera knowledge transfer to alleviate these difficulties.

Robust Action Recognition via Borrowing Information across Video Modalities
Recent advances in imaging devices have opened opportunities for better solving the tasks of video content analysis and understanding. Next-generation cameras, such as depth or binocular cameras, capture diverse information that complements conventional 2D RGB cameras, so exploiting the resulting multi-modal videos generally facilitates related applications. In this work, we present a scenario in which one additional RGB-D camera is used to improve the accuracy of human action recognition in RGB videos.

Video Inpainting on Digitized Films via Maintaining Spatiotemporal Continuity
Video inpainting is an important video enhancement technique used to facilitate the repair or editing of digital videos. It has been employed worldwide to transform cultural artifacts such as vintage videos and films into digital formats. However, the quality of such material is usually very poor; it often exhibits unstable luminance and damaged content. In this work, we propose a video inpainting algorithm for repairing damaged content in digitized vintage films, focusing on maintaining good spatiotemporal continuity and producing visually pleasing results.
