Design of Embedded Video Capture System Based on ARM9

Denan Li
Electric Power College, Inner Mongolia University of Technology, Hohhot, China
lidenan840603@163.com

Zhiyun Xiao
Electric Power College, Inner Mongolia University of Technology, Hohhot, China

Abstract—With the rapid development of computer, network, image processing and transmission technology, embedded technology is applied to video monitoring more and more widely. This design is based on the S3C2440 hardware platform and the Linux operating system. It uses a mesh V2000 camera for video capture and combines the V4L video interface, MPEG-4 video encoding and decoding, and video transmission technology, aiming at a low-cost, high-performance solution. This article elaborates the development of the OV511 USB camera driver under Linux, the MPEG-4 video coding technology, and the network transmission of the video data.

Keywords—S3C2440 chip; embedded Linux; image capturing; MPEG-4 coding; RTP/RTCP

I. INTRODUCTION

The video monitoring system is an integrated system with strong surveillance capability. Because it is intuitive and convenient and carries rich information, video monitoring is widely applied in many kinds of situations. In recent years, with the rapid development of computer, network and image processing technology, many embedded video monitoring systems have emerged[1], and on-site monitoring has become a research hot spot. This article designs an embedded system that can capture and process images rapidly. Based on the characteristics required of the system, such as small size, low power consumption and high speed, we design an embedded image acquisition system with strong versatility.

II. SYSTEM OVERALL DESIGN

This system adopts embedded system technology, MPEG-4 image coding technology and network transmission technology. It uses a USB camera to capture video on an embedded Linux platform based on the S3C2440 microcontroller. The captured data are transferred to the development board through the USB interface, compressed with MPEG-4 encoding, transmitted over the network and then displayed on a PC, thus realizing the video monitoring function. The structure of the video capture system is shown in Fig. 1.

Figure 1. Structure of video capture system

III. OVERALL DESIGN OF HARDWARE PLATFORM

The S3C2440 adopted in this system is a 16/32-bit RISC (Reduced Instruction Set Computer) embedded microprocessor based on the ARM920T core. The chip includes a full-performance MMU (Memory Management Unit). A 128 MB flash memory stores the operating system, the graphical user interface, the image processing algorithms and so on, and 32 M x 2 SDRAM (Synchronous Dynamic Random Access Memory) is used to run the system and user programs[2]. RS-232 is used to connect to the Linux development host, and Ethernet is used for the system's network transmission. The S3C2440 is mainly oriented to handheld equipment and to applications requiring high cost performance and low power consumption; at present it is widely used as the microcontroller of multimedia terminals. The image input module of the system is the Webeye V2000 camera. Its lens uses a CMOS (Complementary Metal Oxide Semiconductor) sensor, which responds quickly and costs less than a CCD (Charge-Coupled Device). Its digital signal processing (DSP) chip is the OV511; the Linux kernel usually ships with a driver for this chip, which makes porting convenient.
The overall block diagram of the hardware platform is shown in Fig. 2.

Figure 2. Block diagram of hardware platform

This project is supported by the Science Research Foundation of Inner Mongolia Higher Education (Grant No. NJ10074).
IV. SYSTEM SOFTWARE DESIGN

A. Transplantation of the Linux Operating System

Porting the Linux operating system is closely related to the hardware: its essence is making the necessary modifications to Linux according to the concrete hardware platform so that it runs well on that platform. The kernel version used in this system is 2.6.12. The port consists of three tasks: porting the bootloader, porting the Linux kernel and porting the file system. The bootloader runs before the operating system kernel; its main role is to initialize the hardware (including I/O and the special function registers), establish the memory map and bring the hardware and software environment of the system into an appropriate state. The Linux kernel provides good support for the ARM processor and manages most of the components connected to the periphery of the processor. The ported kernel only needs to support the hardware that will actually be used, so the kernel can be trimmed according to the practical application[3].

B. Implementation of the USB Video Capture Driver

Video4Linux (V4L) is the kernel driver framework for video devices under embedded Linux. It provides a series of interface functions for programming video applications under Linux[4]. The driver of a USB camera needs to implement the basic I/O operations, interrupt handling, memory mapping and the ioctl functions that control the I/O channels, and register them in the struct file_operations. Thus, when the application program issues system calls such as open and close, the Linux kernel invokes the functions provided by the driver through the struct file_operations.

Figure 3. Loading of the USB driver module

1) Loading the USB driver module: In order to drive the USB camera on the system platform, we first compile the USB controller driver module into the kernel so that the platform supports the USB interface. Run "make menuconfig" in the kernel source tree and choose "Video for Linux" under "Multimedia devices --->" to load the video4linux module. As shown in Fig. 3, enter "USB support --->", select "<*> Support for USB", then choose the OV511 camera support option; this adds driver support for USB cameras based on the OV511 interface chip. Save the configuration and exit. Running "make dep; make zImage; make modules" generates the driver containing ov511 under /driver/usb, and the newly created zImage is placed in /tftpboot. Finally, boot the board with the new kernel and load the driver with "insmod ov511.o". After the test succeeds, write the kernel to flash with DNW. When the camera is plugged in, the system prompts that an OV511 camera has been discovered, which indicates that the driver has been loaded successfully.

2) Control words and structures provided by V4L: The V4L standard provides many data structures and control commands; the program controls the device by calling the ioctl function to complete the video capture task (a sketch of typical calls is given after the following list). The major device control commands and data structures supported by Video4Linux are:

a) VIDIOCGCAP: obtains the video device information; the result is stored in the structure video_capability.

b) VIDIOCSWIN and VIDIOCGWIN: set and query the display window information, which is stored in video_window.

c) VIDIOCGCAPTURE and VIDIOCSCAPTURE: query and set the sub-area capture information, which is stored in video_capture.

d) VIDIOCGPICT and VIDIOCSPICT: query and set the image attributes.

e) VIDIOCMCAPTURE: grabs a video image.

f) VIDIOCSYNC: checks whether a video frame has been captured successfully.

g) video_channel: describes the attributes of the various signal sources.

h) video_window: contains information about the capture area.

i) video_mbuf: describes the frame buffer mapped by mmap.

j) video_mmap: is used for mmap-based capture.
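As an illustration of how these control words are used (this is a minimal sketch, not the authors' program; the device path /dev/video0 comes from the paper, while the 320x240 window size is an assumption):

    /* Minimal sketch: query a V4L1 device and set the capture window. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/videodev.h>      /* V4L1 header shipped with 2.6-era kernels */

    int main(void)
    {
        struct video_capability cap;
        struct video_window win;

        int fd = open("/dev/video0", O_RDWR);      /* open the camera device */
        if (fd < 0) { perror("open"); return 1; }

        if (ioctl(fd, VIDIOCGCAP, &cap) < 0) {     /* read device capability */
            perror("VIDIOCGCAP"); close(fd); return 1;
        }
        printf("device: %s, max %dx%d\n", cap.name, cap.maxwidth, cap.maxheight);

        if (ioctl(fd, VIDIOCGWIN, &win) == 0) {    /* read the current window */
            win.width  = 320;                      /* request a 320x240 window */
            win.height = 240;
            if (ioctl(fd, VIDIOCSWIN, &win) < 0)
                perror("VIDIOCSWIN");
        }

        close(fd);
        return 0;
    }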
3) Video capture process based on V4L: The flow chart of video capture based on V4L is shown in Fig. 4. The device file corresponding to the camera is /dev/video0. The video device is opened with "vd->fd = open("/dev/video0", O_RDWR)", and "ioctl(vd->fd, VIDIOCGCAP, &(vd->capability))" reads the camera information into video_capability; the information is copied from kernel space into the member variable vd->capability in user space. "ioctl(vd->fd, VIDIOCGPICT, &(vd->picture))" reads the video_picture information of the image buffer. After obtaining the video_picture information, if it needs to be changed, we first assign new values to its components and then call "ioctl(vd->fd, VIDIOCSPICT, &(vd->picture))" to write the settings back.

After this initialization, V4L can be used to capture video. This system captures video data by memory mapping through mmap(): a region of memory is mapped as the buffer through the mmap interface, and after the device file has been mapped into memory the driver captures images through the VIDIOCMCAPTURE control word. The mmap() system call lets processes share memory by mapping the same ordinary file; the same block of physical memory is mapped into the address spaces of different processes, so different processes see each other's updates to the shared memory. A process can access the file just like ordinary memory instead of calling file operation functions, so mmap() memory mapping is much faster than standard file I/O when dealing with large files.

First, "ioctl(vd->fd, VIDIOCGMBUF, &(vd->mbuf))" obtains the frame information of the camera's storage buffer, and the video_mmap structure and the current frame state are updated. mmap is then bound to video_mbuf: the device file corresponding to the video device is mapped into memory with "vd->map = mmap(0, vd->mbuf.size, PROT_READ|PROT_WRITE, MAP_SHARED, vd->fd, 0)". Then "ioctl(vd->fd, VIDIOCMCAPTURE, &(vd->mmap))" starts the capture of one frame, and "ioctl(vd->fd, VIDIOCSYNC, &frame)" is used to determine whether the capture of the current frame has finished. In order to obtain continuous video frames, the number vd->mmap.frames must be set; it determines how many times the frame-capturing loop runs. The address of a frame is obtained with "vd->map + vd->mbuf.offsets[vd->frame]"; each captured frame is stored as a file and then copied into the MPEG-4 compression buffer. Capture stops when the number of captured frames reaches the configured value of vd->mmap.frames. A condensed sketch of this loop is given after Fig. 4.

Figure 4. Video capture process
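The following condensed sketch illustrates the mmap-based capture loop described above. It is not the authors' program: the frame size, pixel format and frame count are assumptions, and encode_frame() is a hypothetical placeholder standing in for the copy into the MPEG-4 compression buffer.

    /* Sketch of the VIDIOCGMBUF / mmap / VIDIOCMCAPTURE / VIDIOCSYNC loop.
     * Assumptions: /dev/video0, 320x240 frames, YUV420P palette, 32 frames. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <linux/videodev.h>

    static void encode_frame(unsigned char *data)
    {
        (void)data;   /* hypothetical: hand the frame to the MPEG-4 encoder */
    }

    int main(void)
    {
        struct video_mbuf mbuf;
        struct video_mmap vmmap;

        int fd = open("/dev/video0", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        if (ioctl(fd, VIDIOCGMBUF, &mbuf) < 0) {        /* frame buffer layout */
            perror("VIDIOCGMBUF"); close(fd); return 1;
        }
        unsigned char *map = (unsigned char *)mmap(0, mbuf.size,
                PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (map == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

        vmmap.width  = 320;
        vmmap.height = 240;
        vmmap.format = VIDEO_PALETTE_YUV420P;           /* assumed pixel format */

        for (int i = 0; i < 32; i++) {                  /* capture 32 frames */
            int frame = i % mbuf.frames;
            vmmap.frame = frame;
            if (ioctl(fd, VIDIOCMCAPTURE, &vmmap) < 0)  /* start grabbing frame */
                break;
            if (ioctl(fd, VIDIOCSYNC, &frame) < 0)      /* wait until it is done */
                break;
            encode_frame(map + mbuf.offsets[frame]);    /* frame address in map */
        }

        munmap(map, mbuf.size);
        close(fd);
        return 0;
    }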
C. Video Data Compression and Network Transmission

Because of the limited network bandwidth, the video data must be compressed before being transmitted over Ethernet. This system uses the new-generation, object-based MPEG-4 coding standard, which has great advantages in interactivity, error resilience and compression efficiency[5].

XviD is the newest MPEG-4 codec and also the first truly open-source MPEG-4 encoder published under the GPL license[6]. In order to give the embedded system MPEG-4 encoding capability, the codec must first be ported to the embedded system, and an application program is then written against the functions it provides to realize the video coding. Here we choose xvidcore 1.1.2; the porting steps are as follows:

1) Configure the XviD core: configure the XviD core for cross compilation with the following command:

[root@localhost generic]# ./configure --prefix=/home/123 CC=arm-linux-gcc --host=arm-linux

The configuration mainly specifies the build, host and target systems, the compiler type, some default file formats and the necessary header files.

2) Compile the source code: compiling turns the generic source files into object files, which are then packed into a dynamic link library. A dynamic link library is a compiled piece of code that lib/ld.so loads dynamically when a program that needs it runs; it saves disk space, memory space and compilation time.

3) Write the application program: copy the library file libxvidcore.so produced by cross compilation into the libs subdirectory of the cross compiler's working directory. This library provides the programming interface for the other modules of the system; we then write our own application program according to the actual needs (a sketch of the typical encoder calls is given below).
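The paper does not list its encoder code. As an illustration only, the following sketch shows the typical calling sequence of the xvidcore 1.x API (xvid_global / xvid_encore) for encoding one YUV420 frame; encoder_init and encoder_encode are hypothetical wrapper names, all structure fields not shown are left at zero, and the frame rate and buffer handling are assumptions.

    /* Hedged sketch of the xvidcore 1.x calling sequence (not the authors' code).
     * Assumes a planar YUV420 (I420) input frame and 25 fps. */
    #include <string.h>
    #include <xvid.h>

    static void *enc_handle = NULL;

    int encoder_init(int width, int height)
    {
        xvid_gbl_init_t   gbl_init;
        xvid_enc_create_t create;

        memset(&gbl_init, 0, sizeof(gbl_init));
        gbl_init.version = XVID_VERSION;
        xvid_global(NULL, XVID_GBL_INIT, &gbl_init, NULL);  /* init the library */

        memset(&create, 0, sizeof(create));
        create.version = XVID_VERSION;
        create.width   = width;
        create.height  = height;
        create.fincr   = 1;      /* time increment ...                  */
        create.fbase   = 25;     /* ... over time base: 25 frames per second */
        if (xvid_encore(NULL, XVID_ENC_CREATE, &create, NULL) < 0)
            return -1;
        enc_handle = create.handle;                         /* encoder instance */
        return 0;
    }

    /* Encode one I420 frame; returns the number of bytes written to out_buf. */
    int encoder_encode(unsigned char *yuv, int width,
                       unsigned char *out_buf, int out_size)
    {
        xvid_enc_frame_t frame;

        memset(&frame, 0, sizeof(frame));
        frame.version         = XVID_VERSION;
        frame.input.csp       = XVID_CSP_I420;  /* contiguous planar YUV 4:2:0 */
        frame.input.plane[0]  = yuv;
        frame.input.stride[0] = width;          /* luma stride of the I420 buffer */
        frame.type      = XVID_TYPE_AUTO;       /* let the encoder pick I/P type */
        frame.quant     = 0;                    /* 0 = use the rate control */
        frame.bitstream = out_buf;
        frame.length    = out_size;

        return xvid_encore(enc_handle, XVID_ENC_ENCODE, &frame, NULL);
    }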
The system transmits the video data in real time using the RTP (Real-time Transport Protocol)/RTCP (Real-time Transport Control Protocol) protocols defined by the IETF audio/video transport working group. The compressed video data are processed by the RTP packing module, which builds RTP packets by adding the RTP header, and the packets are sent out by the transmission module. In order to satisfy the latency and packet loss requirements of live network video transmission, RTP must cooperate with RTCP: RTCP provides feedback on the real-time transmission, containing statistics such as the number of packets sent and lost, and the server can adjust the transmission rate dynamically according to these statistics.

Figure 5. Network transmission process

There are now many open-source libraries that implement the RTP/RTCP protocol[7]. JRTPLIB is an object-oriented RTP library written in C++ that fully follows the RFC 1889 design. In order to transmit the video data over the network with JRTPLIB, the jrtplib source must first be configured and cross-compiled to generate a jrtplib library for the embedded environment. After the static library has been produced, the application program is written by calling the jrtplib static library to transmit the encoded images (a sketch of the sender side is given below). The program is divided into a sender and a receiver: the video capture part of the sender collects video information and sends the data to the encoder for MPEG-4 coding, then prepends the RTP header to the encoded VOP (Video Object Plane) data and transmits it over the network through the jrtplib transmission module. The receiver receives the RTP video data, removes the RTP header and passes the payload to the MPEG-4 decoder for decoding. The specific flow is shown in Fig. 5.
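As an illustration of the sender side (a sketch built on the classic JRTPLIB 3.x session API, not the authors' program), the following C++ fragment creates a session and sends one encoded frame per RTP packet. The destination address, port base, payload type, clock rate and the wrapper names rtp_sender_init / rtp_send_frame are assumptions.

    // Hedged sketch of an RTP sender built on JRTPLIB 3.x.
    // Assumptions: receiver at 192.168.1.100:9000, local port base 8000,
    // dynamic payload type 96, 90 kHz clock, 25 frames per second.
    #include <arpa/inet.h>
    #include <rtpsession.h>
    #include <rtpsessionparams.h>
    #include <rtpudpv4transmitter.h>
    #include <rtpipv4address.h>

    static RTPSession session;   // JRTPLIB 3.5.x classes live in the global namespace

    int rtp_sender_init()
    {
        RTPSessionParams sessparams;
        RTPUDPv4TransmissionParams transparams;

        sessparams.SetOwnTimestampUnit(1.0 / 90000.0);  // 90 kHz RTP clock
        transparams.SetPortbase(8000);                  // local RTP/RTCP ports

        int status = session.Create(sessparams, &transparams);
        if (status < 0)
            return status;

        // Send everything to the monitoring PC (assumed address and port).
        RTPIPv4Address addr(ntohl(inet_addr("192.168.1.100")), 9000);
        return session.AddDestination(addr);
    }

    // Send one encoded MPEG-4 VOP as a single RTP packet.
    int rtp_send_frame(const unsigned char *data, int len)
    {
        // payload type 96 (dynamic), no marker, timestamp step 3600 = 90000 / 25 fps
        return session.SendPacket(data, len, 96, false, 3600);
    }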
Figure 6. Effect of image acquisition

Because the system uses the MPEG-4 compression algorithm, it achieves a high compression ratio; the transmission rate is 8 frames/s at ordinary bandwidth, and the video displayed by the system is smooth and stable. The display effect of the video capture system is shown in Fig. 6. In follow-up work, if a dedicated MPEG-4 video encoding chip is used, the system can achieve even better video monitoring results.

V. CONCLUSION

The video capture system chooses the 32-bit S3C2440 embedded microcontroller with a high clock frequency and the V2000 camera based on the OV511, and combines the V4L video interface technology, MPEG-4 video encoding and decoding technology and streaming video transmission technology; it realizes rapid video acquisition and real-time transmission well. The system has stable performance and low cost. In follow-up work the system still needs improvement in video capture and in its use of the V4L standard, and image processing algorithms can be added if necessary. This article has a certain research value for applications of video images.

REFERENCES

[1] Huiming Liu and Rulin Wang, "Development philosophy and practice of the video monitoring system," Intelligent Building, no. 3, 2004.
[2] Weihua Ma, Embedded Systems Principles and Applications. Beijing: Beijing University of Posts and Telecommunications Press, 2006.
[3] Tianze Sun, Wenju Yuan, et al., Embedded Design and Linux Driver Development Guide. Beijing: Electronic Industry Press, 2005.
[4] Alan Cox, "Video4Linux Programming" [EB/OL]. http://kernelbook.sourceforge.net, 2000.
[5] Yuzhuo Zhong, Qi Wang, and Yuwen He, Object-based Multimedia Data Compression and Coding International Standard MPEG-4 and Calibration Model. Beijing: Science Press, 2000.
[6] XviD core API overview. http://www.xvid.org
[7] Jori Liesenborgs, JRTPLIB 3.5.2, March 26, 2006.