The document describes an EPICS, MATLAB, and GigE CCD camera-based beam imaging system used for the IAC-RadiaBeam THz project. The system allows for real-time beam observation, tuning of THz radiation production, and measurement of transverse beam emittance. It consists of a YAG screen, Prosilica GC1290 GigE CCD camera, LED illuminator, and optics components. Beam images are acquired in real-time using SampleViewer software and remotely using EPICS and MATLAB. Beam images are analyzed in MATLAB to measure beam size and transverse emittance.
EPICS, MATLAB, GigE CCD CAMERA BASED BEAM IMAGING SYSTEM FOR THE IAC-RadiaBeam THz PROJECT
Chris Eckman∗, Y. Kim, A. Andrews, T. Downer, and P. Buaphad
Idaho State University, Pocatello, ID 83209, USA
Abstract
At the Idaho Accelerator Center (IAC) of Idaho State University (ISU), we have been operating an L-band RF linear accelerator running at low energies (4 - 44 MeV) for the IAC-RadiaBeam THz project [1]. We have designed and implemented a beam image acquisition and analysis system that can be used for real-time observation of the electron beam, tuning of THz radiation production, and measurement of transverse beam emittance. In this paper, we describe the components of the imaging system, the acquisition of real-time beam images, the remote acquisition of single-shot beam images, and the processing of the beam images in MATLAB to measure beam size and transverse beam emittance.
INTRODUCTION
Due to a 1.2 mm wide slit at the entrance of the THz radiator, experiments for the IAC-RadiaBeam THz project require the implementation of an optics system to control the beam shape for obtaining optimal beam transmission. To accomplish this, the ISU Advanced Accelerator and Ultrafast beam Lab (AAUL) has developed an advanced beam imaging system that acquires real-time beam images as well as single-shot beam images. Real-time beam imaging is achieved through use of the software application SampleViewer in conjunction with the beam imaging system. Single-shot beam images can be acquired remotely using the Experimental Physics and Industrial Control System (EPICS), MATLAB, and the EPICS modules areaDetector and MATLAB Channel Access (MCA). The beam imaging system, taking single-shot beam images, allows for measuring beam size as well as transverse beam emittance.
THE IMAGING SYSTEM
The components of the beam imaging system are as follows: an Yttrium Aluminium Garnet (YAG) screen on an actuator, a Prosilica GC1290 GigE CCD camera with a mounted adjustable Fujinon HF50SA-1 lens, a screen LED illuminator, and an optics table to mount the optics cage. The optics cage is assembled with components, manufactured by Thorlabs [2], listed in Table 1. The essential components of the beam imaging system can be seen in Fig. 1. The light generated from the interaction of the beam on the YAG screen can be observed through a crystal window situated adjacent to the YAG screen.
∗ cryptoscientia@gmail.com
Figure 1: Set-up of the beam imaging system.
A small brightness-adjustable LED was mounted on the end of the optics cage to illuminate the YAG screen and to ease the alignment and calibration of the camera. Light picked up by the camera from sources other than the YAG screen increases the background noise on each image, so a black covering was draped around the optics cage to shield the camera from all other light. The proximity of the camera to the accelerator poses a risk of radiation damage to the camera sensor, so the camera was placed inside a lead tube, shielding it from the radiation. For extra protection, the optics cage was almost entirely surrounded by lead bricks.
PROSILICA CAMERA
Beam parameter measurements rely heavily on image quality. The quality of the image is affected by the distance between the camera and the YAG screen, electronic interference, broken pixels in the camera sensor, and exposure time. The Prosilica GC1290 GigE CCD camera was chosen for the beam imaging system for its high resolution and image quality, as summarized in Table 2. The Fujinon HF50SA-1 lens is focused on the YAG screen for optimal viewing and imaging; the camera and lens are shown in the bottom right of Fig. 2. The Fujinon lens allows for efficient focusing and adjusting of the equipment.

Table 1: Main components purchased through Thorlabs for the imaging cage set-up
Description          Part Number  QTY
Threaded cage plate  LCP01T       4
Cage assembly rod    ER18         4
Steel optical post   TR4          2
Post holder          PH4          2
Mounting base        BA2T2        2
Table 2: Main specifications of the Prosilica GigE CCD camera and the Fujinon HF50SA-1 lens
Parameter Value or Type
Interface GigE
Resolution 1280 × 960
Sensor size Type 1/3
Cell size 3.75 µm
Max frame rate at full resolution 32 fps
Lens focal length 50 mm
Lens minimum field of view 38 mm × 28 mm
The Prosilica camera has a trigger input port that receives an external trigger signal from the timing system of the accelerator. To carry this signal, the power cable of the camera was physically modified, and the trigger signal was fed into the trigger input port. Triggering the camera from the timing system of the accelerator synchronizes the camera with the beam so that the images taken are accurate depictions of the actual beam. The CCD camera features a Gigabit Ethernet interface used for transmitting beam images over the local area network to a dedicated Linux computer system running EPICS, where the data is stored and processed. The EPICS host allows for remote control of the camera via the EPICS module areaDetector.
SAMPLEVIEWER
SampleViewer is a free software application for the GigE
CCD camera distributed by Prosilica. It has a user friendly
interface that incorporates the cameras internet connection
to run several diagnostics on the CCD camera as well as
acquire real time beam images as shown in Fig. 2. The di-
agnostics that can be performed include: checking image
quality, camera alignment, camera triggering, image gain,
and image exposure time. To put the CCD camera on the
network, the Ethernet adapter of the computer needed to
be configured to improve system performance when using
the GigE CCD camera [3]. The configurations included the
packet size (MTU of 8228), interrupt moderate rate, trans-
mit buffers, and receive buffers. The camera also uses a
companion program, IPConfig, to change the dynamically
allocated IP address of the camera to a static IP address
allowing it to be used in EPICS.
Figure 2: Control panels of SampleViewer, a beam image, and the Prosilica GC1290 camera with Fujinon lens.
EPICS AND AREADETECTOR
To more precisely control the CCD camera and extract data from the images, a programmable method was employed. EPICS provides a platform that allows for camera control in an accelerator system [4]. EPICS also provides a Graphical User Interface (GUI) for user interaction when needed, such as the main areaDetector GUI shown in Fig. 3. Each button and input box on the GUI has an associated Process Variable (PV). PVs are named pieces of data associated with a machine, such as status values, readbacks, and physical parameters. Images are taken by the CCD camera using the “Start” button in the areaDetector GUI panel. The PV associated with that button is named “13PS1:cam1:Acquire”. All PVs in areaDetector can be controlled in MATLAB through MCA.
MATLAB CHANNEL ACCESS
Figure 3: Main GUI screen of areaDetector.

MCA creates a channel from MATLAB to EPICS; this link allows for communication between the two programs [5]. With MCA, we are able to send information in the form of PVs between EPICS and MATLAB by means of the channel access, so the accelerator can be controlled with MATLAB directly. MCA is an extension of EPICS that can be downloaded and installed in both EPICS and MATLAB. When installed properly, code can be entered into MATLAB to control equipment in EPICS. The MCA functions MCAOPEN, MCAGET, and MCAPUT allow MATLAB to open a channel to EPICS, read information from EPICS, and write information to PVs, respectively. An example of the MCA and MATLAB code is as follows:
CameraAcquire = MCAOPEN('13PS1:cam1:Acquire'); % open a channel to the Acquire PV
AcquireData = MCAGET(CameraAcquire);           % read the current acquisition state
MCAPUT(CameraAcquire, 1);                      % write 1 to start taking images
MCACLOSE(CameraAcquire);                       % close the channel when done
MCAGET retrieves the status of the PV associated with the camera shutter. If CameraAcquire is 0, then the camera shutter is closed. However, if CameraAcquire is 1, then the shutter is open and the camera starts taking images. MCAPUT writes values into the PV, such as 1 or 0, and MCACLOSE closes the channel of the PVs. The acquired images are then transferred to the hard drive and saved in a designated file path, which is also a PV. These MCA commands are used in MATLAB to control all aspects of the camera as well as other equipment, such as power supplies [4]. Once an image is in MATLAB, it can be processed to remove background, plot projections, filter the image, and extract data for parameter measurement. Fig. 4 (left) shows a background-subtracted and color-enhanced version of the image in Fig. 2. The image on the right is used by an automatic emittance measurement program running in MATLAB and shows the horizontal and vertical projections on the image directly [6].
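As a minimal sketch of the kind of processing described above (the file names and the projection-based rms estimate are illustrative assumptions, not the exact code of the measurement program in [6]):

```matlab
% Read a saved beam image, subtract a dark (beam-off) background image,
% and estimate the rms beam size from the horizontal projection.
img  = double(imread('beam.png'));   % single-shot beam image (hypothetical file)
dark = double(imread('dark.png'));   % background image taken with no beam
net  = max(img - dark, 0);           % background-subtracted image, clipped at zero

profx = sum(net, 1);                 % horizontal projection (sum over rows)
x     = 1:numel(profx);              % pixel coordinate axis
mu    = sum(x .* profx) / sum(profx);                   % intensity-weighted centroid
sigx  = sqrt(sum((x - mu).^2 .* profx) / sum(profx));   % rms width in pixels
% Multiplying sigx by the mm-per-pixel calibration of the YAG screen
% optics gives the physical beam size.
```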
Figure 4: MATLAB processed beam images.
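For reference, the transverse rms emittance extracted from such projection data is conventionally defined as (standard statistical definition, not specific to the program in [6]):

```latex
\varepsilon_{\mathrm{rms}} = \sqrt{\langle x^{2}\rangle \,\langle x'^{2}\rangle - \langle x x'\rangle^{2}}
```

where $x$ is the transverse position, $x'$ the divergence, and the averages are taken over the measured beam distribution.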
SUMMARY
The beam imaging system is useful for many accelerator applications and beam parameter measurements. The single-shot beam images were taken by the Prosilica GigE CCD camera using MATLAB in conjunction with areaDetector. The beam images were transferred from the camera to MATLAB, where they were analyzed and the relevant data extracted. The combination of MATLAB and EPICS allows for the control of accelerator equipment, access to higher-level programming, and automatic beam parameter measurements, such as beam size and transverse beam emittance. The beam imaging system was used to optimize the production of THz radiation at the IAC [7, 8].
ACKNOWLEDGEMENTS
We would like to give our sincere thanks to Dr. Mark Rivers of APS for his advice and guidance on areaDetector and MCA. In addition, we would like to thank Dr. Henrik Loos of SLAC for his strong interest, advice, and encouragement in this project.
REFERENCES
[1] http://www.iac.isu.edu
[2] http://www.thorlabs.us/index.cfm
[3] http://www.alliedvisiontec.com/us/products/cameras/gigabit-ethernet/prosilica-gc/gc1290.html
[4] A. Andrews et al., in these proceedings.
[5] A. Terebilo et al., in Proc. ICALEPCS2001, THAP030, San Jose, CA, USA.
[6] C. Eckman et al., in Proc. IPAC2012, TUPPC055, New Orleans, LA, USA.
[7] Y. Kim et al., in these proceedings.
[8] A. V. Smirnov et al., in these proceedings.