IMAGE STABILIZATION IN SMARTPHONES
A SEMINAR REPORT
Submitted by
Mr. AHMED SALMAN P.S
(20131573)
To The department of technical education Government of Kerala
in partial fulfillment of the requirements for the award of
Diploma in COMPUTER ENGINEERING
DEPARTMENT OF COMPUTER ENGINEERING
MA’DIN POLYTECHNIC COLLEGE
MELMURI.P.O, MALAPPURAM
JUNE 2022-2023
DEPARTMENT OF COMPUTER ENGINEERING
MA’DIN POLYTECHNIC COLLEGE MELMURI.P.O,
MALAPPURAM
CERTIFICATE
This is to certify that the SEMINAR REPORT entitled IMAGE STABILIZATION
IN SMARTPHONES submitted by AHMED SALMAN (20131573) to the Department of
Technical Education, Government of Kerala, towards partial fulfilment of the
requirements for the award of the Diploma in Computer Engineering is a bona fide
record of the work carried out by him under my supervision and guidance.
HEAD OF SECTION:
Mrs. ANJALI. C
LECTURER IN CHARGE:
Mrs. SILPA PRADEEPKUMAR
(Department seal)
IS IN SMARTPHONES
DECLARATION
I hereby declare that the Seminar Report entitled IMAGE STABILIZATION
IN SMARTPHONES, which is being submitted to MPTC Malappuram under the
Technical Education Department in Computer Engineering, is a bona fide report of
the work carried out by me. The material contained in this report has not been
submitted to any institute or university for the award of any degree.
Place: Malappuram
Date: 28-10-2022
DEPT OF COMPUTER ENGINEERING 1 MPTC, MALAPPURAM
ACKNOWLEDGEMENT
I give all honor and praise to GOD, who gave me wisdom and enabled me to complete my
seminar on "IMAGE STABILISATION IN SMARTPHONES" successfully. I express my
sincere thanks to Mr. ABDUL HAMEED C.P, Principal, MA’DIN Polytechnic College,
Malappuram, who permitted me to make use of the college facilities, such as the
Software Labs and Internet, to the maximum possible extent.
I take this opportunity to express my wholehearted thanks to Mrs. ANJALI. C, Head of the
Department of Computer Engineering, for her continuous support and the inspiration given
during the study and analysis of the seminar topic.
I am deeply indebted to Mrs. RINCY.P, Mrs. SILPA PRADEEPKUMAR, seminar
coordinators, for their support in my seminar. They helped me to overcome several
constraints during the development of this seminar.
Lastly, I express my humble gratitude and thanks to all my teachers and other faculty members
of the Department of Computer Engineering, for their sincere and friendly cooperation in
completing this seminar.
ABSTRACT
Image Stabilization (IS) technology has been considered essential to delivering
improved image quality in professional cameras. More recently, as a result of
advancing technology, IS has become increasingly popular to handheld device makers
who want to propose high-end features for their products. Image stabilization
significantly improves the usable camera shutter speed and offers precise suppression
of camera vibration. Today, three methods are used in smartphones: electronic image
stabilization (EIS), optical image stabilization (OIS), and artificial-intelligence
image stabilization (AIS). Whether capturing still images or recording moving video,
image stabilization will always be a major factor in reproducing a near-perfect digital
replica. While media capturing devices such
as digital cameras, digital camcorders, mobile phones, and tablets have decreased in
physical size, their requirements for pixel count density and resolution quality have
increased drastically over the last decade and will continue to rise. The market shift to
compact mobile devices with high megapixel capturing ability has created a demand
for advanced stabilization techniques.
CONTENTS
CHAPTER TITLE PAGE NO
1 INTRODUCTION 5
2 HISTORY 6
3 IMAGE STABILIZATION TECHNIQUES 8
3.1 OPTICAL IMAGE STABILIZATION 8
3.2 ELECTRONIC IMAGE STABILIZATION 8
3.3 ARTIFICIAL IMAGE STABILIZATION 9
4 APPLICATION IN STILL PHOTOGRAPHY 10
5 IMAGE STABILIZATION PRINCIPLES 11
6 OIS (OPTICAL IMAGE STABILIZATION) 12
6.1 OIS: FEATURES AND BENEFITS 13
6.2 OIS TYPES 15
6.3 OIS BEHAVIOR 19
6.4 OIS SYSTEM CHARACTERISTICS 32
7 EIS (ELECTRONIC IMAGE STABILIZATION) 35
7.1 WORKING OF EIS 37
7.3 WAVELET-EIS 42
8 AIS (ARTIFICIAL IMAGE STABILIZATION) 45
8.1 ADVANTAGE OF AIS OVER OTHER 46
8.2 AI STABILIZATION AT DIFFERENT LEVELS 46
8.3 AI STABILIZATION BENEFITS IN NIGHT 47
9 OIS VS EIS VS AIS 48
10 CONCLUSION 49
11 REFERENCES 50
CHAPTER 1
INTRODUCTION
Image Stabilization (IS) technology has been considered essential to delivering
improved image quality in professional cameras. More recently, as a result of
advancing technology, IS has become increasingly popular to handheld device makers
who want to propose high-end features for their products. So, manufacturers like ST
have worked hard on its technologies and methods for image stabilization to
significantly improve camera shutter speed and to offer precise suppression of camera
vibration. Today, from the technological point of view, Digital Image Stabilization
(DIS), Electronic Image Stabilization (EIS) and Optical Image Stabilization (OIS) are
the best understood and the easiest to integrate in digital still cameras and smartphones,
though they can produce different image-quality results: in fact, DIS and EIS require
large memory and computational resources on the hosting devices, while OIS acts
directly on the lens position itself and minimizes memory and computation demands on
the host. As an electro-mechanical method, lens stabilization (optical unit) is the
most effective method for removing blurring effects from involuntary hand motion or
shaking of the camera. Whether capturing still images or recording moving video,
image stabilization will always be a major factor in reproducing a near perfect digital
replica. A lack thereof will result in image distortion through pixel blurring and the
creation of unwanted artifacts. While media capturing devices such as digital cameras,
digital camcorders, mobile phones, and tablets have decreased in physical size, their
requirements for pixel count density and resolution quality have increased drastically
over the last decade and will continue to rise. The market shift to compact mobile
devices with high megapixel capturing ability has created a demand for advanced
stabilization techniques. Two methods, electronic image stabilization (EIS) and optical
image stabilization (OIS), are the most common implementations.
CHAPTER 2
HISTORY
Dr. Oshima invented a basic technology for image stabilization in 1983, and five years
later successfully commercialized the world’s first video camera to feature the
technology. In his own words: “It was in 1988 that my new invention was used to launch
the first video camera products featuring an IS function.”
Vibrating gyroscopes had been rather unstable, but we improved their structure to the
point that we could mass produce them for the first time anywhere in the world. After
that, the invention immediately spread worldwide, and consequently, my invention is
now employed in all digital camera image stabilizers. Every time I see beginners taking
professional-level clear pictures or videos with no image blur, I feel really glad to have
brought such a helpful invention to the public. And, also much to my unexpected
delight, the vibrating gyro that we invented was also modified to be widely used for car
navigation systems and stable vehicle travel.
When I look back at this now, I never felt distressed even when encountering any of the
many technological hurdles, or when people didn’t understand my invention and
refused to use it in our products. The reason was that the feeling of excitement of doing
what no-one had ever done was much more powerful.
A gyroscope is capable of precisely measuring the slightly rotated angle of an object in
the air (e.g., an airplane) and is commonly used to determine the attitude of an airplane.
The micro-miniature version of a gyro is a vibrating gyro.
“Camera-motion image-blur results from rotation,” I realized. A moment later, I made
the mental connection between this camera jitter and our vibrating gyroscopes; that is,
it occurred to me that image blur could be eliminated by measuring the rotation angle
of a camera with a vibrating gyro and correcting the image accordingly.” I immediately
started to research this idea. “Blur” was less of a problem at this time because a camera
body itself was still heavy, so I started with basic research; i.e. analysing exactly what
was considered as an annoying blur.
After a year of efforts, I could not conclude that my invention would enable image
stabilization, so the development of a prototype also ended up being terminated.
Being unwilling to let it go, however, I continued to work on the project on my own at
night after work. Then one day at an exhibition, I happened to see a laser display (using
visible lasers that use the three primary colours of light (red-green-blue) to display
characters and graphics), and I noticed that a certain part of the display could be used
for image stabilization. Later, using the part of that laser display I had seen, I finally
finished my long-sought prototype for image stabilization. I remember the moment
when I first turned on the prototype camera with nervous excitement. Even with a
shake of the camera, the image did not blur at all. It was too good to be true! That was
the most wonderful moment in my life.
Nevertheless, obstacles to success remained. My colleagues responded by saying “Is it
really needed?” or “Will it be a big seller?” They were reluctant to acknowledge the
significance of image stabilization. So I loaded the prototype camera onto a helicopter
and took pictures of Osaka Castle from the air. The prototype enabled such clear
pictures to be taken that even the faces of the people in the castle tower could be
recognized. These new images were quite obviously different from those significantly
blurred images taken with other cameras. I was therefore finally given a green light to
develop IS-related products.
CHAPTER 3
IMAGE STABILIZATION TECHNIQUES
There are three types of techniques:
1. Optical image stabilization
2. Electronic image stabilization
3. Artificial image stabilization
3.1 OPTICAL IMAGE STABILIZATION
An optical image stabilization system usually relies on gyroscopes or accelerometers to
detect and measure camera vibrations. The readings, typically limited to pan and tilt,
are then relayed to actuators that move a lens in the optical chain to compensate for the
camera motion. In some designs, the favoured solution is instead to move the image
sensor, for example using small linear motors. Either method is able to compensate for
the shaking of the camera and lens, so that light strikes the image sensor in the same
fashion as if the camera were not vibrating. Optical image stabilization is particularly
useful at long focal lengths and also works well in low-light conditions.
Optical image stabilization is used to reduce blurring associated with motion and/or
shaking of the camera during the time the image sensor is exposed to the capturing
environment. However, it does not prevent motion blur caused by movement of the
target subject or extreme movements of the camera itself, only the relatively small
shaking of the camera lens by the user — within a few optical degrees. This camera-
user movement can be characterized by its pan and tilt components, where the angular
movements are known as yaw and pitch, respectively. Camera roll cannot be
compensated since 'rolling' the lens doesn't change/ compensate for the roll motion, and
therefore does not have any effect on the image itself, relative to the image sensor.
3.2 ELECTRONIC IMAGE STABILIZATION
EIS is a digital image compensation technique which uses complex algorithms to
compare frame contrast and pixel location for each changing frame.
Pixels on the image border provide the buffer needed for motion compensation. An EIS
algorithm calculates the subtle differences between each frame and then the results are
used to interpolate new frames to reduce the sense of motion. Though the advantage
with this method is the ability to create inexpensive and compact solutions, the
resulting image quality will always be reduced due to image scaling and image signal
postprocessing artifacts, and more power will be required for taking additional image
captures and for the resulting image processing. EIS systems also suffer when at full
electronic zoom (long field-of-view) and under low-light conditions. Electronic image
stabilization, also known as digital image stabilization, has primarily been developed
for video cameras. Electronic image stabilization relies on different algorithms for
modelling camera motion, which then are used to correct the images. Pixels outside the
border of the visible image are used as a buffer for motion and the information on these
pixels can then be used to shift the electronic image from frame to frame, enough to
counterbalance the motion and create a stream of stable video.
Although the technique is cost efficient, mainly because there is no need for moving
parts, it has one shortcoming which is its dependence on the input from the image
sensor. For instance, the system can have difficulties in distinguishing perceived
motion caused by an object passing quickly in front of the camera from physical
motion induced by vibrations.
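The border-buffer mechanism described above can be sketched in a few lines of Python. This is an illustrative toy, not a production EIS pipeline: it assumes the per-frame shift (dx, dy) has already been estimated by comparing consecutive frames, and it simply slides a fixed-size crop window inside the larger captured frame to cancel that shift.

```python
def stabilize_frame(frame, dx, dy, border):
    """Crop a stable window out of a frame that carries a pixel border buffer.

    frame:  2-D list of pixel rows (the full sensor readout)
    dx, dy: estimated shift of the image content, in pixels
    border: width of the buffer zone around the visible image
    The crop window follows the measured shift, so the visible content
    stays fixed from frame to frame; the output size is constant.
    """
    dx = max(-border, min(border, dx))  # clamp motion to the available buffer
    dy = max(-border, min(border, dy))
    h, w = len(frame), len(frame[0])
    return [row[border + dx: w - border + dx]
            for row in frame[border + dy: h - border + dy]]
```

With an 8x8 frame and a 2-pixel border, the visible output is always 4x4: a shift of up to ±2 pixels is absorbed entirely by the buffer, which is why EIS footage is slightly cropped relative to the raw sensor readout.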
3.3 ARTIFICIAL IMAGE STABILIZATION
Nowadays smartphones ship with AI chips that provide extra processing power,
dedicated to adapting to the user and continuously improving results. AI stabilization
takes advantage of this AI chip. Remember, this is an added processing layer over the
traditional stabilization (OIS and EIS) setup. Where EIS works on hard-coded logic
with a constant algorithm, AI stabilization continuously performs minute changes to
the algorithm and image processing. This helps reduce blurriness at each zoom level.
In other words, it is a combination of EIS and OIS.
CHAPTER 4
APPLICATION IN STILL PHOTOGRAPHY
In photography, image stabilization can facilitate shutter speeds 2 to 5.5 stops slower
(exposures 4 to 22½ times longer), and even slower effective speeds have been
reported. A rule of thumb to determine the slowest shutter speed possible for hand-
holding without noticeable blur due to camera shake is to take the reciprocal of the
35 mm equivalent focal length of the lens, also known as the “1/mm rule”. For example,
at a focal length of 125 mm on a 35 mm camera, vibration or camera shake could affect
sharpness if the shutter speed is slower than 1/125 second. An image taken at 1/125
second with an ordinary lens could be taken at 1/15 or 1/8 second with an IS-equipped
lens and produce almost the same quality. When calculating the effective focal length,
it is important to take into account the image format a camera uses. For example, many
digital SLR cameras use an image sensor that is 2/3, 5/8, or 1/2 the size of a 35 mm
film frame. This means that the 35 mm frame is 1.5, 1.6, or 2 times the size of the
digital sensor. The latter values are referred to as the crop factor, field-of-view
crop factor, focal-length multiplier, or format factor. On a 2× crop factor camera, for
instance, a 50 mm lens produces the same field of view as a 100 mm lens used on a 35 mm
film camera. However, image stabilization does not prevent motion blur caused by
the movement of the subject or by extreme movements of the camera. Image
stabilization is only designed for and capable of reducing blur that results from normal,
minute shaking of a lens due to hand-held shooting. Some lenses and camera bodies
include a secondary panning mode or a more aggressive 'active mode', both described
in greater detail below under optical image stabilization. Astrophotography makes
much use of long-exposure photography, which requires the camera to be fixed in
place. However, fastening it to the Earth is not enough, since the Earth rotates. The
Pentax K-5 and K-r, when equipped with the O-GPS1 GPS accessory for position data,
can use their sensor-shift capability to reduce the resulting star trails. Stabilization can
be applied in the lens, or in the camera body.
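The reciprocal rule and the crop-factor adjustment above can be combined into a small worked example. The function below is a sketch of the rule of thumb only (real-world safe speeds vary with technique and resolution); the is_stops argument models the claimed 2 to 5.5 stop benefit of stabilization, each stop doubling the usable exposure time.

```python
def slowest_handheld_shutter(focal_length_mm, crop_factor=1.0, is_stops=0):
    """Slowest 'safe' handheld shutter speed in seconds, per the 1/mm rule.

    The rule uses the 35 mm equivalent focal length, so the actual focal
    length is first multiplied by the sensor's crop factor. Each stop of
    image stabilization doubles the usable exposure time.
    """
    equivalent_mm = focal_length_mm * crop_factor
    return (1.0 / equivalent_mm) * (2 ** is_stops)

# 125 mm lens on a 35 mm (full-frame) camera: about 1/125 s without IS
print(slowest_handheld_shutter(125))                 # 0.008
# The same lens with 4 stops of stabilization: about 1/8 s
print(slowest_handheld_shutter(125, is_stops=4))     # 0.128
# 50 mm lens on a 2x crop body behaves like a 100 mm lens: 1/100 s
print(slowest_handheld_shutter(50, crop_factor=2))   # 0.01
```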
CHAPTER 5
IMAGE STABILIZATION PRINCIPLES
Image stabilization is used to reduce blurring associated with motion and/or shaking of
the camera during the time the image sensor is exposed to the capturing environment.
However, it does not prevent motion blur caused by movement of the target subject or
extreme movements of the camera itself, only the relatively small shaking of the
camera lens by the user – within a few optical degrees. This camera-user movement
can be characterized by its pan and tilt components, where the angular movements are
known as yaw and pitch, respectively. Camera roll cannot be compensated since
'rolling' the lens doesn't actually change/compensate for the roll motion, and therefore
does not have any effect on the image itself, relative to the image sensor. EIS is a
digital image compensation technique which uses complex algorithms to compare frame
contrast and pixel location for each changing frame. Pixels on the image border provide
the buffer needed for motion compensation. An EIS algorithm calculates the subtle
differences between each frame, and the results are used to interpolate new frames to
reduce the sense of motion. Though the
advantage with this method is the ability to create inexpensive and compact solutions,
the resulting image quality will always be reduced due to image scaling and image
signal postprocessing artifacts, and more power will be required for taking additional
image captures and for the resulting image processing. EIS systems also suffer when at
full electronic zoom (long field-of-view) and under low-light conditions.
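The pan/tilt (yaw/pitch) measurement described in this chapter typically comes from a gyroscope that reports angular rate, not angle, so the stabilizer must integrate the samples over time. The sketch below assumes ideal, noise-free samples at a fixed sample rate; real systems must also filter out the gyro's bias drift.

```python
def integrate_gyro(rates_dps, sample_rate_hz):
    """Integrate gyroscope angular-rate samples (deg/s) into a tilt angle (deg).

    Simple rectangular integration; real OIS/EIS controllers additionally
    remove the gyro's bias drift, e.g. with a high-pass filter.
    """
    dt = 1.0 / sample_rate_hz
    angle = 0.0
    for rate in rates_dps:
        angle += rate * dt
    return angle

# 100 samples of a constant 5 deg/s pitch rate at 1 kHz -> about 0.5 degrees
print(integrate_gyro([5.0] * 100, 1000))
```

The resulting angle is what the actuator loop (lens-shift or sensor-shift) tries to cancel on the pitch and yaw axes.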
CHAPTER 6
OIS (OPTICAL IMAGE STABILIZATION)
OIS is an image stabilization system that enhances mobile photography by physically
moving the lens module or image sensor to counteract any slight camera movement to
avoid blurry photos. OIS specifically adds a gyroscope to sense the change in lens
posture and compensates for the displacement caused by hand shake by physically
adjusting the position of the lens accordingly. OIS does not crop the picture. An optical
image stabilizer (OIS) is a mechanism used in still or video cameras that stabilizes the
recorded image by varying the optical path to the sensor. This technology is
implemented in the lens itself, as distinct from in-body image stabilization (IBIS),
which operates by moving the sensor as the final element in the optical path. The key
element of all optical stabilization systems is that they stabilize the image projected on
the sensor before the sensor converts the image into digital information. IBIS can have
up to 5 axes of movement: X, Y, Roll, Yaw, and Pitch. IBIS has the added advantage of
working with all lenses. Early on, Image Stabilization was an unusual feature in
traditional professional cameras, and it became more common with the arrival of digital
still cameras (DSCs) on the market. These devices took over the digital imaging
system market and helped drive technological innovation in photography. Then
in 2010, the mobile revolution started and the use of smartphones exploded. The
technological evolution encouraged the major phone makers to propose innovative
solutions in hardware design that allowed the integration of miniaturized photographic
modules in mobile phones to take pictures with camera-like resolution. This revolution
has led to the decline of compact DSCs, deposed by camera-equipped phones. Image
Stabilization was a key feature, joining display size, Near Field Communication
(NFC), wireless charging and fingerprint security options, in contributing to the
success of smartphones and enabling the segmentation from high-end models to low-cost
ones. OIS systems reduce image blurring without significantly sacrificing image
quality, especially for low-light and long-range image capture.
Mobile camera modules have followed the same trend after their introduction in
smartphones and handsets.
From a marketing perspective, according to an IC Insights report, sales of stand-alone
digital cameras are projected to decline at an annual average rate of -10.5% in the forecast
period 2012-2017, while revenues for cell phone-camera integrated circuits are expected to
rise by an annual rate of 9.0% in the same period. As a result of the introduction of these
and other innovations, the smartphone market has seen remarkable growth and it is
estimated that shipments will exceed 1 billion units in 2014 for the first time.
6.1 OIS: FEATURES AND BENEFITS
Optical image stabilization prolongs the shutter speed possible for handheld
photography by reducing the likelihood of blurring the image from shake during the
same exposure time. For handheld video recording, regardless of lighting conditions,
optical image stabilization compensates for minor shakes whose appearance magnifies
when watched on a large display such as a television set or computer monitor. The
DSC market has moved towards smaller sizes, lower weight and higher resolutions,
much as mobile camera modules have followed the same trend after their introduction
in smartphones and handsets. A big drawback to this development has been the impact
of blurring, caused by involuntary motions, on image quality. In fact, lighter cameras
produce greater blurring.
In addition, the introduction of larger LCD displays has encouraged users to take pictures
with outstretched arms, further increasing blurring. The introduction of Image Stabilization
in several mobile platforms has been significant added value for photography lovers and
especially for younger users, who replaced their traditional and bulky cameras with brand-
new smartphones or had cameras available to record memories simply because those
cameras were embedded in the mobile platform they were already carrying. Image
Stabilization in smartphones enables pictures and video with quality comparable to digital
still cameras in so many operating conditions. The request for Image Stabilization is
increasing both in compact DSCs and in smartphones.
Picture blurring caused by hand jitter, a biological phenomenon occurring at a
frequency below 20Hz, is even more evident in higher resolution cameras. In fact, in
smaller resolution cameras the blurring may not exceed one pixel, which is negligible;
but in higher resolution ones it may impact many pixels, thus degrading image quality
significantly.
Optical Image Stabilization technology is an effective solution for minimizing the effects
of involuntary camera shake or vibration. It senses the vibration on the hosting system and
compensates for these camera movements to reduce hand-jitter effects. So, OIS captures
sharp pictures at shutter speeds three, four, or five times slower than otherwise possible.
The increase of the shutter opening time permits more brilliant and clear pictures in indoor
or low-light conditions. The time during which the shutter remains open regulates the
amount of light captured by the image sensor. Of course, the longer the exposure time, the
greater the potential for hand shaking to cause blurring. In the case of smartphones
cameras, because of their small lens apertures and the material used to make unbreakable
lenses, the amount of light that can enter and strike the image sensor is significantly less
than that of a DSC. This requires a higher exposure time, with the obvious drawback of
increasing the effect due to shaking hands.
Sample (figure): OIS OFF vs. OIS ON in a 5-Mpixel camera.
Besides the optical requirements, two main challenges in the development of OIS in
smartphones are size and cost.
The additional hardware required to implement OIS in camera-module technology
increases both the total cost of the camera and the camera’s size. This runs counter to
the constant market demand for smaller and thinner devices.
6.2 OIS TYPES
Lens-based
In Nikon and Canon's implementation, it works by using a floating lens element that is
moved orthogonally to the optical axis of the lens using electromagnets. Vibration is
detected using two piezoelectric angular velocity sensors (often called gyroscopic
sensors), one to detect horizontal movement and the other to detect vertical movement.
As a result, this kind of image stabilizer corrects only for pitch and yaw axis rotations,
and cannot correct for rotation around the optical axis. Some lenses have a secondary
mode that counteracts vertical-only camera shake. This mode is useful when using a
panning technique. Some such lenses activate it automatically; others use a switch on
the lens.
To compensate for camera shake when shooting video while walking, Panasonic
introduced Power Hybrid OIS+ with five-axis correction: axis rotation, horizontal
rotation, vertical rotation, and horizontal and vertical motion.
Some Nikon VR-enabled lenses offer an "active" mode for shooting from a moving
vehicle, such as a car or boat, which is supposed to correct for larger shakes than the
"normal" mode. However, active mode used for normal shooting can produce poorer
results than normal mode. This is because active mode is optimized for reducing higher
angular velocity movements (typically when shooting from a heavily moving platform
using faster shutter speeds), where normal mode tries to reduce lower angular velocity
movements over a larger amplitude and timeframe.
Most manufacturers suggest that the IS feature of a lens be turned off when the lens is
mounted on a tripod as it can cause erratic results and is generally unnecessary. Many
modern image stabilization lenses (notably Canon's more recent IS lenses) are able to auto-
detect that they are tripod-mounted (as a result of extremely low vibration readings) and
disable IS automatically to prevent this and any consequent image quality reduction. The
system also draws battery power, so deactivating it when not needed extends the battery
charge. A disadvantage of lens-based image stabilization is cost. Each lens requires its own
image stabilization system. Also, not every lens is available in an image-stabilized version.
This is often the case for fast primes and wide-angle lenses. However, the fastest lens with
image stabilisation is the Nocticron with a speed of f/1.2. While the most obvious
advantage for image stabilization lies with longer focal lengths, even normal and wide-
angle lenses benefit from it in low-light applications.
Lens-based stabilization also has advantages over in-body stabilization. In low-light or
low-contrast situations, the autofocus system (which has no stabilized sensors) is able
to work more accurately when the image coming from the lens is already stabilized.
In cameras with optical viewfinders, the image seen by the
photographer through the stabilized lens (as opposed to in-body stabilization) reveals
more detail because of its stability, and it also makes correct framing easier. This is
especially the case with longer telephoto lenses. This advantage does not occur on
compact system cameras, because the sensor output to the screen or electronic
viewfinder would be stabilized.
Sensor-Shift
The sensor capturing the image can be moved in such a way as to counteract the motion
of the camera, a technology often referred to as mechanical image stabilization. When
the camera rotates, causing angular error, gyroscopes encode information to the
actuator that moves the sensor.
The sensor is moved to maintain the projection of the image onto the image plane,
which is a function of the focal length of the lens being used.
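The dependence on focal length can be made concrete. Under a small-angle pinhole model (an assumption; real lens designs differ), a camera rotation of θ moves the projected image by roughly f·tan(θ), which is the displacement the sensor actuator must reproduce:

```python
import math

def sensor_shift_mm(focal_length_mm, tilt_deg):
    """Sensor travel needed to cancel a camera rotation of tilt_deg.

    Pinhole approximation: the image translates by f * tan(theta), so the
    sensor must move the same distance in the opposite direction.
    """
    return focal_length_mm * math.tan(math.radians(tilt_deg))

# A 0.5 degree shake behind a 26 mm lens shifts the image ~0.23 mm, while the
# same shake behind a 400 mm telephoto needs ~3.5 mm of travel, which is why
# sensor-shift stabilization struggles with very long lenses.
print(round(sensor_shift_mm(26, 0.5), 3))
print(round(sensor_shift_mm(400, 0.5), 2))
```

The same geometry explains the later observation that the required speed and range of sensor movement grow with focal length.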
Modern cameras can automatically acquire focal length information from modern
lenses made for that camera. Minolta and Konica Minolta used a technique called Anti-
Shake (AS) now marketed as Steady Shot (SS) in the Sony α line and Shake Reduction
(SR) in the Pentax K-series and Q series cameras, which relies on a very precise
angular rate sensor to detect camera motion.
Olympus introduced image stabilization with their E-510 D-SLR body, employing a
system built around their Supersonic Wave Drive. Other manufacturers use digital
signal processors (DSP) to analyse the image on the fly and then move the sensor
appropriately. Sensor shifting is also used in some cameras by Fujifilm, Samsung,
Casio Exilim and Ricoh Caplio.
The advantage with moving the image sensor, instead of the lens, is that the image can
be stabilized even on lenses made without stabilization. This may allow the
stabilization to work with many otherwise un-stabilized lenses, and reduces the weight
and complexity of the lenses. Further, when sensor-based image stabilization
technology improves, it requires replacing only the camera to take advantage of the
improvements, which is typically far less expensive than replacing all existing lenses if
relying on lens-based image stabilization. Some sensor-based image stabilization
implementations are capable of correcting camera roll rotation, a motion that is easily
excited by pressing the shutter button. No lens-based system can address this potential
source of image blur. A by-product of available "roll" compensation is that the camera
can automatically correct for tilted horizons in the optical domain, provided it is
equipped with an electronic spirit level, such as the Pentax K-7/K-5 cameras.
One of the primary disadvantages of moving the image sensor itself is that the image
projected to the viewfinder is not stabilized. However, this is not an issue on cameras
that use an electronic viewfinder (EVF), since the image projected on that viewfinder
is taken from the image sensor itself.
Similarly, the image projected to a phase-detection autofocus system that is not part of
the image sensor, if used, is not stabilized.
Some, but not all, camera-bodies capable of in-body stabilization can be pre-set
manually to a given focal length. Their stabilization system corrects as if that focal
length lens is attached, so the camera can stabilize older lenses, and lenses from other
makers. This isn't viable with zoom lenses, because their focal length is variable. Some
adapters communicate focal length information from the maker of one lens to the body
of another maker.
Some lenses that do not report their focal length can be retrofitted with a chip which
reports a pre-programmed focal length to the camera body. Sometimes, none of these
techniques work, and image-stabilization cannot be used with such lenses.
In-body image stabilization requires the lens to have a larger output image circle because
the sensor is moved during exposure and thus uses a larger part of the image. Compared to
lens movements in optical image stabilization systems the sensor movements are quite
large, so the effectiveness is limited by the maximum range of sensor movement, where a
typical modern optically stabilized lens has greater freedom. Both the speed and range of
the required sensor movement increase with the focal length of the lens being used, making
sensor-shift technology less suited for very long telephoto lenses, especially when using
slower shutter speeds, because the available motion range of the sensor quickly becomes
insufficient to cope with the increasing image displacement.
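The scaling described above can be made concrete with a small-angle approximation: for a rotation θ, the image shifts on the sensor by roughly d ≈ f·θ. The sketch below uses purely illustrative numbers, and shows why a shake that is easily absorbed at wide angle exceeds a typical sensor-shift travel of a few hundred micrometres at telephoto focal lengths.

```python
import math

def sensor_displacement_um(focal_length_mm: float, shake_deg: float) -> float:
    """Approximate image displacement on the sensor (in micrometres)
    produced by a small camera rotation: d ~= f * theta (small-angle)."""
    theta_rad = math.radians(shake_deg)
    return focal_length_mm * 1000.0 * theta_rad

# The same 0.1-degree shake that a sensor-shift system absorbs easily
# on a wide-angle lens exhausts a typical travel range on a telephoto.
wide = sensor_displacement_um(24, 0.1)    # ~42 um at 24 mm
tele = sensor_displacement_um(400, 0.1)   # ~698 um at 400 mm
print(round(wide), round(tele))
```

The required correction speed scales the same way, which is why long telephoto lenses favour in-lens stabilization.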
Dual
Starting with the Panasonic Lumix DMC-GX8, announced in July 2015, and
subsequently in the Panasonic Lumix DC-GH5, Panasonic, who formerly only
equipped lens-based stabilization in its interchangeable lens camera system (of the
Micro Four Thirds standard), introduced sensor-shift stabilization that works in concert
with the existing lens-based system ("Dual IS").
In the meantime (2016), Olympus also offered two lenses with image stabilization that
can be synchronized with the in-built image stabilization system of the image sensors
of Olympus' Micro Four Thirds cameras ("Sync IS"). With this technology a gain of 6.5
f-stops can be achieved without blurred images.
This is limited by the rotational movement of the surface of the Earth, which fools the
motion sensors of the camera. Therefore, depending on the angle of view, the maximum
exposure time should not exceed 1⁄3 second for long telephoto shots (with a 35 mm
equivalent focal length of 800 millimetres) and a little more than ten seconds for wide
angle shots (with a 35 mm equivalent focal length of 24 millimetres), if the movement of
the Earth is not taken into consideration by the image stabilization process.
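The quoted limits can be reproduced with a back-of-the-envelope calculation. Assuming Earth's sidereal rotation rate of about 7.29×10⁻⁵ rad/s, a worst-case orientation, and a hypothetical blur tolerance of roughly 18 µm on the sensor (a few pixels), the sketch below recovers approximately 1/3 second at 800 mm and a little over ten seconds at 24 mm.

```python
EARTH_RATE = 7.292e-5  # rad/s, Earth's sidereal rotation rate

def max_exposure_s(focal_length_mm: float, blur_tolerance_um: float = 18.0) -> float:
    """Longest exposure before Earth's rotation alone smears the image
    by more than the tolerated blur: t = d_tol / (f * omega)."""
    blur_rate_um_s = focal_length_mm * 1000.0 * EARTH_RATE  # um of drift per second
    return blur_tolerance_um / blur_rate_um_s

print(round(max_exposure_s(800), 2))  # ~0.31 s for an 800 mm telephoto
print(round(max_exposure_s(24), 1))   # ~10.3 s for a 24 mm wide angle
```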
In 2015, the Sony E camera system also allowed combining image stabilization systems
of lenses and camera bodies, but without synchronizing the same degrees of freedom.
In this case, only the independent compensation degrees of the in-built image sensor
stabilization are activated to support lens stabilisation.
Canon and Nikon now have full-frame mirrorless bodies that have IBIS and also
support each company's lens-based stabilization. Canon's first two such bodies, the
EOS R and RP, do not have IBIS, but the feature was added for the more recent R5 and
R6. All of Nikon's full-frame Z-mount bodies—the Z 6, Z 7, and the Mark II versions
of both— have IBIS. However, its APS-C Z 50 lacks IBIS.
6.3 OIS BEHAVIOUR
OIS is a mechanical technique used in imaging devices to stabilize the recorded image
by controlling the optical path to the image sensor. The two main methods of OIS in
compact camera modules are implemented by either moving the position of the lens
(lens shift) or the module itself (module tilt). Camera movements by the user can cause
misalignment of the optical path between the focusing lens and the centre of the image
sensor. In an OIS system using the lens shift method, only the lens within the camera
module is controlled and used to realign the optical path to the centre of the image
sensor. In contrast, the module tilt method controls the movement of the entire module,
including the fixed lens and image sensor. Module tilt allows for a greater range of
movement compensation by the OIS system, with the largest trade-off being increased
module height.
Minimal image distortion is also achieved with module tilt due to the fixed focal length
between the lens and image sensor. Overall, in comparison to EIS, OIS systems reduce
image blurring without significantly sacrificing image quality.
However, because OIS requires actuators and power-driving circuitry, whereas EIS needs
no additional hardware, OIS modules tend to be larger and as a result are more
expensive to implement.
Lens shift method: the lens is moved.
Module tilt method: the entire module is moved.
OIS Module Components
An OIS system relies on a complete module of sensing, compensation, and control
components to accurately correct for unwanted camera movement. This movement or
vibration is characterized in the X/Y plane, with yaw/pan and pitch/tilt movements
detected by different types of isolated sensors. The lens shift method uses Hall sensors
for lens movement detection, while the module tilt method uses photo reflectors to
detect module movement.
Both methods require a gyroscope in order to detect human movement. OIS controllers
use gyroscope data within a lens target positioning circuit to predict where the lens
needs to return in order to compensate for the user's natural movement.
With lens shift, Hall sensors are used to detect real-time X/Y locations of the lens after
taking into consideration actuator mechanical variances and the influence of gravity.
The controller uses a separate internal servo system that combines the lens-positioning
data from the Hall sensors with the target lens position calculated from the gyroscope
data to determine the exact driving power needed for the actuator to reposition the lens. With
module tilt, the process is similar, but the module’s location is measured and
repositioned instead of just the lens. With both methods, the new lens position realigns
the optical path to the centre of the image sensor.
OIS: the working principle and the specification
In contrast to DIS, OIS doesn’t require post-processing algorithms on the captured
frames. OIS controls the optical path between the target and the image sensor by
moving mechanical parts of the camera itself: so, even if the camera shakes, the OIS
ensures that light arriving at the image sensor does not change trajectory, since we can
assume any pixel colour-value is the composition of a single cone of light.
The basic principle underlying OIS is illustrated here in simplified form, with the
movement effects amplified and represented on a single axis for the sake of clarity.
Suppose we take a picture of a non-moving object in which the shutter remains open for
a time interval equal to ∆t; if no compensation occurs, the involuntary rotation of the
camera spreads the light cone, over a single pixel, across a segment of the sensor.
Clearly, this phenomenon occurs across the whole image sensor, causing a blurred
image. When optical stabilization occurs, by contrast, the lens moves opposite to the
direction of the camera shake and the image is stabilized (i.e., the subject acquired at
t1 coincides with the image acquired at t0).
OIS compensation
As stated above, the hand movements were simplified to explain the compensating
effect of the lens movements on the picture. In reality, hand tremors affect two axes
(below figure), where the light cone generated by a single white LED is distributed in
the two-dimensional space of the image sensor.
Pictures captured at different shutter speeds (1/8, 1/4, 1 and 2 seconds respectively)
without image stabilization clearly demonstrate that the interval ∆t = t1 − t0 captures
more than a single rotation; in fact, the curves are the convolution of the LED light
distribution.
Comparing these curves with segment A-B in Figure 5a makes it clear that hand tremor
is not a predictable effect.
Finally, we need to make clear that OIS can stabilize image blur due to photographer hand
trembling, but it cannot compensate for blur caused by scene motion (below figure).
Blur due to scene motion
Physiological tremor defines the OIS specification
While tremor is not a pathology, it is a common physiological phenomenon present in
all humans. It is an involuntary oscillatory movement of body parts directly generated
by muscles as they contract and relax repetitively during their activities.
By its nature, physiological tremor is not clearly visible to the naked eye and is
independent of age, though it may depend on the capability of the body muscles to
maintain a position against the force of gravity. For example, standing up and holding a
camera with outstretched arms definitely produces physiological tremor in the arm
muscles. The consequences of this phenomenon are visible as the blurring effect in
pictures: this effect is what OIS aims to reduce.
For tremor, as for many physical phenomena, statistical modelling has played an
important role in understanding and identifying its characteristics.
An acquisition campaign was conducted over a representative population to measure
handshake, identifying the spectrum characteristics and defining OIS specifications in
amplitude and frequency.
As a result of the campaign, vibration has been identified as an oscillating signal with:
• An amplitude typically less than 0.5 degrees.
• A frequency compatible with the vital signs of humans, with a spectrum in the
range 0-20 Hz.
The relevant results of the identification test on the tremor signal are shown
graphically as angular rates measured on the X and Y axes.
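These figures can be translated into blur on the sensor. Assuming a hypothetical smartphone camera with a 4 mm focal length and 1.5 µm pixels (not values from this report's measurement campaign), an uncorrected 0.5-degree tremor smears the image by roughly 23 pixels:

```python
import math

def tremor_blur_pixels(amplitude_deg=0.5, focal_mm=4.0, pixel_um=1.5):
    """Blur (in pixels) from an uncorrected rotational tremor:
    sensor displacement d = f * theta, divided by the pixel pitch."""
    d_um = focal_mm * 1000.0 * math.radians(amplitude_deg)
    return d_um / pixel_um

print(round(tremor_blur_pixels()))  # ~23 pixels: clearly visible blur
```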
As described above, the technology used to build the camera module is a fundamental
factor for OIS implementation. OIS increases the complexity of the camera module,
introducing new fixed and moving parts and increasing its dimensions and cost. An OIS
module contains actuators able to move the lens, and sensors that track its position.
The technology characterizes OIS-enabled modules both for the driving method and for
position data sensors, greatly influencing the performance of the OIS system. The
actuator for mobile phone cameras may be built in different technologies as adaptive
liquid lens (LL), shape memory alloy (SMA), or a piezo-electric motor. Today, the
most widespread actuators are based on the Voice Coil Motor (VCM).
Voice coil actuation exploits the interaction between a current-carrying coil winding
and the field generated by a permanent magnet; the coil and the magnet, one in front of
the other, are attached to two sides of the camera-module housing can.
When a current is applied to the coil, the interaction between the fixed magnetic field
and the field electrically generated by the coil produces a force that moves the
suspended part by a distance directly proportional to the applied current.
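The proportionality between current and displacement follows from the Lorentz force on the coil, F = B·L·I, balanced at rest by the suspension spring. The sketch below uses purely hypothetical values for the field strength, effective wire length and spring constant:

```python
def vcm_displacement_um(current_a, b_tesla=0.4, wire_len_m=0.5, k_n_per_m=40.0):
    """Steady-state displacement of a voice-coil motor: the Lorentz
    force F = B * L * I is balanced by the spring, so x = F / k."""
    force_n = b_tesla * wire_len_m * current_a
    return force_n / k_n_per_m * 1e6  # metres -> micrometres

# Doubling the drive current doubles the displacement (linear relation).
print(vcm_displacement_um(0.02), vcm_displacement_um(0.04))
```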
Actually, in smartphones camera modules, the lens isn’t always the moving part.
Depending on the architecture used to build the camera modules, there are two main
methods for OIS compensation taking its name from the mechanical structures:
• Barrel Shift (also called Lens Shift) where the image sensor is fixed to the
bottom of the camera case and the lenses move with a translational movement.
• Camera Tilt where the image sensor is integrated in the same body with the
lenses, and both move angularly to compensate for involuntary shaking.
Another important constructive aspect of the camera module is represented by the
position sensors, which are fundamental to detecting the lens movements. These
sensors can be placed inside the module in either of two approaches to retrieve the
position information:
• using Hall sensors, an approach mainly suitable for the Barrel-Shift architecture;
• using photo sensors, appropriate for the Camera-Tilt architecture.
As previously stated, the dimensions of an OIS camera module depend on its structure
and its technology and may be slightly bulkier than those of fixed-lens camera
modules. On the other hand, integration in smartphones or handsets forces designers to
reduce the size of the entire OIS system (camera module plus electronics), which
should cover an area of approximately 100 mm2. A common trick is usually employed
to integrate the OIS system and save space on smartphone platforms.
It is to place both the camera module and the OIS circuitry on the same flexible PCB,
which is then folded to arrange the electronics alongside the camera module.
Another important consideration related to the placement of the OIS system in the
mobile is the identification of the reference system that defines the orientation of
movements of the entire platform: this reference is determined by the gyroscope.
The gyroscope is suited to measure the hand jitter better than the accelerometer because
the blur effect caused by linear translations of camera is negligible in comparison with
that caused by the component of angular rotation: this means that human jitter can best
be measured by observing angular displacements. The gyro measures angular rates
along its reference axes; these angular movements are known as pitch, yaw and roll.
Roll refers to rotations around the longitudinal axis (Z), pitch to rotational
movements about the lateral axis (X), and yaw to rotations around the vertical axis (Y).
If the gyroscope is integral to the camera module, it detects and measures the same
movements of the camera.
The module architectures described so far are capable of correcting the effects of
camera shaking in two dimensions, pitch and yaw: in these two presented topologies
it’s not possible to compensate for any roll motion because no translation or rotation of
the lens along the axis perpendicular to the image sensor can contribute to correct it. An
action able to compensate for roll can be made by the image sensor itself only if it is
able to move freely inside the camera module. Although the camera-tilt architecture
guarantees higher performance, its construction complexity has limited its adoption in
commercial camera modules. Therefore, from this point, we will refer to a generic
VCM-based Barrel-Shift architecture, built using Hall Sensors to manage the Lens-
Shift compensation.
6.5 OIS ARCHITECTURE DESCRIPTION
The electronic circuitry implementing the optical image stabilization, partially
described in the previous section, is composed of four main components:
− A Gyroscope, able to sense the movements or vibrations inflicted on the system;
− Hall Sensors, able to sense the lens movements, from within the camera module
(as depicted in Figure 10a).
− A Driver that performs two functions: it pilots the camera module into the right
position, as calculated by a control algorithm, while, on the other hand, it retrieves
the information on the camera-module position from the Hall sensors mounted
inside it;
− A Microcontroller that executes the control algorithm to correct for camera
displacements.
This architecture has a notable benefit: it permits the system to operate independently
of the hosting mobile platform, autonomously compensating the camera module and
thus performing OIS.
OIS Control-Loop block diagram, where “θyaw” and “θpitch” respectively indicate yaw and pitch angles.
According to the Control-Loop block diagram (Figure 14), the angular rates detected by
the gyroscope along the two main axes (pitch and yaw) are integrated to obtain the
relative angular displacements. The current position of the camera module is obtained
through its Hall sensors; the two values are compared to retrieve the error angle, which
is processed by the control algorithm of the on-board microcontroller to set the VCM
actuators' new positions. The driver then moves the camera so as to compensate for the
involuntary jitter.
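The loop just described can be sketched for a single axis. This is a deliberately simplified model with hypothetical values throughout: a proportional-only controller, an ideal actuator that reaches the commanded position instantly, and none of the filtering or calibration a real implementation needs.

```python
import math

def ois_step(gyro_rate_dps, dt_s, state, focal_mm=4.0, kp=0.8):
    """One iteration of a simplified single-axis OIS loop: integrate the
    gyro rate to an angle, convert it to the lens displacement that
    cancels it, compare with the Hall-sensor reading, and emit a
    proportional drive command."""
    state["angle_deg"] += gyro_rate_dps * dt_s             # gyro integration
    target_um = -focal_mm * 1000.0 * math.radians(state["angle_deg"])
    error_um = target_um - state["hall_um"]                # position error
    drive = kp * error_um                                  # P-control output
    state["hall_um"] += drive                              # idealized actuator
    return drive

state = {"angle_deg": 0.0, "hall_um": 0.0}
for _ in range(20):            # constant 10 dps shake, 1 kHz loop rate
    ois_step(10.0, 0.001, state)
```

After a few iterations the lens position tracks the moving target to within a fraction of a micrometre; a real controller adds integral and derivative terms to remove the residual lag.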
The control algorithm should also manage the pre-processing of the acquired signals
and the Hall-sensor calibration that compensates for temperature drift.
Camera-module makers invest a lot of effort in designing modules with no cross-
correlation between the axis movements. In other words, the VCM actuator along
the X axis shouldn't generate movements on the Y axis, and vice versa.
The correct placement of the Hall Sensors is equally important, since they have to
detect only the movement along the single axis where they are mounted. In case there is
cross-correlation on the axes’ movements, the control algorithm must take this into
account and introduce another input in the single-axis controller related to the reading
of the other axis’ Hall sensor.
Gyroscope
As stated, the gyroscope is the most important element in the control chain because it is
the reference used to retrieve the information on angular displacement. Both the
electrical and mechanical characteristics of the gyro should mark out the control
synthesis strategy and, in general, the overall OIS performance; for example, gyroscope
accuracy is a key feature that defines the performance of the entire system; it is
fundamental for controlling precision.
Other influential factors in the choice of a gyroscope for OIS are:
− phase delay: must be reduced to the minimum to avoid inserting a delay in control
loop timing;
− zero-rate offset: must be near zero in order to reduce the integration error;
− output data rate: must be higher than double the frequency of the system to be
controlled (oversampling);
− measurement range: up to ±250dps must be guaranteed;
− rate noise density: this parameter must be very low to maximize the signal accuracy;
− power consumption: must be extremely low both in normal mode and stand-by
mode to suit a mobile application.
The gyroscope can be directly integrated inside the housing can of the camera module:
this guarantees the most faithful reference for detecting the same displacements applied
to the camera module. For this reason, one of the most important considerations for a
gyroscope suitable for OIS is its form factor, which must allow it to be easily integrated
in the thinnest camera modules.
Finally, it’s critical that the gyroscope uses a robust and quick communication
peripheral to send angular data to the application processor, avoiding inserting noise or
further delay in the control-loop execution timing. For this reason, an SPI peripheral is
preferable to an I2C one for transferring data up to 6Mbit/sec to the MCU block.
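The preference for SPI is easy to quantify. Assuming a hypothetical framing of roughly 48 bits per gyro read (two 16-bit axes plus addressing overhead — the exact frame layout depends on the device), the per-sample transfer time differs by more than an order of magnitude:

```python
def sample_time_us(bits_per_sample, bus_hz):
    """Time to move one gyro sample over a serial bus (ideal clocking,
    ignoring protocol overhead such as start/stop conditions)."""
    return bits_per_sample / bus_hz * 1e6

spi = sample_time_us(48, 6_000_000)   # SPI at 6 Mbit/s: ~8 us
i2c = sample_time_us(48, 400_000)     # I2C fast mode:   ~120 us
print(round(spi), round(i2c))
```

At a 1 kHz control-loop rate, the I2C transfer alone would consume over a tenth of each loop period before any computation begins.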
Voice Coil Motor Driver and Hall Sensors position acquisition
Specific drivers are usually used for piloting the lens movement of the VCM camera
module and acquiring its relative position through the Hall sensors embedded in the
camera. These mixed-signal devices may be divided into two stages, usually integrated
in a single IC: the driving stage and the acquiring one. The former houses two full H-
bridges (one per axis) and two Digital-to-Analog Converters (DACs), suitable for
driving the VCM actuators (which behave like inductances) and producing a
displacement of the lens. The latter is equipped with two Analog-to-Digital Converters
(ADCs) able to read the Hall sensors and so retrieve the lens' exact position.
On the driving stage, two operational modes are designed for driving VCM-based
camera module in the most accurate and efficient way:
− PWM-mode driving.
− Linear-mode driving.
Depending on the camera-module technology, the two modes contribute in different ways
to the performance of an OIS system: the first mode aims to manage power
efficiency, while the second reduces the noise of the driven part. Concerning the first
mode, the selection of the PWM operating frequency is fundamental.
In fact, when the PWM frequency is in the range of audio frequencies, the mechanical
parts of the camera module emit an audible whistle. Obviously, this aspect can easily
be overcome if the second driving mode is chosen, although the power contribution of
this driving method can have a greater impact on the power budget.
Another important feature characterizing the driving stage is anti-ringing
compensation. On the camera module, the lens is directly connected to a mechanical
support anchored by springs to the fixed chassis.
These springs may cause mechanical oscillation when the VCM actuators operate and
may be read by the Hall sensor as a dampened ringing signal. The anti-ringing
compensation reduces the settling time for every lens position adjustment and avoids
any oscillation on the sudden changes of position.
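The benefit of anti-ringing compensation can be illustrated with the standard second-order decay envelope exp(−ζω₀t). With hypothetical suspension values (a 100 Hz resonance is assumed, not a figure from this report), a lightly damped lens rings for over a hundred milliseconds, while a well-damped response settles in a few:

```python
import math

def settle_time_ms(f0_hz=100.0, zeta=0.05, tol=0.02):
    """Time for a spring-suspended lens to ring down to within `tol`
    (2%) of its final position after a step, using the second-order
    decay envelope exp(-zeta * w0 * t)."""
    w0 = 2 * math.pi * f0_hz
    return -math.log(tol) / (zeta * w0) * 1000.0

print(round(settle_time_ms(zeta=0.05)))  # lightly damped springs: ~125 ms
print(round(settle_time_ms(zeta=0.7)))   # anti-ringing drive emulates
                                         # strong damping: ~9 ms
```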
The acquisition stage is equally important in terms of accuracy, since it manages the
data acquired from the Hall sensors and provides the position of the camera-module
lens. Obviously, the Hall sensors must be accurate enough to detect the slightest
movement of the lens.
For the same reasons described for the gyroscope, the driver must be equipped with an
SPI peripheral to be aligned with the common communication protocol of the other
main blocks of the OIS system.
Microcontroller
The microcontroller (MCU) operates independently as the application processor,
cyclically executing several operations that constitute the routine of the firmware
application:
− It manages the communication with the two devices (gyroscope and driver), for
retrieving their data;
− it prepares and elaborates all the incoming information, adapting it to the same
measurement unit;
− it executes the main algorithm for controlling the entire system;
− it tells the driver the new reference condition to be actuated on the camera module.
All these operations, which constitute the main tasks of the control algorithm, impose
two important technical requirements on the MCU: computational power and
communication capabilities.
The MCU has to execute the routine in the shortest possible time, while managing 32-bit
floating-point variables if necessary. On the other hand, a higher computing capacity
capable of handling floating point may be too expensive for the application, so a 32-bit
ARM® Cortex™-based MCU could be a good choice.
This technology also offers other characteristics that fit perfectly with OIS requirements,
like small silicon area, low power consumption and minimal code footprint.
From the communication point of view, the MCU has to guarantee a stable link with
the gyro and driver as normal operation, but another communication channel with the
mobile baseband is also conceivable. So, in addition to the SPI peripherals necessary to
ensure communication with the gyro and the driver as described above, an additional
serial peripheral, also supporting a specific communication protocol, may be used to
enable communication with the mobile baseband.
6.6 OIS SYSTEM CHARACTERISTICS
The most important specifications that define the OIS system are:
− the accuracy of the controlled actuation (i.e. driver).
− the controller resolution and the loop frequency.
− the precision of the sensing part (i.e. gyroscope).
Every single component must meet the specifications prescribed by the application to
contribute to the correct operation of the system. To explain how the features of the
individual blocks define and affect OIS performance, let's consider a camera module
with an 8 Mpixel CMOS sensor, where each pixel is around 1.5 µm, able to move up to
±200 µm along its X and Y axes in a barrel-shift mechanical topology.
The pixel size provides a first level indication for the controller definition. Indeed, to
guarantee excellent precision, control accuracy equal to at least one tenth of a pixel
(0.15µm) is the target; this represents the smallest shift that the controller should be
able to correct. Starting from this data, we can define the resolution of the controlled
actuation (i.e. driver resolution).
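From these numbers the minimum driver resolution follows directly: the full ±200 µm travel divided into 0.15 µm steps requires about 2,667 distinguishable positions, hence at least a 12-bit DAC.

```python
import math

def driver_bits(range_um=400.0, step_um=0.15):
    """Minimum DAC resolution so that one LSB moves the lens by no
    more than the target accuracy over the full +/-200 um travel."""
    return math.ceil(math.log2(range_um / step_um))

print(driver_bits())  # 12 bits (~2667 distinguishable positions needed)
```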
OIS performance evaluation
To evaluate OIS performance, the same methods used for lens-performance appraisal in
DSCs can be applied, since they are based on an estimate of the image taken.
However, whereas camera-performance evaluation compares a single image against
the target, OIS evaluation compares two images of the target, taken with OIS OFF and
with OIS ON.
One of the most widely used methods for lens-performance evaluation is based on the
Modulation Transfer Function (MTF).
In image stabilization, it’s essential to estimate the difference between the stabilized
image and the un-stabilized one. Image sharpness is the fundamental quality factor for
defining OIS performance because it shows how much OIS reduces image blurring.
To better explain how the MTF is measured, it’s useful to describe the factors that
characterize perceived sharpness, taking into consideration the black and white bands
called Pattern, where black is represented as 0 and white as 255.
If an OIS system is equipped with a camera module whose high-quality lens minimizes
the optical distortion of the captured image, the resolution is closely connected to the
choice of the CMOS sensor, whereas the acutance depends on the suppression rate the
controller applies to the jitter effect. The acutance is therefore the main characteristic of
sharpness that can be directly managed by the control, and the focus of OIS evaluation
analysis.
MTF is mainly related to the acutance estimation. As the stripes get closer together, the
visible difference between the bars decreases and the edges start to blur into each other:
the plot changes from a periodic signal to the average value of 127. Four zones at four
different frequencies are distinguishable; the MTF is close to 1 (or 100%) at low
frequencies (zone 1), while it drops to 20% when the bars are almost indistinguishable
(zone 4). The higher the MTF, the sharper the image. In OIS-performance evaluation,
the higher the MTF, the lower the blurring, and the better the image stabilization.
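The per-frequency quantity behind these percentages is the contrast of the captured bars, (Imax − Imin)/(Imax + Imin), computed on the 0-255 pattern described above. A minimal sketch:

```python
def mtf(samples):
    """Michelson contrast of a captured bar pattern, the per-frequency
    value an MTF curve plots: (Imax - Imin) / (Imax + Imin)."""
    i_max, i_min = max(samples), min(samples)
    return (i_max - i_min) / (i_max + i_min)

sharp = [0, 255, 0, 255]          # fully resolved bars -> MTF = 1.0
blurred = [100, 154, 100, 154]    # edges smeared toward the 127 average
print(mtf(sharp), round(mtf(blurred), 2))
```

Comparing this value between the OIS-OFF and OIS-ON captures of the same pattern gives a direct figure of merit for the stabilization.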
CHAPTER 7
EIS (ELECTRONIC IMAGE STABILIZATION)
Electronic Image Stabilization (EIS) is a highly effective method of compensating for
hand jitter that manifests itself as distracting video shake during playback. EIS relies on
an accurate motion sensor for tracking the source of jitter, which may be hand shake or
vehicle motion, for example. The motion information is integrated during the current
video frame and used to compensate for the jitter by cropping the viewable image from
the stream of video frames through the imaging pipeline.
A key factor to successful compensation in an open OS system such as Android is
consistent alignment of motion information to its corresponding video frame, as the
motion pipeline and imaging pipeline are independent subsystems within the OS. High
performance motion sensors have a unique frame sync input that allows very accurate
alignment with video frames, essentially synchronizing the two pipelines in the system.
EIS Software IP takes advantage of this accurate synchronization to provide OEMs
with a world-class EIS solution with repeatable and consistent performance regardless
of image sensors used, reducing time to market.
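With a shared time base, the alignment reduces to selecting the motion samples whose timestamps fall inside each frame's exposure window. A minimal sketch, with hypothetical sample rates and frame timings:

```python
def samples_for_frame(motion_samples, frame_start_us, frame_end_us):
    """Select the motion samples whose timestamps fall inside one
    frame's exposure window; with a hardware frame-sync input the two
    pipelines share a clock, so a plain timestamp filter suffices."""
    return [s for s in motion_samples if frame_start_us <= s[0] < frame_end_us]

# (timestamp_us, rate_dps) pairs from a 1 kHz gyro; one 30 fps frame
# spans roughly 33,333 us.
gyro = [(t * 1000, 0.5) for t in range(100)]
frame = samples_for_frame(gyro, 33_333, 66_666)
print(len(frame))  # ~33 samples belong to the second frame
```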
Since EIS does not rely on nor is limited by mechanical compensation of optical methods,
it can track and compensate for large input motion in sports-oriented applications for
phones or action cameras. The large degree of compensation can even be used in drones to
replace an expensive gimbal normally used to keep the camera steady during flight. The
lack of mechanical complexity not only reduces the cost of the system; it also significantly
improves reliability of the end devices in all applications.
EIS minimizes blurring and compensates for device shake, often of a camera. More
technically, the technique addresses pan and tilt, the angular movements corresponding
to yaw and pitch. EIS may be applied to image-stabilized binoculars, still/video
cameras, telescopes and smartphones. It corrects device shake, which normally results
in noticeable image jitter in each frame of video or in each still image.
Camera shaking is particularly tricky with still cameras, especially when using slow
shutter speeds and/or telephoto lenses. Telescopic lens-shake issues in astronomy
accumulate depending on gradual atmospheric variations, which invariably lead to
visibly altered object positions.
EIS cannot prevent blur from subject movement or extreme camera shaking, but it is
engineered to minimize blur from normal handheld lens shaking. Certain cameras and
lenses are built with more aggressive active modes and/or secondary panning features.
The image is stabilized while the image captured by the sensor is being processed. The
image processor traces the data from the image sensor in real time. The data is analysed
using different algorithms that detect and measure any changes in the recorded image.
If an algorithm identifies a shift as a shake, a transformation (usually an image shift) is
initiated to compensate for the shake.
For the EIS system to effectively compensate the shake and shift the image within specific
limits, additional regions must be created at its edges using one of two methods.
7.1 WORKING OF EIS
The first method involves digitally zooming into the central section of the image. The
scene recorded in a single frame is only part of the whole captured image, and its
position within that image can be changed as required. If the content of a specific part
of the image shifts relative to the next frame, the section borders shift with it. As a
result, a pixel that shifted on the image sensor is stored without shifting in the new
coordinate system. This method slightly reduces the camera's field of view, which can
be observed once the system is enabled.
The differences between frames are analysed, usually by dividing the image into zones.
If movement is observed only in certain zones, it is interpreted as movement of the
recorded subject; if the movement covers a significant part of the image, it is identified
as background movement. In the latter case, the algorithm interprets the movement as a
shake and compensates for it as necessary. The system can easily misinterpret large
moving subjects in the frame as shake; in that case, the algorithm needs more time to
determine whether the subject is moving or the camera is shaking.
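As a minimal sketch of the first (digital-zoom) method, the crop window below is repositioned against the detected shake so the scene content stays put in the output. The margin size and function name are illustrative assumptions, not part of any specific implementation.

```python
import numpy as np

def crop_stabilize(frame, shift, margin=16):
    """Cut a central window out of the full sensor frame, moved by the detected
    shake (dy, dx) so the recorded scene stays fixed in the output."""
    dy, dx = shift
    # the compensation cannot exceed the reserved edge region
    dy = int(max(-margin, min(margin, dy)))
    dx = int(max(-margin, min(margin, dx)))
    h, w = frame.shape[:2]
    return frame[margin + dy : h - margin + dy,
                 margin + dx : w - margin + dx]
```

The output is smaller than the sensor frame by twice the margin on each axis, which is exactly the field-of-view loss the text describes.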
The second method is similar but hardware-based. The image sensor has a region at its
edges that is not used for recording images in normal operation. When a shake is
detected, the coordinates of the centre of the region capturing the image are moved
correspondingly.
[Figure: compensation of camera movement. 1: camera movement up; 2: camera movement down; 3: camera movement up and right; 4: image plane; 5: camera movement]
The efficiency of an EIS system depends on the efficiency of its detection methods.
The introduction of high-resolution cameras has brought significant benefits to video
surveillance system design: higher resolutions provide more image detail and allow
analysis of selected regions of interest, an approach often combined with telephoto
lenses. Without stabilization, however, image quality may be reduced and details lost
due to motion and shake.
EIS may require some compromises, but it is cheap to implement. Electronic image
stabilization decreases the resolution of the obtained image as a by-product of the
algorithm, and contrast, sharpness, and field of view are often affected as well. The
degree to which image quality deteriorates depends on the amplitude of the shakes the
EIS system compensates; with relatively large focal lengths and in typical conditions,
it should not be noticeable to an average user. The advantages of the electronic system
are its high speed and, in its most basic version, the absence of mechanical parts; the
system does not affect the weight or dimensions of the device.
In a typical implementation, EIS analyses the image on about 2/3 of the sensor area
and uses the margin at the edges to compensate for camera movements. Most cameras
use this method. Electronic image stabilization analyses each frame for movement and
shifts the frames pixel by pixel to produce a stable video. It can also be done in
post-processing software, such as Adobe Premiere Pro's Warp Stabilizer.
7.2 ALGORITHM FOR JITTER SENSING AND STABILIZATION
Our algorithm for hybrid-camera-based digital video stabilization consists of the
following processes. In the feature point extraction and matching steps, we used the
same algorithms as those used in real-time image mosaicking with an HFR video,
considering the implementation of parallelized gradient-based feature extraction on an
FPGA-based high-speed vision platform.
Feature point detection
The Harris corner feature at time t_k,

λ(x, t_k) = det C(x, t_k) − κ (Tr C(x, t_k))²,

is computed using the following gradient matrix:

C(x, t_k) = Σ_{x'∈N_a(x)} [ I'_x(x', t_k)²            I'_x(x', t_k) I'_y(x', t_k)
                            I'_x(x', t_k) I'_y(x', t_k)   I'_y(x', t_k)²          ],

where N_a(x) is the a×a adjacent area of pixel x = (x, y). t_k = kΔt indicates when the
input image I(x, t) at frame k is captured by a high-speed vision system operating at a
frame cycle time of Δt. I'_x(x, t) and I'_y(x, t) indicate the positive values of the x and y
differentials of the input image I(x, t) at pixel x at time t, I_x(x, t) and I_y(x, t),
respectively. κ is a tunable sensitivity parameter, and values in the range 0.04–0.15 have
been reported as feasible.
The number of feature points in the p×p adjacent area of x is computed as the density
of feature points by thresholding λ(x, t_k) with a threshold λ_T as follows:

P(x, t_k) = Σ_{x'∈N_p(x)} R(x', t_k),

R(x, t_k) = 1 if λ(x, t_k) > λ_T, and 0 otherwise,

where R(x, t) is a map of feature points.

Closely crowded feature points are excluded by counting the number of feature points
in the neighbourhood. The reduced set of feature points is calculated as
R'(t_k) = {x | P(x, t_k) ≤ P_0} by thresholding P(t_k) with a threshold P_0.
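A direct NumPy transcription of the Harris response and the density-based pruning above might look as follows. The helper names are ours, and the simple gradient and box filters stand in for the paper's parallelized FPGA implementation.

```python
import numpy as np

def box_sum(a, k):
    """Sum of `a` over a k x k neighbourhood (zero-padded at the borders)."""
    pad = k // 2
    p = np.pad(a, pad)
    return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
               for i in range(k) for j in range(k))

def harris_response(img, kappa=0.06, a=3):
    """lambda(x) = det C(x) - kappa * (Tr C(x))^2 over an a x a area N_a."""
    Iy, Ix = np.gradient(img.astype(float))
    Sxx = box_sum(Ix * Ix, a)
    Syy = box_sum(Iy * Iy, a)
    Sxy = box_sum(Ix * Iy, a)
    return Sxx * Syy - Sxy ** 2 - kappa * (Sxx + Syy) ** 2

def sparse_features(img, lam_t, p=5, p0=25):
    """R' = {x | P(x) <= P0}: threshold lambda, then drop crowded points."""
    R = (harris_response(img) > lam_t).astype(int)
    P = box_sum(R, p)          # feature density in the p x p area N_p
    return np.argwhere((R == 1) & (P <= p0))
```

On a synthetic white square, the response is zero in flat regions and positive at the corners, which the thresholding then turns into a sparse feature set.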
Feature point matching
To establish correspondence between feature points at the current time t_k and those at
the previous time t_{k−1} = (k−1)Δt, template matching is conducted for all the selected
feature points in an image.

To match the i-th feature point at time t_{k−1} belonging to R'(t_{k−1}), x_i(t_{k−1})
(1 ≤ i ≤ M), to the i'-th feature point at time t_k belonging to R'(t_k), x_{i'}(t_k)
(1 ≤ i' ≤ M), the sum of squared differences is calculated over the window W_m of
m×m pixels as follows:
E(i', i; t_k, t_{k−1}) = Σ_{ξ∈W_m} ‖I(x_{i'}(t_k) + ξ, t_k) − I(x_i(t_{k−1}) + ξ, t_{k−1})‖²,

where i'(i) and i(i') are the index numbers of the feature point at time t_k corresponding
to x_i(t_{k−1}) and of the feature point at time t_{k−1} corresponding to x_{i'}(t_k),
respectively. Corresponding pairs of feature points between times t_k and t_{k−1} are
determined by mutual selection; the flag f_i(t_k) indicates whether or not a feature point
exists at time t_k corresponding to the i-th feature point x_i(t_{k−1}) at time t_{k−1}.
On the assumption that the frame-by-frame image displacement between times t_k and
t_{k−1} is small, the feature point x_i(t_k) at time t_k is matched with a feature point at
time t_{k−1} within the b×b adjacent area of x_i(t_k); the computational load of feature
point matching is thus reduced to the order of O(M) by setting a narrowed search range.
For all the feature points belonging to R'(t_{k−1}) and R'(t_k), the processes described
in the preceding equations are conducted, and M'(t_k) (≤ M) pairs of feature points are
selected for jitter sensing, where M'(t_k) = Σ_{i=1}^{M} f_i(t_k).
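Under the assumptions above (small displacement, SSD matching in an m×m window, b×b search range), the matching step can be sketched as a brute-force search. The function names and default sizes are illustrative, not the paper's.

```python
import numpy as np

def ssd(img_prev, img_cur, p_prev, p_cur, m=7):
    """E(i', i): sum of squared differences over the m x m window W_m."""
    r = m // 2
    (y0, x0), (y1, x1) = p_prev, p_cur
    w0 = img_prev[y0 - r:y0 + r + 1, x0 - r:x0 + r + 1]
    w1 = img_cur[y1 - r:y1 + r + 1, x1 - r:x1 + r + 1]
    return float(np.sum((w1 - w0) ** 2))

def match_feature(img_prev, img_cur, p_prev, b=9, m=7):
    """Search the b x b area around the previous position for the
    minimum-SSD correspondence (the narrowed O(M) search range)."""
    r = b // 2
    y0, x0 = p_prev
    best_e, best_p = np.inf, p_prev
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            e = ssd(img_prev, img_cur, p_prev, (y0 + dy, x0 + dx), m)
            if e < best_e:
                best_e, best_p = e, (y0 + dy, x0 + dx)
    return best_p
```

For a frame pair related by a small translation, the matched position moves with the scene content.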
Jitter sensing
Assuming that the image displacement between times t_k and t_{k−1} is translational
motion, the velocity v(t_k) at time t_k is estimated by averaging the displacements of the
selected pairs of feature points as follows:

v(t_k) = (1 / Δt) · (1 / M'(t_k)) Σ_{i=1}^{M} f_i(t_k) (x̃_i(t_k) − x_i(t_{k−1}))
The jitter displacement d(t_k) at time t_k is computed by accumulating the estimated
velocity v(t_k) as follows:

d(t_k) = d(t_{k−1}) + v(t_k) · Δt,

where the displacement at time t = t_0 = 0 is initially set to d(t_0) = d(0) = 0. The
high-frequency component of the jitter displacement, d_cut(t_k), which is the camera
jitter movement intended for removal, is extracted using the following high-pass IIR
filter:

d_cut(t_k) = IIR(d_k, d_{k−1}, …, d_{k−D}; f_cut),

where D is the order of the IIR filter; it is designed to exclude the low-frequency
component below the cut-off frequency f_cut.
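The accumulation and high-pass steps can be illustrated numerically. A first-order (D = 1) RC-style high-pass is used below as a standard stand-in; the paper's actual filter coefficients are not given here, so this is a sketch, not the authors' design.

```python
import numpy as np

def accumulate_displacement(v, dt):
    """d(t_k) = d(t_{k-1}) + v(t_k) * dt, with d(0) = 0."""
    return np.cumsum(v) * dt

def highpass(d, fcut, dt):
    """First-order high-pass IIR: keeps jitter above fcut, drops slow panning."""
    rc = 1.0 / (2.0 * np.pi * fcut)
    alpha = rc / (rc + dt)
    out = np.zeros_like(d, dtype=float)
    for k in range(1, len(d)):
        out[k] = alpha * (out[k - 1] + d[k] - d[k - 1])
    return out
```

Feeding the filter a slow intentional pan plus a fast sinusoidal shake shows that only the shake component survives, which is exactly the d_cut term used for compensation.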
Composition of jitter-compensated image sequences
When the high-resolution input image I'(x', t'_{k'}) at frame k' is captured at time
t'_{k'} = k'Δt' by a high-resolution camera operating at a frame cycle time Δt', which is
much larger than that of the high-speed vision system, Δt, the stabilized high-resolution
image S(x', t'_{k'}) is composed by displacing I'(x', t'_{k'}) with the high-frequency
component of the jitter displacement d_cut(t̂'_{k'}):

S(x', t'_{k'}) = I'(x' − l · d_cut(t̂'_{k'}), t'_{k'}),

where x' = l·x indicates the image coordinate system of the high-resolution camera,
whose resolution is l times that of the high-speed vision system. t̂'_{k'} is the time when
the high-speed vision system captures its image at the nearest frame after time t'_{k'},
when the high-resolution camera captures its image, as follows:

t̂'_{k'} = ⌈t'_{k'} / Δt⌉ · Δt,

where ⌈a⌉ indicates the smallest integer larger than a.
In this way, video stabilization of high-resolution image sequences can be achieved in
real time by image composition using input sequences based on a high-frequency-
displacement component sensed by executing the high-speed vision system as an HFR
jitter sensor.
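The timing and composition rules above can be sketched directly. Rounding l·d_cut to whole pixels and using a circular shift are simplifications of the actual image displacement; the function names are ours.

```python
import math
import numpy as np

def nearest_hfr_time(t_hr, dt_hfr):
    """t_hat'_k' = ceil(t'_k' / dt) * dt: first HFR frame at/after the capture."""
    return math.ceil(t_hr / dt_hfr) * dt_hfr

def stabilize_frame(frame_hr, d_cut, l):
    """S(x') = I'(x' - l * d_cut): shift the high-res frame by the scaled
    high-frequency jitter displacement sensed by the HFR system."""
    dy = int(round(l * d_cut[0]))
    dx = int(round(l * d_cut[1]))
    return np.roll(frame_hr, (dy, dx), axis=(0, 1))
```

Applying the scaled displacement to a jittered high-resolution frame recovers the steady reference frame.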
7.3 WAVELET-EIS
Generally, a sight system mounted on a vehicle requires a high-performance stabilization
function that removes all translational and rotational motion disturbances under
stationary or non-stationary conditions. To eliminate the disturbances that come from
the vehicle engine, the cooling fan, and irregular terrain, conventional systems adopt
only 2-axis mechanical stabilization using accelerometers, gyros, or inertial sensors,
because 3-axis mechanical stabilization is too bulky and expensive. However, in a
2-axis mechanical stabilization system, the uncompensated roll component of the
unwanted motion remains, which degrades the performance of object detection and
recognition. This shortcoming has led to the use of DIS only for the roll motion, which
is not mechanically compensated in a 2-axis stabilization system. DIS is the process of
generating compensated video sequences in which any and all unwanted motion is
removed from the original input.
Recently, several studies on EIS have been presented, such as global motion estimation
using local motion vectors, motion estimation based on edge pattern matching, fast
motion estimation based on bit-plane and Gray-coded bit-plane matching, and
phase-correlation-based global motion estimation. These approaches can correct only
translational movement and perform poorly when the image fluctuation is dominated
by rotational motion. A scheme has been presented for rotational motion estimation,
but it needs to predetermine prominent features such as the horizon. Chang proposed
digital image translational and rotational motion stabilization using an optical flow
technique; the algorithm estimates the global rotation and translation from the
estimated angular frequency and rotational centre. However, it contains a
time-consuming process, since it finds the rotational centre by searching, which is not
appropriate for real-time applications. Accordingly, we propose a new wavelet-based
DIS algorithm based on rotational motion estimation for the stabilization system.
First, to find the translational motion vector, i.e. the local motion vector, the proposed
algorithm uses fine-to-coarse multi-resolution motion estimation (FtC MRME). Second,
we estimate the rotational motion vector, which represents the centre and angular
frequency, using the local motion components, i.e. the vertical and horizontal motion
vectors of decomposition level 2 in the wavelet domain. The global motion field is
defined by the rotational centre and the angular frequency. The rotational centre is
estimated from the zero-crossing points of the directions of the vertical and horizontal
motion vectors obtained by FtC MRME with block matching (BM). Then, the rotational
angle is computed from a special subset of motion vectors. Finally, motion
compensation is performed by the bilinear interpolation method. The experimental
results show improved stabilization performance compared with the conventional EIS
algorithm.
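The final compensation step (bilinear interpolation) can be sketched as below. Estimation of the rotational centre and angle is omitted, and the resampling is a generic bilinear warp rather than the authors' exact implementation.

```python
import numpy as np

def rotate_compensate(img, angle, center):
    """Resample `img` rotated by `angle` (radians) about `center`,
    using bilinear interpolation of the four neighbouring pixels."""
    h, w = img.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    cy, cx = center
    c, s = np.cos(angle), np.sin(angle)
    # source coordinates for every output pixel
    sy = cy + (yy - cy) * c - (xx - cx) * s
    sx = cx + (yy - cy) * s + (xx - cx) * c
    y0 = np.clip(np.floor(sy).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(sx).astype(int), 0, w - 2)
    fy = np.clip(sy - y0, 0.0, 1.0)
    fx = np.clip(sx - x0, 0.0, 1.0)
    return (img[y0, x0] * (1 - fy) * (1 - fx)
            + img[y0, x0 + 1] * (1 - fy) * fx
            + img[y0 + 1, x0] * fy * (1 - fx)
            + img[y0 + 1, x0 + 1] * fy * fx)
```

Applying the estimated roll angle with the opposite sign undoes the rotational jitter about the estimated centre.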
The Proposed Algorithm
The proposed efficient digital image stabilization algorithm based on the wavelet
transform is composed of two main modules: motion estimation using FtC MRME and
motion compensation.
Motion Estimation
To obtain true translational local motion information, BM, one of several motion
estimation techniques, is easy to implement and is used in conventional image
compression standards. However, this simple method is prone to fall into local minima
instead of the desired global minimum. This problem can be overcome by using an FtC
approach in the wavelet domain, which exploits the multi-resolution property of the
wavelet transform. Here, the initial motion estimation is executed in the pixel domain;
in other words, the motion vectors at the finest level of the wavelet transform are first
estimated using a conventional BM-based motion estimation algorithm, and then scaled
and refined at coarser resolutions. Therefore, accurate motion estimation is achieved
without local minima.
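The BM building block at the finest level can be sketched as a full search. The wavelet-domain scaling and refinement at coarser levels are omitted here, and the block and search-range sizes are illustrative.

```python
import numpy as np

def block_match(prev, cur, block=8, search=4):
    """Full-search block matching: one (dy, dx) motion vector per block,
    minimizing the SSD within the +/- search range."""
    h, w = prev.shape
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            ref = prev[by:by + block, bx:bx + block]
            best_e, best_v = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y and y + block <= h and 0 <= x and x + block <= w:
                        e = np.sum((cur[y:y + block, x:x + block] - ref) ** 2)
                        if e < best_e:
                            best_e, best_v = e, (dy, dx)
            vectors[(by, bx)] = best_v
    return vectors
```

For a purely translated frame pair, every interior block reports the same vector; the zero-crossings of such a vector field are what the rotational-centre estimation above looks for.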
In this technique, because accurate motion estimates are formed at the finest resolution
and then scaled to coarser resolutions during encoding, these estimates better track the
true motion and exhibit low entropy, providing high quality both visually and
quantitatively. After FtC MRME, the rotational centre point is selected.
7.4 EIS WITH HFR VIDEO PROCESSING
In this approach, an HFR vision system simultaneously estimates the apparent
translational motion in image sequences, acting as an HFR jitter sensor, and is
hybridized to assist in compensating high-resolution image sequences. We developed a
hybrid-camera system for real-time high-resolution video stabilization that can
stabilize 2048×2048 images captured at 80 fps by executing frame-by-frame feature
point tracking in real time at 1000 fps on a 512×512 HFR vision system. Its
performance was demonstrated by experimental results for several moving scenes.
CHAPTER 8
AIS (ARTIFICIAL IMAGE STABILIZATION)
AIS is an image stabilization technology that has emerged in the last four or five years,
led by Huawei in April 2017. AIS integrates an AI algorithm into the traditional image
stabilization pipeline, combining the advantages of EIS and OIS (low cost, good
stabilization effect). The AI algorithm estimates the movement of the lens, yielding a
smoother and more accurate pose and thus a better video stabilization effect. The
recently released Honor X10 not only integrates a 40-megapixel RYYB
super-sensitivity camera system but also uses AIS smart image stabilization
technology. The latter not only effectively solves the dark-light shooting problem but
also delivers an excellent video stabilization effect.
To verify the effect of AIS smart image stabilization on the Honor X10, the tester
deliberately shook the phone in hand, simulating the shooting conditions of walking or
running.

It is not difficult to see from the resulting footage that, even with a large 'handshake',
the AIS smart image stabilization of the Honor X10 still accurately tracks the position
of the lens through the AI algorithm and adjusts the video frame in real time, easily
producing a very stable video recording. In addition to AIS smart image stabilization,
the Honor X10 also supports 4K time-lapse photography, 960-frame slow motion, and
smart video editing functions.
The Honor X10's 40MP high-sensitivity lens supports the exclusive Owl 2.0 algorithm.
Beyond dark-light shooting, the night scene mode works like a professional: AIS
handheld Super Night Scene 2.0 records the night city exquisitely, and a 30-second
professional-mode long exposure can capture the beauty of the vast galaxy. The Honor
X10 also adds fast-snapping capability and AI tracking technology; it can
automatically identify the shooting scene and follow focus according to the subject's
movement.
8.1 ADVANTAGE OF AIS OVER OTHER
Everywhere, AI adapts to our habits through continuous learning to extract the best
behaviour from a system. Nowadays, smartphones ship with AI chips that provide
extra processing power, dedicated to adapting to us and continuously improving
results. AI stabilization benefits from this AI chip; this is an added processing
advantage over traditional stabilization (OIS and EIS) setups. Where EIS works on
hardcoded logic with a constant algorithm, AI stabilization continuously performs
minute changes to the algorithm and the image processing, which helps reduce
blurriness at each zoom level.

AIS is essentially a modified EIS system that utilizes the AI chip's processing power.
Both are pieces of software manipulating image data, but AI can do the job better due
to its dynamic behaviour.
8.2 AI STABILIZATION AT DIFFERENT ZOOM LEVELS
If you are standing in one position, OIS provides master-class stabilization. At 1x, the
scene tends to be too wide to notice shakiness with OIS.
AI stabilization at 10x hybrid zoom:
Huawei pushed the zooming department hard, and the same goes for stabilization. At
10x hybrid zoom, AIS works in a fairly standard way: it compensates for hand
movement, predicts future movement, and keeps data on the current movement.
Combining all of these, a 3D space is created in which countermoves are calculated
and the frame is adjusted.
AI stabilization at 50x:
From the sample videos, we have learned that even at 50x, the master-class
performance of AIS combined with OIS gives a vibration-free output. The AI might
learn the vibration types and the stability level of the person holding the camera, and
use that to deliver the desired results. It is possible that shakiness and movement data
are shared from devices to improve AI stabilization capabilities, but this is only a
guess, not confirmed.
8.3 AI STABILIZATION BENEFITS IN NIGHT PHOTOGRAPHY
Even with a Canon 50mm f/1.8 at a 1-second exposure, an image looks blurry when
zoomed to 50%; this is due to the long exposure time. DSLRs are not blessed with the
modern high-performing CPUs and flexible information processing units of
smartphones. The same applies to stabilization, and at night it becomes very difficult
to obtain reference data in low light to perform stabilization.

AI stabilization is so effective that a handheld exposure of up to 6 seconds can still
yield a crisp output, thanks to the data processing and the dedicated AI chips. AIS is
broadly similar to EIS: both are pieces of software manipulating image data, but AI
can do the job better due to its dynamic behaviour, and that is what makes AIS unique.
CHAPTER 9
OIS VS EIS VS AIS
As far as the competition is concerned, OIS is the best way of stabilizing footage, but
on larger bumps OIS cannot fully compensate for the shakes. EIS is the cheap
alternative to OIS; cropping the image is its downside, but if well implemented it can
be great, although some jitter always remains between frames. EIS doesn't require
extra hardware, and the camera module remains lightweight and affordable; the system
is mostly found on low-end smartphones. The implementation depends on software
tuning, and the final results can differ from one manufacturer to another. For instance,
Google used only EIS in the first-generation Pixel, and we all know how good it
turned out on the photography front. OIS is a must-have in modern smartphones. Most
mid-range offerings from Samsung, OnePlus, and Google come with OIS on the main
camera. OIS adds more weight to the camera, and it's more expensive, too. High-end
smartphones like the S22 Ultra have OIS hardware in the main as well as the telephoto
camera for steady results in zoom shots. Vivo, one of the leading OEMs in China, uses
a gimbal camera stabilization system that is complicated, heavy, and expensive, but
more effective than traditional OIS systems. These days, most flagship offerings have
hybrid image stabilization (HIS), which combines OIS and EIS to offer an all-round
solution. AIS tends to resolve the remaining issues of OIS and EIS: jitter-free frames,
less motion blur (adaptive motion blur), and adapting to the situation when performing
stabilization, cropping more at high frame rates on bumpier footage, or limiting CPU
usage on plain scenes while performing minute stabilization. The possibilities of AI
stabilization are countless; we have to see how companies implement this technology
and deliver it to us. This analysis of AI stabilization was centred on the Huawei P
series; in future, more companies should put effort into implementing artificial
intelligence stabilization systems to replace gimbals.
DEPT OF COMPUTER ENGINEERING 48 MPTC, MALAPPURAM
IS IN SMARTPHONES
CHAPTER 10
CONCLUSION
Image stabilization is one of several important things that make a good mobile camera.
Optical image stabilization is a mature technology that minimizes blur caused by
camera shake. For a long time, it has been an essential feature of professional cameras
and digital still cameras; more recently, thanks to mobile technology evolution, it has
rapidly become an essential feature across flagship smartphones. Commonly, all
phones come with EIS, whose performance depends on the brand; OIS comes only in
high-end devices; and AIS comes with OIS enabled in Huawei devices. There may be
further developments in AIS in the coming future. Smartphones are not ergonomically
designed for photography; as a result, they can be tricky to hold, which may lead to
shaky shots. Image stabilization helps with this by countering minor shaky hand
movements.
Beyond the EU: DORA and NIS 2 Directive's Global Impact
 
1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdf
 
Hybridoma Technology ( Production , Purification , and Application )
Hybridoma Technology  ( Production , Purification , and Application  ) Hybridoma Technology  ( Production , Purification , and Application  )
Hybridoma Technology ( Production , Purification , and Application )
 
Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)
 
CARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptxCARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptx
 
Student login on Anyboli platform.helpin
Student login on Anyboli platform.helpinStudent login on Anyboli platform.helpin
Student login on Anyboli platform.helpin
 
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
 
Código Creativo y Arte de Software | Unidad 1
Código Creativo y Arte de Software | Unidad 1Código Creativo y Arte de Software | Unidad 1
Código Creativo y Arte de Software | Unidad 1
 
Web & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdfWeb & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdf
 
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptxINDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
 
Introduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxIntroduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptx
 
Mattingly "AI & Prompt Design: The Basics of Prompt Design"
Mattingly "AI & Prompt Design: The Basics of Prompt Design"Mattingly "AI & Prompt Design: The Basics of Prompt Design"
Mattingly "AI & Prompt Design: The Basics of Prompt Design"
 
Introduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher EducationIntroduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher Education
 
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
 
URLs and Routing in the Odoo 17 Website App
URLs and Routing in the Odoo 17 Website AppURLs and Routing in the Odoo 17 Website App
URLs and Routing in the Odoo 17 Website App
 
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdfBASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdf
 
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxSOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
 

salman report.docx

  • 1. IMAGE STABILIZATION IN SMARTPHONES A SEMINAR REPORT Submitted by Mr. AHMED SALMAN P.S (20131573) To The department of technical education Government of Kerala in partial fulfillment of the requirements for the award of Diploma in COMPUTER ENGINEERING DEPARTMENT OF COMPUTER ENGINEERING MA’DIN POLYTECHNIC COLLEGE MELMURI.P.O, MALAPPURAM JUNE 2022-2023
  • 2. DEPARTMENT OF COMPUTER ENGINEERING MA’DIN POLYTECHNIC COLLEGE MELMURI.P.O, MALAPPURAM CERTIFICATE This is to certify that the SEMINAR REPORT entitled IMAGE STABILIZATION IN SMARTPHONES submitted by AHMED SALMAN (20131573) to the Department of Technical Education, Government of Kerala, towards partial fulfilment of the requirements for the award of the Diploma in Computer Engineering is a bona fide record of the work carried out by him under my supervision and guidance. HEAD OF SECTION: Mrs. ANJALI. C LECTURER IN CHARGE: Mrs. SILPA PRADEEPKUMAR (Department seal)
  • 3. IS IN SMARTPHONES DECLARATION I hereby declare that the Seminar Report entitled IMAGE STABILIZATION IN SMARTPHONES, which is being submitted to MPTC Malappuram under the Technical Education Department in Computer Engineering, is a bona fide report of the work carried out by me. The material contained in this report has not been submitted to any institute or university for the award of any degree. Place: Malappuram Date: 28-10-2022 DEPT OF COMPUTER ENGINEERING 1 MPTC, MALAPPURAM
  • 4. IS IN SMARTPHONES ACKNOWLEDGEMENT I give all honor and praise to GOD, who gave me wisdom and enabled me to complete my seminar on "IMAGE STABILISATION IN SMARTPHONES" successfully. I express my sincere thanks to Mr. ABDUL HAMEED C.P, Principal, MA’DIN Polytechnic College Malappuram, who permitted me to make use of college facilities such as the software labs and Internet to the maximum possible extent. I take this opportunity to express my wholehearted thanks to Mrs. ANJALI. C, Head of the Department of Computer Engineering, for her continuous support and the inspiration given during the study and analysis of the seminar topic. I am deeply indebted to Mrs. RINCY.P and Mrs. SILPA PRADEEPKUMAR, seminar coordinators, for their support; they helped me overcome several constraints during the development of this seminar. Lastly, I express my humble gratitude and thanks to all my teachers and the other faculty members of the Department of Computer Engineering for their sincere and friendly cooperation in completing this seminar. DEPT OF COMPUTER ENGINEERING 2 MPTC, MALAPPURAM
  • 5. IS IN SMARTPHONES ABSTRACT Image Stabilization (IS) technology has long been considered essential to delivering improved image quality in professional cameras. More recently, as a result of advancing technology, IS has become increasingly popular with handheld device makers who want to offer high-end features in their products. Image stabilization significantly improves usable camera shutter speeds and offers precise suppression of camera vibration. Today, three methods are used in smartphones: electronic image stabilization (EIS), optical image stabilization (OIS), and artificial-intelligence image stabilization (AIS). Whether capturing still images or recording video, image stabilization will always be a major factor in reproducing a near-perfect digital replica. While media-capturing devices such as digital cameras, digital camcorders, mobile phones, and tablets have decreased in physical size, their requirements for pixel density and resolution quality have increased drastically over the last decade and will continue to rise. The market shift to compact mobile devices with high-megapixel capturing ability has created a demand for advanced stabilization techniques. DEPT OF COMPUTER ENGINEERING 3 MPTC, MALAPPURAM
  • 6. IS IN SMARTPHONES CONTENTS CHAPTER TITLE PAGE NO 1 INTRODUCTION 5 2 HISTORY 6 3 IMAGE STABILIZATION TECHNIQUES 8 3.1 OPTICAL IMAGE STABILIZATION 8 3.2 ELECTRONIC IMAGE STABILIZATION 8 3.3 ARTIFICIAL IMAGE STABILIZATION 9 4 APPLICATION IN STILL PHOTOGRAPHY 10 5 IMAGE STABILIZATION PRINCIPLES 11 6 OIS (OPTICAL IMAGE STABILIZATION) 12 6.1 OIS: FEATURES AND BENEFITS 13 6.2 OIS TYPES 15 6.3 OIS BEHAVIOR 19 6.4 OIS SYSTEM CHARACTERISTICS 32 7 EIS (ELECTRONIC IMAGE STABILIZATION) 35 7.1 WORKING OF EIS 37 7.3 WAVELET-EIS 42 8 AIS (ARTIFICIAL IMAGE STABILIZATION) 45 8.1 ADVANTAGE OF AIS OVER OTHERS 46 8.2 AI STABILIZATION AT DIFFERENT LEVELS 46 8.3 AI STABILIZATION BENEFITS IN NIGHT 47 9 OIS VS EIS VS AIS 48 10 CONCLUSION 49 11 REFERENCES 50 DEPT OF COMPUTER ENGINEERING 4 MPTC, MALAPPURAM
  • 7. IS IN SMARTPHONES CHAPTER 1 INTRODUCTION Image Stabilization (IS) technology has been considered essential to delivering improved image quality in professional cameras. More recently, as a result of advancing technology, IS has become increasingly popular with handheld device makers who want to offer high-end features in their products. So, manufacturers like ST have worked hard on technologies and methods for image stabilization to significantly improve usable camera shutter speeds and to offer precise suppression of camera vibration. Today, from the technological point of view, Digital Image Stabilization (DIS), Electronic Image Stabilization (EIS) and Optical Image Stabilization (OIS) are the best understood and the easiest to integrate in digital still cameras and smartphones, though they can produce different image-quality results: DIS and EIS require large memory and computational resources on the hosting devices, while OIS acts directly on the lens position itself and minimizes memory and computation demands on the host. As an electro-mechanical method, lens stabilization (optical unit) is the most effective way of removing blurring effects caused by involuntary hand motion or shaking of the camera. Whether capturing still images or recording video, image stabilization will always be a major factor in reproducing a near-perfect digital replica; a lack thereof will result in image distortion through pixel blurring and the creation of unwanted artifacts. While media-capturing devices such as digital cameras, digital camcorders, mobile phones, and tablets have decreased in physical size, their requirements for pixel density and resolution quality have increased drastically over the last decade and will continue to rise. The market shift to compact mobile devices with high-megapixel capturing ability has created a demand for advanced stabilization techniques. 
Two methods, electronic image stabilization (EIS) and optical image stabilization (OIS), are the most common implementations. DEPT OF COMPUTER ENGINEERING 5 MPTC, MALAPPURAM
  • 8. IS IN SMARTPHONES CHAPTER 2 HISTORY Dr. Oshima invented a basic technology for image stabilization in 1983, and five years later successfully commercialized the world’s first video camera to feature the technology. It was in 1988 that my new invention was used to launch the first video camera products featuring an IS function. Vibrating gyroscopes had been rather unstable, but we improved their structure to the point that we could mass-produce them for the first time anywhere in the world. After that, the invention immediately spread worldwide, and consequently, my invention is now employed in all digital camera image stabilizers. Every time I see beginners taking professional-level clear pictures or videos with no image blur, I feel really glad to have brought such a helpful invention to the public. And, also much to my unexpected delight, the vibrating gyro that we invented was also modified to be widely used for car navigation systems and stable vehicle travel. When I look back at this now, I never felt distressed even when encountering any of the many technological hurdles, or when people didn’t understand my invention and refused to use it in our products. The reason was that the feeling of excitement of doing what no one had ever done was much more powerful. A gyroscope is capable of precisely measuring the slightly rotated angle of an object in the air (e.g., an airplane) and is commonly used to determine the attitude of an airplane. The micro-miniature version of a gyro is a vibrating gyro. “Camera-motion image blur results from rotation,” I realized. A moment later, I made the mental connection between this camera jitter and our vibrating gyroscopes; that is, it occurred to me that image blur could be eliminated by measuring the rotation angle of a camera with a vibrating gyro and correcting the image accordingly. I immediately started to research this idea. 
“Blur” was less of a problem at this time because a camera body itself was still heavy, so I started with basic research; i.e. analysing exactly what was considered as an annoying blur. DEPT OF COMPUTER ENGINEERING 6 MPTC, MALAPPURAM
  • 9. IS IN SMARTPHONES After a year of efforts, I could not conclude that my invention would enable image stabilization, so the development of a prototype also ended up being terminated. Being unwilling to let it go, however, I continued to work on the project on my own at night after work. Then one day at an exhibition, I happened to see a laser display (using visible lasers that use the three primary colours of light (red-green-blue) to display characters and graphics), and I noticed that a certain part of the display could be used for image stabilization. Later, using the part of that laser display I had seen, I finally finished my long-sought prototype for image stabilization. I remember the moment when I first turned on the prototype camera with nervous excitement. Even with a shake of the camera, the image did not blur at all. It was too good to be true! That was the most wonderful moment in my life. Nevertheless, obstacles to success remained. My colleagues responded by saying “Is it really needed?” or “Will it be a big seller?” They were reluctant to acknowledge the significance of image stabilization. So I loaded the prototype camera onto a helicopter and took pictures of Osaka Castle from the air. The prototype enabled such clear pictures to be taken that even the faces of the people in the castle tower could be recognized. These new images were quite obviously different from those significantly blurred images taken with other cameras. I was therefore finally given a green light to develop IS-related products. DEPT OF COMPUTER ENGINEERING 7 MPTC, MALAPPURAM
  • 10. IS IN SMARTPHONES CHAPTER 3 IMAGE STABILIZATION TECHNIQUES There are three types of techniques: 1. Optical image stabilization 2. Electronic image stabilization 3. Artificial image stabilization 3.1 OPTICAL IMAGE STABILIZATION An optical image stabilization system usually relies on gyroscopes or accelerometers to detect and measure camera vibrations. The readings, typically limited to pan and tilt, are then relayed to actuators that move a lens in the optical chain to compensate for the camera motion. In some designs, the favoured solution is instead to move the image sensor, for example using small linear motors. Either method is able to compensate for the shaking of the camera and lens, so that light strikes the image sensor in the same fashion as if the camera were not vibrating. Optical image stabilization is particularly useful at long focal lengths and also works well in low-light conditions. Optical image stabilization is used to reduce blurring associated with motion and/or shaking of the camera during the time the image sensor is exposed to the capturing environment. However, it does not prevent motion blur caused by movement of the target subject or extreme movements of the camera itself; it corrects only the relatively small shaking of the camera lens by the user, within a few optical degrees. This camera-user movement can be characterized by its pan and tilt components, where the angular movements are known as yaw and pitch, respectively. Camera roll cannot be compensated, since 'rolling' the lens does not change or compensate for the roll motion and therefore has no effect on the image itself relative to the image sensor. 3.2 ELECTRONIC IMAGE STABILIZATION EIS is a digital image compensation technique which uses complex algorithms to compare frame contrast and pixel location for each changing frame. DEPT OF COMPUTER ENGINEERING 8 MPTC, MALAPPURAM
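As a rough illustration of the gyroscope-driven loop described in section 3.1 above, the sketch below integrates a single angular-rate sample into a corrective lens displacement for one axis. This is a minimal, hypothetical Python model: the function name, the f·tan(θ) geometry, and the actuator travel limit are illustrative assumptions, not any manufacturer's implementation.

```python
import math

def lens_shift_mm(angular_rate_dps, dt_s, focal_length_mm, max_shift_mm=0.15):
    """One iteration of a simplified OIS loop for a single axis (pitch or yaw).

    angular_rate_dps: gyroscope reading in degrees per second
    dt_s:             sampling interval in seconds
    focal_length_mm:  lens focal length
    max_shift_mm:     assumed actuator travel limit (real OIS travel is tiny)
    """
    # Rotation accumulated during this sample.
    angle_rad = math.radians(angular_rate_dps * dt_s)
    # A rotation of theta displaces the projected image by roughly f * tan(theta).
    image_shift = focal_length_mm * math.tan(angle_rad)
    # The actuator moves the lens the opposite way, within its mechanical range.
    return max(-max_shift_mm, min(max_shift_mm, -image_shift))
```

A steady camera (zero angular rate) yields zero correction, while a fast rotation saturates at the actuator limit, which is one reason OIS can only absorb small shakes of a few optical degrees.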
  • 11. IS IN SMARTPHONES Pixels on the image border provide the buffer needed for motion compensation. An EIS algorithm calculates the subtle differences between each frame, and the results are used to interpolate new frames that reduce the sense of motion. Though the advantage of this method is the ability to create inexpensive and compact solutions, the resulting image quality is always reduced by image scaling and image-signal post-processing artifacts, and more power is required for taking additional image captures and for the resulting image processing. EIS systems also suffer at full electronic zoom (long field of view) and under low-light conditions. Electronic image stabilization, also known as digital image stabilization, was primarily developed for video cameras. It relies on different algorithms for modelling camera motion, which are then used to correct the images. Pixels outside the border of the visible image are used as a buffer for motion, and the information in these pixels can then be used to shift the electronic image from frame to frame, enough to counterbalance the motion and create a stream of stable video. Although the technique is cost-efficient, mainly because there is no need for moving parts, it has one shortcoming: its dependence on the input from the image sensor. For instance, the system can have difficulty distinguishing perceived motion caused by an object passing quickly in front of the camera from physical motion induced by vibrations. 3.3 ARTIFICIAL IMAGE STABILIZATION Nowadays smartphones ship with AI chips that provide extra processing power. These chips are dedicated to adapting to the user and continuously improving results. AI stabilization takes advantage of this AI chip; it is an added processing layer on top of the traditional stabilization (OIS and EIS) setup. 
Whereas EIS works on hard-coded logic with a fixed algorithm, AI stabilization continuously makes minute adjustments to the algorithm and image processing. This helps reduce blurriness at every zoom level. In other words, it is a combination of EIS and OIS. DEPT OF COMPUTER ENGINEERING 9 MPTC, MALAPPURAM
  • 12. IS IN SMARTPHONES CHAPTER 4 APPLICATION IN STILL PHOTOGRAPHY In photography, image stabilization can facilitate shutter speeds 2 to 5.5 stops slower (exposures 4 to 22½ times longer), and even slower effective speeds have been reported. A rule of thumb to determine the slowest shutter speed possible for hand-holding without noticeable blur due to camera shake is to take the reciprocal of the 35 mm-equivalent focal length of the lens, also known as the “1/mm rule”. For example, at a focal length of 125 mm on a 35 mm camera, vibration or camera shake could affect sharpness if the shutter speed is slower than 1/125 second. An image taken at 1/125 second with an ordinary lens could be taken at 1/15 or 1/8 second with an IS-equipped lens and produce almost the same quality. When calculating the effective focal length, it is important to take into account the image format a camera uses. For example, many digital SLR cameras use an image sensor that is 2/3, 5/8, or 1/2 the size of a 35 mm film frame. This means that the 35 mm frame is 1.5, 1.6, or 2 times the size of the digital sensor. The latter values are referred to as the crop factor, field-of-view crop factor, focal-length multiplier, or format factor. On a 2× crop-factor camera, for instance, a 50 mm lens produces the same field of view as a 100 mm lens on a 35 mm film camera. However, image stabilization does not prevent motion blur caused by the movement of the subject or by extreme movements of the camera. Image stabilization is only designed for, and capable of, reducing blur that results from normal, minute shaking of a lens due to hand-held shooting. Some lenses and camera bodies include a secondary panning mode or a more aggressive 'active mode', both described in greater detail below under optical image stabilization. Astrophotography makes much use of long-exposure photography, which requires the camera to be fixed in place. 
However, fastening it to the Earth is not enough, since the Earth rotates. The Pentax K-5 and K-r, when equipped with the O-GPS1 GPS accessory for position data, can use their sensor-shift capability to reduce the resulting star trails. Stabilization can be applied in the lens, or in the camera body. DEPT OF COMPUTER ENGINEERING 10 MPTC, MALAPPURAM
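The "1/mm rule" and crop-factor arithmetic described in this chapter can be expressed directly. The small Python helper below is this report's own illustrative example (the function name and parameters are not any standard API); each stop of stabilization doubles the usable exposure time, since a stop is a factor of two in light.

```python
def slowest_handheld_shutter(focal_length_mm, crop_factor=1.0, is_stops=0):
    """Estimate the slowest shutter speed (in seconds) for blur-free
    hand-held shooting using the reciprocal '1/mm rule'.

    crop_factor: sensor format factor (1.0 for a full 35 mm frame)
    is_stops:    stops of image stabilization; each stop doubles the
                 usable exposure time
    """
    equiv_focal = focal_length_mm * crop_factor  # 35 mm-equivalent focal length
    return (1.0 / equiv_focal) * (2 ** is_stops)

# A 125 mm lens on a 35 mm camera: about 1/125 s without stabilization.
# The same lens with 3 stops of IS: 8/125 s, close to the 1/15 s cited above.
```

On a 2× crop-factor body, for example, a 50 mm lens behaves like a 100 mm lens, so the safe hand-held limit halves to about 1/100 s.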
  • 13. IS IN SMARTPHONES CHAPTER 5 IMAGE STABILIZATION PRINCIPLES Image stabilization is used to reduce blurring associated with motion and/or shaking of the camera during the time the image sensor is exposed to the capturing environment. However, it does not prevent motion blur caused by movement of the target subject or extreme movements of the camera itself; it corrects only the relatively small shaking of the camera lens by the user, within a few optical degrees. This camera-user movement can be characterized by its pan and tilt components, where the angular movements are known as yaw and pitch, respectively. Camera roll cannot be compensated, since 'rolling' the lens does not actually change or compensate for the roll motion and therefore has no effect on the image itself relative to the image sensor. EIS is a digital image compensation technique which uses complex algorithms to compare frame contrast and pixel location for each changing frame. Pixels on the image border provide the buffer needed for motion compensation. An EIS algorithm calculates the subtle differences between each frame, and the results are used to interpolate new frames that reduce the sense of motion. Though the advantage of this method is the ability to create inexpensive and compact solutions, the resulting image quality is always reduced by image scaling and image-signal post-processing artifacts, and more power is required for taking additional image captures and for the resulting image processing. EIS systems also suffer at full electronic zoom (long field of view) and under low-light conditions. DEPT OF COMPUTER ENGINEERING 11 MPTC, MALAPPURAM
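A toy version of the frame-comparison step described above: estimate the global shift between two frames by brute-force search, then counter-shift the new frame using the border pixels as the motion buffer. This is a deliberately simplified Python sketch with hypothetical function names; real EIS uses far more sophisticated motion models than a single global translation.

```python
def estimate_shift(prev, curr, search=2):
    """Find the (dy, dx) camera shift, within +/-search pixels, that best
    aligns `curr` with `prev` (grayscale frames as lists of rows), by
    minimizing the sum of absolute differences over the interior region."""
    h, w = len(prev), len(prev[0])
    best_sad, best_dy, best_dx = None, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            sad = sum(abs(curr[y][x] - prev[y - dy][x - dx])
                      for y in range(search, h - search)
                      for x in range(search, w - search))
            if best_sad is None or sad < best_sad:
                best_sad, best_dy, best_dx = sad, dy, dx
    return best_dy, best_dx

def compensate(curr, dy, dx, margin=2):
    """Counter-shift the frame by the estimated motion, cropping the
    `margin` border pixels that served as the stabilization buffer."""
    h, w = len(curr), len(curr[0])
    return [[curr[y + dy][x + dx] for x in range(margin, w - margin)]
            for y in range(margin, h - margin)]
```

The crop in `compensate` is why EIS output has a slightly narrower field of view than the raw sensor image.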
  • 14. IS IN SMARTPHONES CHAPTER 6 OIS (OPTICAL IMAGE STABILIZATION) OIS is an image stabilization system that enhances mobile photography by physically moving the lens module or image sensor to counteract slight camera movements and avoid blurry photos. Specifically, OIS adds a gyroscope to sense changes in the lens's posture and compensates for the displacement caused by hand shake by physically adjusting the lens position. OIS does not crop the picture. An optical image stabilizer (OIS) is a mechanism used in still or video cameras that stabilizes the recorded image by varying the optical path to the sensor. This technology is implemented in the lens itself, as distinct from in-body image stabilization (IBIS), which operates by moving the sensor as the final element in the optical path. The key element of all optical stabilization systems is that they stabilize the image projected on the sensor before the sensor converts the image into digital information. IBIS can have up to five axes of movement: X, Y, roll, yaw, and pitch. IBIS has the added advantage of working with all lenses. Early on, image stabilization was an unusual feature in traditional professional cameras, and it became more common with the arrival of digital still cameras (DSCs) on the market. These devices took over the digital imaging market and helped drive technological innovation in photography. Then, in 2010, the mobile revolution started and the use of smartphones exploded. This technological evolution encouraged the major phone makers to propose innovative hardware designs that allowed the integration of miniaturized photographic modules in mobile phones, to take pictures with camera-like resolution. This revolution has led to the decline of compact DSCs, deposed by camera-equipped phones. 
Image Stabilization was a key feature, joining display size, Near Field Communication (NFC), wireless charging, and fingerprint security options, in contributing to the success of smartphones and enabling segmentation from high-end models to low-cost ones. OIS systems reduce image blurring without significantly sacrificing image quality, especially for low-light and long-range image capture. DEPT OF COMPUTER ENGINEERING 12 MPTC, MALAPPURAM
  • 15. IS IN SMARTPHONES Mobile camera modules have followed the same trend after their introduction in smartphones and handsets. From a marketing perspective, according to an IC Insights report, sales of stand-alone digital cameras were projected to decline at an annual average rate of 10.5% over the forecast period 2012-2017, while revenues for cell-phone camera integrated circuits were expected to rise at an annual rate of 9.0% in the same period. As a result of the introduction of these and other innovations, the smartphone market has seen remarkable growth, and it was estimated that shipments would exceed 1 billion units in 2014 for the first time. 6.1 OIS: FEATURES AND BENEFITS Optical image stabilization extends the shutter speeds usable for handheld photography by reducing the likelihood of blurring the image from shake during the exposure. For handheld video recording, regardless of lighting conditions, optical image stabilization compensates for minor shakes whose appearance is magnified when watched on a large display such as a television set or computer monitor. The DSC market has moved towards smaller sizes, lower weight, and higher resolutions, and mobile camera modules have followed the same trend. A big drawback of this development has been the impact on image quality of blurring caused by involuntary motion: lighter cameras produce greater blurring. In addition, the introduction of larger LCD displays has encouraged users to take pictures with outstretched arms, further increasing blurring. The introduction of image stabilization in several mobile platforms has been a significant added value for photography lovers, and especially for younger users who replaced their traditional and bulky cameras with brand-new smartphones, or who had cameras available to record memories simply because those cameras were embedded in the mobile platform they were already carrying. 
Image stabilization in smartphones enables pictures and video with quality comparable to digital still cameras in many operating conditions. The demand for image stabilization is increasing both in compact DSCs and in smartphones. DEPT OF COMPUTER ENGINEERING 13 MPTC, MALAPPURAM
  • 16. IS IN SMARTPHONES Picture blurring caused by hand jitter, a biological phenomenon occurring at frequencies below 20 Hz, is even more evident in higher-resolution cameras. In lower-resolution cameras the blurring may not exceed one pixel, which is negligible; but in higher-resolution ones it may affect many pixels, degrading image quality significantly. Optical Image Stabilization technology is an effective solution for minimizing the effects of involuntary camera shake or vibration. It senses the vibration of the hosting system and compensates for these camera movements to reduce hand-jitter effects. So, OIS captures sharp pictures at shutter speeds three, four, or five times slower than otherwise possible. The increase in shutter opening time permits more brilliant and clear pictures in indoor or low-light conditions. The time during which the shutter remains open regulates the amount of light captured by the image sensor; of course, the longer the exposure time, the greater the potential for hand shake to cause blurring. In the case of smartphone cameras, because of their small lens apertures and the material used to make unbreakable lenses, the amount of light that can enter and strike the image sensor is significantly less than in a DSC. This requires a longer exposure time, with the obvious drawback of increasing the effect of shaking hands. [Sample images: OIS OFF vs. OIS ON, 5-megapixel camera] DEPT OF COMPUTER ENGINEERING 14 MPTC, MALAPPURAM
  • 17. IS IN SMARTPHONES Besides the optical requirements, the two main challenges in the development of OIS for smartphones are size and cost. The additional hardware required to implement OIS in camera-module technology increases the camera's total cost and size. This runs counter to the constant market demand for smaller and thinner devices. 6.2 OIS TYPES Lens-based In Nikon and Canon's implementations, OIS works by using a floating lens element that is moved orthogonally to the optical axis of the lens using electromagnets. Vibration is detected using two piezoelectric angular-velocity sensors (often called gyroscopic sensors), one to detect horizontal movement and the other to detect vertical movement. As a result, this kind of image stabilizer corrects only for pitch- and yaw-axis rotations, and cannot correct for rotation around the optical axis. Some lenses have a secondary mode that counteracts vertical-only camera shake. This mode is useful when using a panning technique; some such lenses activate it automatically, others via a switch on the lens. To compensate for camera shake when shooting video while walking, Panasonic introduced Power Hybrid OIS+ with five-axis correction: axis rotation, horizontal rotation, vertical rotation, and horizontal and vertical motion. Some Nikon VR-enabled lenses offer an "active" mode for shooting from a moving vehicle, such as a car or boat, which is supposed to correct for larger shakes than the "normal" mode. However, active mode used for normal shooting can produce poorer results than normal mode. This is because active mode is optimized for reducing higher-angular-velocity movements (typically when shooting from a heavily moving platform using faster shutter speeds), whereas normal mode tries to reduce lower-angular-velocity movements over a larger amplitude and timeframe. DEPT OF COMPUTER ENGINEERING 15 MPTC, MALAPPURAM
  • 18. IS IN SMARTPHONES Most manufacturers suggest that the IS feature of a lens be turned off when the lens is mounted on a tripod, as it can cause erratic results and is generally unnecessary. Many modern image-stabilized lenses (notably Canon's more recent IS lenses) are able to auto-detect that they are tripod-mounted (as a result of extremely low vibration readings) and disable IS automatically to prevent this and any consequent reduction in image quality. The system also draws battery power, so deactivating it when not needed extends the battery charge. A disadvantage of lens-based image stabilization is cost: each lens requires its own image stabilization system. Also, not every lens is available in an image-stabilized version; this is often the case for fast primes and wide-angle lenses. However, the fastest lens with image stabilisation is the Nocticron, with a speed of f/1.2. While the most obvious advantage of image stabilization lies with longer focal lengths, even normal and wide-angle lenses benefit from it in low-light applications. Lens-based stabilization also has advantages over in-body stabilization. In low-light or low-contrast situations, the autofocus system (which has no stabilized sensors) is able to work more accurately when the image coming from the lens is already stabilized. In cameras with optical viewfinders, the image seen by the photographer through the stabilized lens (as opposed to in-body stabilization) reveals more detail because of its stability, and it also makes correct framing easier. This is especially the case with longer telephoto lenses. This advantage does not occur on compact system cameras, because the sensor output to the screen or electronic viewfinder would be stabilized. Sensor-Shift The sensor capturing the image can be moved in such a way as to counteract the motion of the camera, a technology often referred to as mechanical image stabilization. 
When the camera rotates, causing an angular error, gyroscopes feed that information to the actuator that moves the sensor. The sensor is moved so as to maintain the projection of the image onto the image plane; the required displacement is a function of the focal length of the lens being used.
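The relationship between angular error and sensor displacement can be sketched with simple trigonometry: a rotation by θ shifts the projected image by roughly f·tan(θ), which is the translation the sensor must perform to cancel it. A minimal illustration (the function name and sample values are ours, not from any camera maker's firmware):

```python
import math

def sensor_shift_um(focal_mm, angle_deg):
    """Translation (in micrometres) the sensor must undergo to keep the
    image projection fixed after the camera rotates by angle_deg."""
    return focal_mm * 1000.0 * math.tan(math.radians(angle_deg))
```

For example, a 50 mm lens with a 0.05° angular error calls for a shift of roughly 44 µm, and doubling the focal length doubles the required travel, which is why sensor-shift systems struggle with long telephoto lenses.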
Modern cameras can automatically acquire focal-length information from modern lenses made for that camera. Minolta and Konica Minolta used a technique called Anti-Shake (AS), now marketed as SteadyShot (SS) in the Sony α line and Shake Reduction (SR) in the Pentax K-series and Q-series cameras, which relies on a very precise angular-rate sensor to detect camera motion. Olympus introduced image stabilization with their E-510 D-SLR body, employing a system built around their Supersonic Wave Drive. Other manufacturers use digital signal processors (DSPs) to analyse the image on the fly and then move the sensor appropriately. Sensor shifting is also used in some cameras by Fujifilm, Samsung, Casio Exilim and Ricoh Caplio.

The advantage of moving the image sensor, instead of the lens, is that the image can be stabilized even with lenses made without stabilization. This allows the stabilization to work with many otherwise unstabilized lenses, and reduces the weight and complexity of the lenses. Furthermore, as sensor-based image-stabilization technology improves, taking advantage of the improvements requires replacing only the camera, which is typically far less expensive than replacing all existing lenses under a lens-based scheme.

Some sensor-based image-stabilization implementations are capable of correcting camera roll rotation, a motion that is easily excited by pressing the shutter button; no lens-based system can address this source of image blur. A by-product of the available roll compensation is that the camera can automatically correct for tilted horizons in the optical domain, provided it is equipped with an electronic spirit level, as on the Pentax K-7/K-5 cameras.

One of the primary disadvantages of moving the image sensor itself is that the image projected to the viewfinder is not stabilized. However, this is not an issue on cameras that use an electronic viewfinder (EVF).
This is because the image shown in an EVF comes from the image sensor itself. Similarly, the image projected to a phase-detection autofocus system that is not part of the image sensor, if used, is not stabilized.
Some, but not all, camera bodies capable of in-body stabilization can be pre-set manually to a given focal length. Their stabilization system then corrects as if a lens of that focal length were attached, so the camera can stabilize older lenses and lenses from other makers. This isn't viable with zoom lenses, because their focal length is variable. Some adapters communicate focal-length information between one maker's lens and another maker's body. Some lenses that do not report their focal length can be retrofitted with a chip that reports a pre-programmed focal length to the camera body. Sometimes none of these techniques works, and image stabilization simply cannot be used with such lenses.

In-body image stabilization requires the lens to have a larger output image circle, because the sensor is moved during exposure and thus uses a larger part of the image. Compared to the lens movements in optical image-stabilization systems, the sensor movements are quite large, so the effectiveness is limited by the maximum range of sensor movement, whereas a typical modern optically stabilized lens has greater freedom. Both the speed and range of the required sensor movement increase with the focal length of the lens being used, making sensor-shift technology less suited to very long telephoto lenses, especially at slower shutter speeds, because the available motion range of the sensor quickly becomes insufficient to cope with the increasing image displacement.

Dual

Starting with the Panasonic Lumix DMC-GX8, announced in July 2015, and subsequently in the Panasonic Lumix DC-GH5, Panasonic, which formerly offered only lens-based stabilization in its interchangeable-lens camera system (of the Micro Four Thirds standard), introduced sensor-shift stabilization that works in concert with the existing lens-based system ("Dual IS").
In the meantime (2016), Olympus also offered two lenses with image stabilization that can be synchronized with the built-in stabilization system of the image sensors of Olympus' Micro Four Thirds cameras ("Sync IS"). With this technology a gain of 6.5 f-stops can be achieved without blurred images.
This is limited by the rotational movement of the Earth's surface, which fools the camera's motion sensors. Therefore, depending on the angle of view, the maximum exposure time should not exceed 1⁄3 second for long telephoto shots (with a 35 mm-equivalent focal length of 800 millimetres) and a little more than ten seconds for wide-angle shots (with a 35 mm-equivalent focal length of 24 millimetres), if the movement of the Earth is not taken into consideration by the image-stabilization process.

In 2015, the Sony E camera system also allowed combining the image-stabilization systems of lenses and camera bodies, but without synchronizing the same degrees of freedom; in this case, only the independent compensation degrees of the built-in image-sensor stabilization are activated to support the lens stabilization. Canon and Nikon now have full-frame mirrorless bodies that have IBIS and also support each company's lens-based stabilization. Canon's first two such bodies, the EOS R and RP, do not have IBIS, but the feature was added for the more recent R5 and R6. All of Nikon's full-frame Z-mount bodies (the Z 6, Z 7, and the Mark II versions of both) have IBIS; however, its APS-C Z 50 lacks IBIS.

6.3 OIS BEHAVIOUR

OIS is a mechanical technique used in imaging devices to stabilize the recorded image by controlling the optical path to the image sensor. The two main methods of OIS in compact camera modules are implemented by moving either the position of the lens (lens shift) or the module itself (module tilt). Camera movements by the user can cause misalignment of the optical path between the focusing lens and the centre of the image sensor. In an OIS system using the lens-shift method, only the lens within the camera module is controlled and used to realign the optical path to the centre of the image sensor. In contrast, the module-tilt method controls the movement of the entire module, including the fixed lens and image sensor.
Module tilt allows a greater range of movement compensation by the OIS system, with the largest trade-off being increased module height.
Minimal image distortion is also achieved with module tilt, thanks to the fixed focal length between the lens and image sensor. Overall, in comparison to EIS, OIS systems reduce image blurring without significantly sacrificing image quality. However, because of the added actuators and power-driving sources (EIS needs no additional hardware), OIS modules tend to be larger and, as a result, more expensive to implement.

Lens-shift method: the lens is moved.
Module-tilt method: the entire module is moved.

OIS Module Components

An OIS system relies on a complete module of sensing, compensation, and control components to accurately correct for unwanted camera movement. This movement or vibration is characterized in the X/Y plane, with yaw/pan and pitch/tilt movements detected by different types of isolated sensors. The lens-shift method uses Hall sensors for lens-movement detection, while the module-tilt method uses photo reflectors to detect module movement. Both methods require a gyroscope in order to detect human movement. OIS controllers use gyroscope data within a lens-target-positioning circuit to predict where the lens needs to return in order to compensate for the user's natural movement.
With lens shift, Hall sensors are used to detect the real-time X/Y location of the lens, after taking into consideration actuator mechanical variances and the influence of gravity. The controller uses a separate internal servo system that combines the lens-positioning data from the Hall sensors with the target lens position calculated from the gyroscope to derive the exact driving power needed for the actuator to reposition the lens. With module tilt, the process is similar, but the module's location is measured and repositioned instead of just the lens. With both methods, the new lens position realigns the optical path to the centre of the image sensor.

OIS: the working principle and the specification

In contrast to DIS, OIS doesn't require post-processing algorithms on the captured frames. OIS controls the optical path between the target and the image sensor by moving mechanical parts of the camera itself: even if the camera shakes, OIS ensures that light arriving at the image sensor does not change trajectory, since we can assume any pixel's colour value is the composition of a single cone of light.

The basic principle underlying OIS is usually explained in simplified form, with the movement effects amplified and represented on a single axis for the sake of clarity. Suppose we take a picture of a non-moving object in which the shutter remains open for a time interval equal to ∆t; if no compensation occurs, the involuntary rotation of the camera distributes the light cone falling on a single pixel over a segment of the sensor.
Clearly, this phenomenon occurs across the whole image sensor, causing a blurred image. Conversely, when optical stabilization is applied, the lens moves opposite to the direction of the camera shake and the image is stabilized (i.e. the subject acquired at t1 coincides with the image acquired at t0).
OIS compensation

As stated above, the hand movements were simplified to explain the compensating effect of the lens movements on the picture. In reality, hand tremor affects two axes (see the figure below), where the light cone generated by a single white LED is distributed over the two-dimensional space of the image sensor.

Pictures captured at different shutter speeds (1/8th, 1/4th, 1 and 2 seconds, respectively) without image stabilization clearly demonstrate that the interval ∆t = t1 − t0 captures more than a single rotation; in fact, the curves are the convolution of the LED light distribution.
Comparing these curves with segment A-B in Figure 5a makes clear that hand tremor is not a predictable effect. Finally, we need to make clear that OIS can stabilize image blur due to the photographer's hand trembling, but it cannot compensate for blur caused by scene motion (see the figure below: blur due to scene motion).

Physiological tremor defines the OIS specification

While tremor is not a pathology, it is a common physiological phenomenon present in all humans: an involuntary oscillatory movement of body parts generated directly by muscles as they repetitively contract and relax. By its nature, physiological tremor is not clearly visible to the naked eye and is independent of age, though it may depend on the ability of the body's muscles to maintain a position against the force of gravity. For example, standing up and holding a camera with outstretched arms definitely produces physiological tremor in the arm muscles. The consequence of this phenomenon is visible as the blurring effect in pictures: this effect is what OIS aims to reduce. As with many physical phenomena, statistical modelling has played an important role in understanding and identifying the tremor's characteristics.
An acquisition campaign was conducted over a representative population to measure handshake, identifying the spectrum characteristics and defining the OIS specifications in amplitude and frequency. As a result of the campaign, the vibration was identified as an oscillating signal with:
• an amplitude typically less than 0.5 degrees;
• a frequency compatible with the vital signs of humans, with a spectrum in the range 0-20 Hz.
The relevant results of the identification tests on the tremor signal are shown graphically as angular rates measured on the X and Y axes.

As described above, the technology used to build the camera module is a fundamental factor in OIS implementation. OIS mechanisms increase the complexity of camera modules, introducing new fixed and moving parts and increasing their dimensions and cost. An OIS module contains actuators that are able to move the lens, and sensors that track its position. The technology characterizes OIS-enabled modules both in the driving method and in the position-data sensors, greatly influencing the performance of the OIS system. The actuator for mobile-phone cameras may be built with different technologies, such as an adaptive liquid lens (LL), a shape-memory alloy (SMA), or a piezo-electric motor. Today, the most widespread actuators are based on the Voice Coil Motor (VCM).
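Before moving to the actuator hardware, it is worth noting that the tremor specification above (amplitude under ~0.5°, spectrum within 0-20 Hz) can be turned into a synthetic test signal for exercising a stabilization loop on the bench. A minimal sketch, with made-up component amplitudes chosen so the peak respects the measured bound:

```python
import math
import random

def tremor_deg(t, components):
    """Synthetic hand-tremor angle at time t: a sum of sinusoids confined
    to the 0-20 Hz band identified by the campaign. Amplitudes are
    illustrative, scaled so the peak stays below ~0.5 degrees."""
    return sum(a * math.sin(2.0 * math.pi * f * t + p) for f, a, p in components)

random.seed(0)
components = [(random.uniform(1.0, 20.0),          # frequency inside the band
               0.09,                               # per-component amplitude (deg)
               random.uniform(0.0, 2.0 * math.pi)) # random phase
              for _ in range(5)]

# worst-case deflection over two seconds, sampled at 1 kHz
peak = max(abs(tremor_deg(n / 1000.0, components)) for n in range(2000))
```

A signal like this is only a stand-in for the real measured spectra, but it is enough to check that a control loop attenuates motion in the right band.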
Voice-coil actuation exploits the interaction between a current-carrying coil winding and the field generated by a permanent magnet; the coil and the magnet, one in front of the other, are attached to two sides of the camera-module housing can. When a current is applied to the coil, the interaction between the fixed magnetic field and the field generated electrically by the coil produces a force that moves the camera body by a distance directly proportional to the applied current.

In smartphone camera modules, however, the lens isn't always the moving part. Depending on the architecture used to build the camera module, there are two main methods of OIS compensation, named after their mechanical structures:
• Barrel Shift (also called Lens Shift), where the image sensor is fixed to the bottom of the camera case and the lenses move with a translational movement;
• Camera Tilt, where the image sensor is integrated in the same body as the lenses, and both move angularly to compensate for involuntary shaking.

Another important constructive aspect of the camera module is the position sensors, which are fundamental for detecting the lens movements. These sensors can be placed inside the module in either of two ways to retrieve the position information:
• using Hall sensors, an approach mainly suitable for the Barrel-Shift architecture;
• using photo sensors, appropriate for the Camera-Tilt architecture.

As previously stated, the dimensions of an OIS camera module depend on its structure and technology, and may be slightly bulkier than those of fixed-lens camera modules. On the other hand, integration in smartphones or handsets forces designers to reduce the size of the entire OIS system (camera module plus electronics), which should cover an area of approximately 100 mm². A common trick is used for integrating the OIS system and saving space in smartphone platforms.
It is to place both the camera module and the OIS circuitry on the same flexible PCB, which is then folded to arrange the devices alongside the camera module.

Another important consideration related to the placement of the OIS system in the mobile device is the identification of the reference system that defines the orientation of movements of the entire platform: this reference is determined by the gyroscope. The gyroscope is better suited than the accelerometer to measuring hand jitter, because the blur caused by linear translations of the camera is negligible compared with that caused by angular rotation: human jitter is therefore best measured by observing angular displacements. The gyro measures angular rates along its reference axes; these angular movements are known as pitch, yaw and roll: roll refers to rotation around the longitudinal axis (Z), pitch to rotation about the lateral axis (X), and yaw to rotation around the vertical axis (Y). If the gyroscope is integral to the camera module, it detects and measures the same movements as the camera.

The module architectures described so far are capable of correcting the effects of camera shake in two dimensions, pitch and yaw: in these two topologies it is not possible to compensate for any roll motion, because no translation or rotation of the lens along the axis perpendicular to the image sensor can correct it. Roll can be compensated only by the image sensor itself, and only if it can move freely inside the camera module. Although the camera-tilt architecture guarantees higher performance, its construction complexity has limited its adoption in commercial camera modules. Therefore, from this point on, we will refer to a generic VCM-based Barrel-Shift architecture, built using Hall sensors to manage the lens-shift compensation.
6.5 OIS ARCHITECTURE DESCRIPTION

The electronic circuitry implementing the optical image stabilization, partially described in the previous Section 5, is composed of four main components:
− a Gyroscope, able to sense the movements or vibrations inflicted on the system;
− Hall Sensors, able to sense the lens movements from within the camera module (as depicted in Figure 10a);
− a Driver that performs two functions: it pilots the camera module into the right position, as calculated by a control algorithm, and it retrieves information on the camera-module position from the Hall sensors mounted inside it;
− a Microcontroller that executes the control algorithm to correct for camera displacements.

This architecture has a notable benefit: it permits the system to operate independently of the hosting mobile platform, autonomously compensating the camera module and thus performing OIS.

According to the Control-Loop block diagram (Figure 14, where "θyaw" and "θpitch" respectively indicate the yaw and pitch angles), the angular rates detected by the gyroscope along the two main axes (pitch and yaw) are integrated to obtain the relative angular displacements. The current position of the camera module is obtained through its Hall sensors; the two are compared to retrieve the error angle, which is processed by the control algorithm on the on-board microcontroller to set the VCM actuators' new positions. The driver then moves the camera so as to compensate for the involuntary jitter.
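One iteration of the loop just described can be sketched in a few lines: integrate the gyro rate into an angle, convert that angle into the lens position that cancels the shake, compare it with the Hall-sensor reading, and emit a drive command. This is a toy model, not production firmware; the PI gains, focal length and update rate are invented for illustration:

```python
import math

def ois_step(gyro_rate_dps, hall_pos_um, state, dt_s,
             focal_mm=4.0, kp=0.8, ki=0.2):
    """One control-loop iteration, following the block diagram above.
    Returns a PI drive command for the VCM; gains and the 4 mm focal
    length are assumed values, not from any real module."""
    state["angle_deg"] += gyro_rate_dps * dt_s                  # gyro integration
    target_um = -focal_mm * 1000.0 * math.tan(math.radians(state["angle_deg"]))
    error_um = target_um - hall_pos_um                          # error vs Hall reading
    state["i_term"] += error_um * dt_s                          # integral term
    return kp * error_um + ki * state["i_term"]                 # VCM drive output
```

Running one step with a positive gyro rate and the lens still centred yields a negative drive command, i.e. the lens is pushed opposite to the camera rotation, which is exactly the compensation the diagram describes.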
The control algorithm should also manage the pre-processing of the acquired signals and the Hall-sensor calibration that compensates for temperature drift. Camera-module makers invest a lot of effort in designing modules with no cross-correlation between the axes' movements: in other words, the VCM actuator along the X axis shouldn't generate movements on the Y axis, and vice versa. The correct placement of the Hall sensors is equally important, since they have to detect only the movement along the single axis on which they are mounted. If there is cross-correlation between the axes' movements, the control algorithm must take this into account and introduce an additional input into each single-axis controller, derived from the reading of the other axis' Hall sensor.

Gyroscope

As stated, the gyroscope is the most important element in the control chain, because it is the reference used to retrieve the information on angular displacement. Both the electrical and mechanical characteristics of the gyro shape the control-synthesis strategy and, in general, the overall OIS performance; for example, gyroscope accuracy is a key feature that defines the performance of the entire system and is fundamental for control precision. Other influential factors in the choice of a gyroscope for OIS are:
− phase delay: must be reduced to a minimum, to avoid inserting delay into the control-loop timing;
− zero-rate offset: must be near zero, in order to reduce the integration error;
− output data rate: must be higher than double the frequency of the system to be controlled (oversampling);
− measurement range: up to ±250 dps must be guaranteed;
− rate noise density: must be very low, to maximize signal accuracy;
− power consumption: must be extremely low, both in normal mode and in stand-by mode, to suit a mobile application.
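The checklist above lends itself to a simple screening function. The sketch below encodes a few of the criteria; the field names and numeric thresholds are illustrative (only the ±250 dps range and the oversampling rule come from the text, the rest are assumed values, not any real part's datasheet):

```python
def gyro_meets_ois_spec(spec, tremor_bw_hz=20.0):
    """Screen a candidate gyroscope against the OIS requirements listed
    above. Field names and most thresholds are illustrative."""
    return (spec["odr_hz"] >= 2.0 * tremor_bw_hz          # oversampling rule
            and spec["range_dps"] >= 250.0                # measurement range
            and abs(spec["zero_rate_offset_dps"]) < 1.0   # limits integration error
            and spec["phase_delay_ms"] < 2.0)             # loop-timing budget
```

A candidate with an 800 Hz output data rate and ±2000 dps range passes; one whose data rate falls below twice the 20 Hz tremor bandwidth is rejected.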
The gyroscope can be directly integrated inside the housing can of the camera module: this guarantees the most faithful reference for detecting the same displacements applied to the camera module. For this reason, one of the most important considerations for a gyroscope suitable for OIS is its form factor, which must allow it to be easily integrated into the thinnest camera modules. Finally, it is critical that the gyroscope use a robust and fast communication peripheral to send angular data to the application processor, avoiding the insertion of noise or further delay into the control-loop execution timing. For this reason, an SPI peripheral is preferable to an I2C one, transferring data at up to 6 Mbit/s to the MCU block.

Voice Coil Motor Driver and Hall-Sensor position acquisition

Specific drivers are usually used for piloting the lens movement of the VCM camera module and for acquiring its relative position through the Hall sensors embedded in the camera. These mixed-signal devices may be divided into two stages, usually integrated in a single IC: the driving stage and the acquisition stage. The former houses two full H-bridges (one per axis) and two digital-to-analog converters (DACs), suitable for driving the VCM actuators (which can be treated as inductances) and producing a displacement of the lens. The latter is equipped with two analog-to-digital converters (ADCs) able to read the Hall sensors and so retrieve the lens' exact position.

On the driving stage, two operational modes are designed for driving the VCM-based camera module in the most accurate and efficient way:
− PWM-mode driving;
− linear-mode driving.
Depending on the camera-module technology, the two modes contribute in different ways to the performance of an OIS system: the first mode aims at power efficiency, while the second reduces the noise of the driven part. Concerning the first mode, the selection of the PWM operating frequency is fundamental.
In fact, when the PWM frequency is in the audio range, the mechanical parts of the camera module emit an audible whistle. This problem is easily avoided if the second driving mode is chosen, although that driving method may have a greater impact on the power-consumption budget. Another important feature characterizing the driving stage is anti-ringing compensation.
On the camera module, the lens is directly connected to a mechanical support anchored by springs to the fixed chassis.
These springs may cause mechanical oscillation when the VCM actuators operate, which the Hall sensor may read as a damped ringing signal. The anti-ringing compensation reduces the settling time of every lens-position adjustment and avoids oscillation on sudden changes of position.

The acquisition stage is equally important in terms of accuracy, since it manages the data acquired from the Hall sensors and provides the position of the camera-module lens. Obviously, the Hall sensors must be accurate enough to detect the slightest movement of the lens. For the same reasons described for the gyroscope, the driver must be equipped with an SPI peripheral, to be aligned with the common communication protocol of the other main blocks of the OIS system.

Microcontroller

The microcontroller (MCU) operates independently of the application processor, cyclically executing the operations that constitute the routine of the firmware application:
− it manages the communication with the two devices (gyroscope and driver) to retrieve their data;
− it prepares and elaborates all the incoming information, converting it to a common measurement unit;
− it executes the main algorithm for controlling the entire system;
− it tells the driver the new reference condition to be actuated on the camera module.
All these operations, which constitute the main tasks of the control algorithm, impose two important technical requirements on the MCU: computational power and communication capability. The MCU has to execute the routine in the shortest possible time, while managing 32-bit floating-point variables if necessary. On the other hand, higher computing capacity capable of handling floating point may be too expensive for the application, so a 32-bit ARM® Cortex™-based MCU could be a good choice: this technology also offers other characteristics that fit OIS requirements well, such as small silicon area, low power consumption and a minimal code footprint. From the communication point of view, the MCU has to guarantee a stable link with the gyro and driver in normal operation, but another communication channel with the mobile baseband is also conceivable. So, in addition to the SPI peripherals needed for the gyro and driver as described above, an additional serial peripheral, also supporting a specific communication protocol, may be used to enable communication with the mobile baseband.

6.6 OIS SYSTEM CHARACTERISTICS

The most important specifications that define an OIS system are:
− the accuracy of the controlled actuation (i.e. the driver);
− the controller resolution and the loop frequency;
− the precision of the sensing part (i.e. the gyroscope).
Every single piece must meet the specifications prescribed by the application to contribute to the correct operation of the system. To explain how the features of the individual blocks define and affect OIS performance, consider a camera module with an 8 MPx CMOS sensor, where each pixel is around 1.5 µm, able to move up to ±200 µm along its X and Y axes in a barrel-shift mechanical topology.
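These figures already pin down the required driver resolution, given a correction target of one tenth of a pixel (the criterion used in this report). A quick sketch of the arithmetic, with the DAC framing as our own illustrative assumption:

```python
import math

pixel_um = 1.5               # pixel pitch of the example module
travel_um = 400.0            # +/-200 um of lens travel per axis
step_um = pixel_um / 10.0    # one-tenth-of-a-pixel correction target

steps = math.ceil(travel_um / step_um)   # distinct lens positions needed
dac_bits = math.ceil(math.log2(steps))   # DAC bits that cover those steps
```

Covering 400 µm of travel in 0.15 µm steps needs 2667 positions, so a 12-bit DAC per axis (4096 codes) would be the smallest power-of-two resolution that meets the target.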
The pixel size provides a first-level indication for the controller definition. Indeed, to guarantee excellent precision, the target is a control accuracy of at least one tenth of a pixel (0.15 µm); this represents the smallest shift that the controller should be able to correct. Starting from this figure, we can define the resolution of the controlled actuation (i.e. the driver resolution).

OIS performance evaluation

To evaluate OIS performance, the same methods used for lens-performance appraisal in DSCs can be applied, since they are based on an estimate made on the captured image. However, whereas camera-performance evaluation compares a target with a single image, OIS evaluation compares two images of the target, one taken with OIS off and one with OIS on.

One of the most widely used methods for lens-performance evaluation is based on the Modulation Transfer Function (MTF). In image stabilization, it is essential to estimate the difference between the stabilized image and the un-stabilized one. Image sharpness is the fundamental quality factor for defining OIS performance, because it shows how much OIS reduces image blurring.
To better explain how the MTF is measured, it is useful to describe the factors that characterize perceived sharpness, considering a pattern of black and white bands in which black is represented as 0 and white as 255.
If an OIS system is equipped with a camera module whose high-quality lens minimizes the optical distortion of the captured image, the resolution is closely connected to the choice of the CMOS sensor, whereas the acutance depends on the suppression rate that the controller applies to the jitter effect. Acutance is therefore the main component of sharpness that can be directly managed by the control, and for OIS evaluation the MTF is mainly related to acutance estimation.

As the stripes get closer together, the visible difference between the bars decreases and the edges start to blur into each other; the plot changes from a periodic signal to its average value of 127. As previously described, four zones at four different frequencies are distinguishable: the MTF is close to 1 (or 100%) at low frequencies (zone 1), and drops to 20% when the bars are almost indistinguishable (zone 4). The higher the MTF, the sharper the image; in OIS-performance evaluation, the higher the MTF, the lower the blurring and the better the image stabilization.
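The contrast measurement behind this can be sketched directly: for a captured bar pattern, the modulation is the Michelson contrast (max − min)/(max + min) of the pixel values. A minimal illustration with made-up pixel data (the "blurred" values stand in for what hand shake would do to the same stripes):

```python
def mtf(samples):
    """Michelson contrast of a captured bar pattern:
    (max - min) / (max + min), with black = 0 and white = 255
    as in the pattern described above."""
    i_max, i_min = max(samples), min(samples)
    return (i_max - i_min) / (i_max + i_min)

sharp_bars   = [0, 0, 255, 255] * 8      # well-resolved stripes: MTF = 1.0
blurred_bars = [96, 112, 144, 160] * 8   # same stripes after blur (made-up values)
```

Here the sharp pattern gives an MTF of 1.0 while the blurred one drops to 0.25, matching the qualitative picture above: comparing the two values for OIS-off and OIS-on captures of the same chart quantifies how much blurring the stabilizer removed.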
CHAPTER 7
EIS (ELECTRONIC IMAGE STABILIZATION)

Electronic Image Stabilization (EIS) is a highly effective method of compensating for hand jitter, which manifests itself as distracting video shake during playback. EIS relies on an accurate motion sensor to track the source of jitter, which may be hand shake or vehicle motion, for example. The motion information is then integrated over the current video frame and used to compensate for the jitter by cropping the viewable image from the stream of video frames passing through the imaging pipeline. A key factor for successful compensation in an open OS such as Android is consistent alignment of motion information with its corresponding video frame, since the motion pipeline and the imaging pipeline are independent subsystems within the OS. High-performance motion sensors have a dedicated frame-sync input that allows very accurate alignment with video frames, essentially synchronizing the two pipelines in the system. EIS software IP takes advantage of this accurate synchronization to provide OEMs with a world-class EIS solution whose performance is repeatable and consistent regardless of the image sensor used, reducing time to market. Since EIS neither relies on nor is limited by mechanical or optical compensation, it can track and compensate for large input motion in sports-oriented applications for phones or action cameras. The large degree of compensation can even be used in drones to replace the expensive gimbal normally used to keep the camera steady during flight. The lack of mechanical complexity not only reduces the cost of the system; it also significantly improves the reliability of the end devices in all applications. EIS minimizes blurring and compensates for shake of the device, often a camera. More technically, it compensates pan and tilt, the angular movements corresponding to yaw and pitch.
The EIS technique may be applied to image-stabilized binoculars, still/video cameras, telescopes, and smartphones. EIS corrects the device shaking that normally results in noticeable image jittering within each frame of video or each still image.
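The motion-integration step described above (gyro rate integrated over one video frame, then converted to a pixel shift) can be sketched as follows. All numbers are invented for illustration: the gyro samples, the 400 Hz sample rate, and the 1500-pixel focal length are assumptions, not values from any particular device.

```python
import numpy as np

# Hypothetical gyro rate samples (rad/s) captured during one video frame.
gyro_rate = np.array([0.02, 0.05, 0.04, -0.01])  # pitch rate, 4 samples
gyro_dt = 1.0 / 400.0          # assumed 400 Hz gyro sample rate
focal_px = 1500.0              # assumed lens focal length expressed in pixels

# Integrate the angular rate over the frame to get the rotation angle...
angle = float(np.sum(gyro_rate) * gyro_dt)

# ...then convert the small rotation into a pixel shift on the sensor,
# which tells the EIS crop window how far to move for this frame.
shift_px = focal_px * np.tan(angle)
print(round(shift_px, 3))  # 0.375
```

The frame-sync input mentioned in the text matters precisely here: if the gyro samples are integrated over the wrong frame interval, the computed shift is applied to the wrong video frame.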
Camera shake is particularly tricky with still cameras, especially when using slow shutter speeds and/or telephoto lenses. Telescope-shake issues in astronomy accumulate with gradual atmospheric variations, which invariably lead to visibly altered object positions. EIS cannot prevent blur from subject movement or extreme camera shake, but it is engineered to minimize blur from normal handheld shaking. Certain cameras and lenses are built with more aggressive active modes and/or secondary panning features. The image is stabilized while the image captured by the image sensor is being processed: the image processor traces the data from the image sensor in real time, and the data is analysed using different algorithms that detect and measure any changes in the recorded image. If an algorithm identifies a shift as a shake, a transformation (usually an image shift) is initiated to compensate for it. For the EIS system to effectively compensate the shake and shift the image within specific limits, additional regions must be created at the image edges, using one of two methods.
7.1 WORKING OF EIS

The first method involves digitally zooming into the central section of the image. The scene recorded in a single frame is part of a whole image, and its position within that image can be changed as required. If the content of a specific part of the image shifts in relation to the next image frame, the section borders shift as well. As a result, a pixel that shifted on the image sensor is stored without shifting in the new coordinate system. This approach can cause a slight decrease in the camera's field of view, observable once the system is enabled. The differences between frames are analysed, with the image usually divided into zones. If a shake is observed only in a certain zone, it is interpreted as movement of the recorded subject; if the movement covers a significant part of the image, it is identified as background movement, and the algorithm interprets it as a shake and compensates for it as necessary. The system can easily misinterpret large moving subjects in the frame as a shake; in such cases, more time is required for the algorithm to recognize whether the subject moves or the camera shakes. The second method is similar but hardware-based: the image sensor has a region at its edges that is not used for recording images in normal operation. After a shake is detected, the coordinates of the centre of the image-capturing region move correspondingly.
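The first method above amounts to moving a crop window inside the full sensor frame, opposite to the detected shake, within the reserve of edge pixels. A minimal sketch, with an arbitrary margin size and shake vector chosen for illustration:

```python
import numpy as np

def eis_crop(frame, shake_dxdy, margin=32):
    """Counter-shift the crop window inside the sensor frame.

    frame:       full sensor image (H x W), with an unused margin at the edges
    shake_dxdy:  estimated shake (dx, dy) in pixels for this frame
    margin:      reserve of edge pixels available for compensation
    """
    dx, dy = shake_dxdy
    # Clamp so the crop window never leaves the sensor area.
    dx = int(np.clip(dx, -margin, margin))
    dy = int(np.clip(dy, -margin, margin))
    h, w = frame.shape[:2]
    # Move the crop window opposite to the detected shake.
    top, left = margin - dy, margin - dx
    return frame[top:h - 2 * margin + top, left:w - 2 * margin + left]

frame = np.arange(128 * 128).reshape(128, 128)
stable = eis_crop(frame, shake_dxdy=(5, -3))
print(stable.shape)  # (64, 64): the field of view is reduced by the margins
```

The shape of the output makes the trade-off in the text concrete: the larger the margin reserved for compensation, the larger the shakes that can be absorbed, but the smaller the usable field of view.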
Figure: EIS shift compensation (1: camera movement up; 2: camera movement down; 3: camera movement up and right; 4: image plane; 5: camera movement).

The efficiency of the EIS system depends on the efficiency of its detection methods. The introduction of high-resolution cameras has brought significant benefits to video surveillance system design: higher resolutions provide more image detail and allow analysis of selected regions of interest. This approach is often based on telephoto lenses; as a result, image quality may be reduced and details lost due to motion and shakes. EIS may require some compromises but is cheap to implement. Electronic image stabilization decreases the resolution of the obtained image, which is somewhat a by-product of the algorithm, in which contrast, sharpness, and field of view are often affected. The degree to which image quality deteriorates depends on the amplitude of the shakes compensated by the EIS system; with relatively large focal lengths and in typical conditions, it should not be noticeable to an average user. An advantage of the electronic system is its high speed and, in its most basic version, the absence of mechanical parts; the system does not affect the weight and dimensions of the device. EIS is an image stabilization technique that analyses the image on about 2/3 of the sensor area and then uses the pixels at the edges to compensate for camera movements; most cameras use this method. Electronic image stabilization analyses each frame for movement and shifts the frames pixel-by-pixel to produce a stable video. It can also be done in post-processing software, such as Adobe Premiere Pro's Warp Stabilizer.
7.2 Algorithm for jitter sensing and stabilization

Our algorithm for hybrid-camera-based digital video stabilization consists of the following processes. In the feature point extraction and feature point matching steps, we used the same algorithms as those used in real-time image mosaicking with an HFR video, considering the implementation of parallelized gradient-based feature extraction on an FPGA-based high-speed vision platform.

Feature point detection

The Harris corner feature

λ(x, t_k) = det C(x, t_k) − κ (Tr C(x, t_k))²

at time t_k is computed using the following gradient matrix:

C(x, t_k) = Σ_{x′ ∈ N_a(x)} [ I′_x(x′, t_k)²              I′_x(x′, t_k) I′_y(x′, t_k) ;
                              I′_x(x′, t_k) I′_y(x′, t_k)  I′_y(x′, t_k)² ],

where N_a(x) is the a×a adjacent area of pixel x = (x, y), and t_k = kΔt indicates when the input image I(x, t) at frame k is captured by a high-speed vision system operating at a frame cycle time of Δt. I′_x(x, t) and I′_y(x, t) denote the positive values of the x and y differentials of the input image I(x, t) at pixel x at time t, I_x(x, t) and I_y(x, t), respectively. κ is a tunable sensitivity parameter; values in the range 0.04–0.15 have been reported as feasible. The number of feature points in the p×p adjacent area of x is computed as the density of feature points by thresholding λ(x, t_k) with a threshold λ_T as follows:

P(x, t_k) = Σ_{x′ ∈ N_p(x)} R(x′, t_k),
R(x, t_k) = 1 if λ(x, t_k) > λ_T, and 0 otherwise,

where R(x, t) is a map of feature points. Closely crowded feature points are excluded by counting the number of feature points in the neighbourhood: the reduced set of feature points is calculated as R′(t_k) = {x | P(x, t_k) ≤ P_0} by thresholding P(x, t_k) with a threshold P_0.
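The detection and density-thinning steps above can be sketched in a few lines. The parameter values (κ, λ_T, a, p, P_0) are arbitrary demo choices, and the zero-padded box sum is a simplification of the windowing used in the original algorithm.

```python
import numpy as np

def box_sum(m, a):
    """Sum of m over the a x a neighbourhood of each pixel (zero-padded)."""
    pad, (h, w) = a // 2, m.shape
    p = np.pad(m.astype(float), pad)
    out = np.zeros((h, w))
    for di in range(a):
        for dj in range(a):
            out += p[di:di + h, dj:dj + w]
    return out

def harris_features(img, kappa=0.06, lam_t=1e6, a=3, p=5, p0=4):
    """Harris response lambda = det(C) - kappa*(Tr C)^2, then density thinning."""
    iy, ix = np.gradient(img.astype(float))              # I'_y, I'_x
    ixx = box_sum(ix * ix, a)                            # entries of C, summed
    iyy = box_sum(iy * iy, a)                            # over the a x a area
    ixy = box_sum(ix * iy, a)
    lam = (ixx * iyy - ixy ** 2) - kappa * (ixx + iyy) ** 2
    r = lam > lam_t                                      # feature-point map R
    dense = box_sum(r, p)                                # P: features in p x p area
    return r & (dense <= p0)                             # exclude crowded features

# Synthetic scene: a bright square whose four corners should respond.
img = np.zeros((32, 32))
img[8:24, 8:24] = 255.0
feats = harris_features(img, p0=25)  # P_0 relaxed for this tiny demo
print(bool(feats.any()), bool(feats[0, 0]))  # True False
```

Corners fire because both gradient directions are present there (det C is large), while straight edges and flat regions yield zero or negative responses, matching the role of λ in the formula above.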
Feature point matching

To establish correspondence between feature points at the current time t_k and those at the previous time t_{k−1} = (k−1)Δt, template matching is conducted for all the selected feature points in an image. To match the i-th feature point at time t_{k−1} belonging to R′(t_{k−1}), x_i(t_{k−1}) (1 ≤ i ≤ M), with the i′-th feature point at time t_k belonging to R′(t_k), x_{i′}(t_k) (1 ≤ i′ ≤ M), the sum of squared differences is calculated in the window W_m of m×m pixels as follows:

E(i′, i; t_k, t_{k−1}) = Σ_{ξ ∈ W_m} ‖I(x_{i′}(t_k) + ξ, t_k) − I(x_i(t_{k−1}) + ξ, t_{k−1})‖²,

where i′(i) and i(i′) are the index numbers of the feature point at time t_k corresponding to x_i(t_{k−1}), and of that at time t_{k−1} corresponding to x_{i′}(t_k), respectively. Pairs of feature points between times t_k and t_{k−1} are determined by mutual selection of the corresponding feature points; f_i(t_k) indicates whether or not there is a feature point at time t_k corresponding to the i-th feature point x_i(t_{k−1}) at time t_{k−1}. On the assumption that the frame-by-frame image displacement between times t_k and t_{k−1} is small, the feature point x_i(t_k) at time t_k is matched with a feature point at time t_{k−1} in the b×b adjacent area of x_i(t_k); the computational load of feature point matching is reduced to the order of O(M) by this narrowed search range. For all the feature points belonging to R′(t_{k−1}) and R′(t_k), the processes described in Eqs. (4)–(7) are conducted, and M′(t_k) (≤ M) pairs of feature points are selected for jitter sensing, where M′(t_k) = Σ_{i=1}^{M} f_i(t_k).

Jitter sensing

Assuming that the image displacement between times t_k and t_{k−1} is translational motion, the velocity v(t_k) at time t_k is estimated by averaging the displacements of the selected pairs of feature points as follows:

v(t_k) = (1/Δt) · (1/M′(t_k)) Σ_{i=1}^{M} f_i(t_k) (x̃_i(t_k) − x_i(t_{k−1}))
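The SSD matching with a narrowed b×b search range can be sketched as follows; the window size m, search size b, and the synthetic (2, 1) shift are demo values.

```python
import numpy as np

def ssd(img_a, pa, img_b, pb, m=5):
    """Sum of squared differences of m x m windows at pa in img_a and pb in img_b."""
    r = m // 2
    wa = img_a[pa[0] - r:pa[0] + r + 1, pa[1] - r:pa[1] + r + 1].astype(float)
    wb = img_b[pb[0] - r:pb[0] + r + 1, pb[1] - r:pb[1] + r + 1].astype(float)
    return float(((wa - wb) ** 2).sum())

def match_feature(prev_img, cur_img, p_prev, candidates, b=9):
    """Match a feature from frame t_{k-1} to the best candidate in a b x b area."""
    r = b // 2
    near = [c for c in candidates
            if abs(c[0] - p_prev[0]) <= r and abs(c[1] - p_prev[1]) <= r]
    if not near:
        return None  # f_i(t_k) = 0: no correspondence in this frame
    return min(near, key=lambda c: ssd(prev_img, p_prev, cur_img, c))

rng = np.random.default_rng(0)
prev_img = rng.integers(0, 256, (40, 40))
cur_img = np.roll(prev_img, shift=(2, 1), axis=(0, 1))  # frame shifted by (2, 1)
p_prev = (20, 20)
candidates = [(22, 21), (25, 30), (19, 18)]
print(match_feature(prev_img, cur_img, p_prev, candidates))  # (22, 21)
```

The restriction to the b×b neighbourhood is what keeps the cost at O(M): each of the M features is compared only against the handful of candidates near its previous position, not against all features in the frame.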
Jitter displacement d(t_k) is computed at time t_k by accumulating the estimated velocity v(t_k) as follows:

d(t_k) = d(t_{k−1}) + v(t_{k−1})·Δt,

where the displacement at time t = t_0 = 0 is initially set to d(t_0) = 0. The high-frequency component of the jitter displacement, d_cut(t_k), which is the camera jitter movement intended for removal, is extracted using the following high-pass IIR filter:

d_cut(t_k) = IIR(d_k, d_{k−1}, …, d_{k−D}; f_cut),

where the order of the IIR filter is D; it is designed to exclude the low-frequency component below the cut-off frequency f_cut.

Composition of jitter-compensated image sequences

When the high-resolution input image I′(x′, t′_{k′}) at frame k′ is captured at time t′_{k′} = k′Δt′ by a high-resolution camera operating at a frame cycle time of Δt′, which is much larger than that of the high-speed vision system, Δt, the stabilized high-resolution image S(x′, t′_{k′}) is composed by displacing I′(x′, t′_{k′}) with the high-frequency component of the jitter displacement, d_cut(t̂′_{k′}):

S(x′, t′_{k′}) = I′(x′ − l·d_cut(t̂′_{k′}), t′_{k′}),

where x′ = l·x indicates the image coordinate system of the high-resolution camera, whose resolution is l times that of the high-speed vision system. t̂′_{k′} is the time when the high-speed vision system captures its image at the nearest frame after time t′_{k′}, when the high-resolution camera captures its image:

t̂′_{k′} = ⌈t′_{k′}/Δt⌉·Δt,

where ⌈a⌉ denotes the minimum integer larger than a. In this way, video stabilization of high-resolution image sequences can be achieved in real time by image composition, using input sequences displaced by the high-frequency-displacement component sensed by the high-speed vision system acting as an HFR jitter sensor.
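The high-pass extraction of d_cut can be illustrated with a minimal first-order filter. The algorithm described above uses an order-D IIR design, so this single-pole version, and the pan/jitter frequencies below, are simplifications invented for the sketch.

```python
import numpy as np

def highpass(d, fcut, dt):
    """First-order IIR high-pass: keeps fast jitter, discards slow intended motion."""
    alpha = 1.0 / (1.0 + 2.0 * np.pi * fcut * dt)   # single-pole coefficient
    out = np.zeros_like(d)
    for k in range(1, len(d)):
        out[k] = alpha * (out[k - 1] + d[k] - d[k - 1])
    return out

dt = 0.001                                   # 1000 fps jitter sensing
t = np.arange(0.0, 1.0, dt)
pan = 10.0 * np.sin(2 * np.pi * 0.2 * t)     # slow intentional pan (0.2 Hz)
jitter = 3.0 * np.sin(2 * np.pi * 15 * t)    # 15 Hz hand-shake displacement

pan_out = highpass(pan, fcut=2.0, dt=dt)     # pan is attenuated roughly tenfold
jit_out = highpass(jitter, fcut=2.0, dt=dt)  # 15 Hz jitter passes almost unchanged
print(round(float(np.abs(pan_out).max()), 2),
      round(float(np.abs(jit_out).max()), 2))
```

This is why the stabilized image is shifted by d_cut rather than by d itself: subtracting the full accumulated displacement would also cancel intentional camera motion such as panning.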
7.3 WAVELET-EIS

Generally, the sight system mounted on a vehicle requires a high stabilization function that removes all translational and rotational motion disturbances under stationary or non-stationary conditions. To eliminate the disturbances coming from the vehicle engine, the cooling fan, and irregular terrain, conventional systems adopt only 2-axis mechanical stabilization using accelerometers, gyros, or inertial sensors, because 3-axis mechanical stabilization is too bulky and expensive. In a 2-axis mechanical stabilization system, however, the roll component of the unwanted motion remains uncompensated, which degrades the performance of object detection and recognition. This shortcoming has led to the use of DIS solely for the roll motion that is not mechanically compensated in the 2-axis system. DIS is the process of generating compensated video sequences in which all unwanted motion is removed from the original input. Recently, several studies on EIS have been presented, such as global motion estimation using local motion vectors, motion estimation based on edge-pattern matching, fast motion estimation based on bit-plane and Gray-coded bit-plane matching, and phase-correlation-based global motion estimation. These approaches can only correct translational movement and perform poorly when the image fluctuation is dominated by rotational motion. A scheme has been presented for rotational motion estimation, but it requires predetermined prominent features such as the horizon. Chang proposed digital image translational and rotational motion stabilization using an optical-flow technique; the algorithm estimates the global rotation and translation from the estimated angular frequency and rotational centre.
However, it involves a time-consuming process, since it finds the rotational centre by search, which is not appropriate for real-time application. Accordingly, a new wavelet-based DIS algorithm for rotational motion estimation is proposed for the stabilization system.
First, to find the translational motion vector, i.e. the local motion vector, the proposed algorithm uses fine-to-coarse multi-resolution motion estimation (FtC MRME). Second, the rotational motion vector, represented by a centre and an angular frequency, is estimated using the local motion components, i.e. the vertical and horizontal motion vectors of decomposition level 2 in the wavelet domain. The global motion field is defined by the rotational centre and the angular frequency. The rotational centre is estimated from the zero-crossing points of the directions of the vertical and horizontal motion vectors obtained by FtC MRME with block matching (BM). The rotational angle is then computed from a special subset of the motion vectors. Finally, motion compensation is performed by bilinear interpolation. The experimental results show improved stabilization performance compared with the conventional EIS algorithm.

The Proposed Algorithm

The proposed digital image stabilization algorithm based on the wavelet transform consists of two main modules: motion estimation using FtC MRME, and motion compensation.

Motion Estimation

To obtain the true translational local motion information, BM, one of several motion estimation techniques, is easy to implement and is used in conventional image-compression standards. However, this simple method is prone to fall into local minima instead of the desired global minimum. This problem can be overcome with a fine-to-coarse approach in the wavelet domain, which exploits the multi-resolution property of the wavelet transform. The initial motion estimation is executed in the pixel domain; in other words, the motion vectors at the finest level of the wavelet transform are first estimated using a conventional BM-based motion estimation algorithm, and are then scaled and refined at coarser resolutions. In this way, accurate motion estimation is achieved without local minima.
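The single-level block-matching core on which FtC MRME builds can be sketched as follows: plain exhaustive BM without the wavelet fine-to-coarse refinement, with arbitrary block and search sizes chosen for the demo.

```python
import numpy as np

def block_match(prev_f, cur_f, block=8, search=4):
    """Exhaustive block matching: one (dy, dx) motion vector per block."""
    h, w = prev_f.shape
    vectors = []
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            ref = prev_f[by:by + block, bx:bx + block]
            best, best_v = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y and y + block <= h and 0 <= x and x + block <= w:
                        err = float(((ref - cur_f[y:y + block, x:x + block]) ** 2).sum())
                        if err < best:
                            best, best_v = err, (dy, dx)
            vectors.append(best_v)
    return vectors

rng = np.random.default_rng(1)
prev_f = rng.random((32, 32))
cur_f = np.roll(prev_f, shift=(2, -1), axis=(0, 1))   # global shift (2, -1)
vecs = block_match(prev_f, cur_f)
# Interior blocks all report the true global motion vector.
print(max(set(vecs), key=vecs.count))  # (2, -1)
```

On real footage, minimizing the error surface like this can stop at a local minimum; the FtC refinement in the wavelet domain is exactly what the text proposes to avoid that.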
In this technique, because accurate motion estimates are formed at the finest resolution and then scaled to coarser resolutions in the encoding process, the motion estimates better track the true motion and exhibit low entropy, providing high quality both visually and quantitatively. The structure of the wavelet transform and the FtC MRME method, together with the selection of the rotational centre point after FtC MRME, is illustrated in the accompanying figure.

7.4 EIS WITH HFR VIDEO PROCESSING

An HFR vision system can simultaneously estimate apparent translational motion in image sequences, acting as an HFR jitter sensor, and is hybridized to assist in compensating high-resolution image sequences. A hybrid-camera system was developed for real-time high-resolution video stabilization that can stabilize 2048×2048 images captured at 80 fps by executing frame-by-frame feature point tracking in real time at 1000 fps on a 512×512 HFR vision system. Its performance was demonstrated by experimental results for several moving scenes.
CHAPTER 8
AIS (ARTIFICIAL IMAGE STABILIZATION)

AIS is an image stabilization technology that has emerged in the last four or five years, led by Huawei in April 2017. AIS integrates an AI algorithm into the traditional image stabilization algorithm, combining the advantages of EIS and OIS (low cost and good stabilization effect). Through the AI algorithm, it estimates the movement of the lens, so the system obtains a smoother and more accurate pose estimate and achieves a better video stabilization effect. The recently released Honor X10 not only integrates the 40-megapixel RYYB super-sensitivity camera system but also uses AIS smart image stabilization technology. The latter not only effectively solves the dark-light shooting problem but also delivers an excellent video stabilization effect. To verify the AIS smart image stabilization effect of the Honor X10, the tester deliberately shook the phone in hand while shooting, simulating the scenes of shooting while walking or running. It is not difficult to see from the resulting footage that, even with a large 'handshake', the AIS smart image stabilization of the Honor X10 still accurately tracks the position of the lens through the AI algorithm and adjusts the video frame in real time, easily producing a very stable video recording. In addition to AIS smart image stabilization, the Honor X10 also supports 4K time-lapse photography, 960-frame slow motion, and smart video editing functions. The Honor X10's 40 MP high-sensitivity lens supports the exclusive Owl 2.0 algorithm; beyond dark-light shooting, the night-scene mode works like a professional. AIS handheld Super Night Scene 2.0 records the night city exquisitely, and a 30-second professional-mode long exposure captures the beauty of the vast galaxy. The Honor X10 adds fast-snapping capability and AI tracking technology.
It can automatically identify the shooting scene and keep the subject in focus according to its movement.
8.1 ADVANTAGES OF AIS OVER OTHER METHODS

Everywhere, AI adapts to our habits by continuous learning, giving the best system behaviour. Nowadays smartphones ship with AI chips that have extra processing power, dedicated to adapting to the user and continuously improving results. AI stabilization takes advantage of the AI chip, which is an added processing advantage over traditional stabilization (OIS and EIS) setups. Where EIS works on hard-coded logic with a constant algorithm, AI stabilization continuously makes minute changes to the algorithm and the image processing. This helps reduce blurriness at every zoom level. AIS is essentially a modified EIS system that utilizes the AI chip's processing power. Both are pieces of software manipulating image data, but AIS can do the job better thanks to its dynamic behaviour.

8.2 AI STABILIZATION AT DIFFERENT ZOOM LEVELS

If you are standing still, OIS provides masterclass stabilization; at 1x, the scene tends to be too wide for shakiness to be visible with OIS. AI stabilization at 10x hybrid zoom: Huawei pushed hard in the zoom department, and the same goes for stabilization. At 10x hybrid zoom, AIS works in a fairly standard way, compensating the hand movement, anticipating future movement, and keeping the data for the current movement. Combining all of these, a 3D space is created in which the countermoves are calculated and the frame is adjusted. AI stabilization at 50x: from the sample videos, even at 50x the masterclass performance of AIS combined with OIS gives vibration-free output. The AI might learn the vibration types and the stability level of the person holding the camera, and from that deliver the desired results. It is possible, though only a guess and not confirmed, that shakiness or movement data is shared from devices solely to improve AI stabilization capabilities.
8.3 AI STABILIZATION BENEFITS IN NIGHT PHOTOGRAPHY

Even with a Canon 50 mm f/1.8 lens at a 1-second exposure, an image looks blurry when zoomed to 50%, because of the long exposure time. DSLRs are not blessed with the modern high-performing CPUs and flexible information-processing units of smartphones. The same goes for stabilization: at night it becomes very difficult to obtain reference data in low light to perform stabilization. AI stabilization is so effective that the phone can be held in hand for even a 6-second exposure and the final output will still be crisp, thanks to the data processing and the dedicated AI chip. This dynamic behaviour is what makes AIS unique.
CHAPTER 9
OIS VS EIS VS AIS

As far as the competition is concerned, OIS is the best way of stabilizing footage, but on larger bumps OIS is not able to compensate the shakes. EIS is the cheap alternative to OIS; cropping the image is the downside of this system, but if well implemented it can be great, although some jitter always remains between frames. EIS doesn't require extra hardware, and the camera module remains lightweight and affordable; the system is mostly found on low-end smartphones. The implementation depends on software tuning, and the final results can differ from one manufacturer to another. For instance, Google used only EIS in the first-generation Pixel, and we all know how good it turned out on the photography front. OIS is a must-have in modern smartphones. Most mid-range offerings from Samsung, OnePlus, and Google come with OIS on the main camera. OIS adds more weight to the camera, and it's more expensive, too. High-end smartphones like the Galaxy S22 Ultra have OIS hardware in the main as well as the telephoto camera for steady results in zoom shots. Vivo, one of the leading OEMs in China, uses a gimbal camera stabilization system that is complicated, heavy, and expensive, but more effective than traditional OIS systems. These days, most flagship offerings have hybrid image stabilization (HIS), which combines OIS and EIS to offer an all-round solution. AIS tends to resolve the remaining issues of OIS and EIS: jitter-free frames, less motion blur (adaptive motion blur), and stabilization that adapts to the situation; if the footage is bumpier, more cropping with high frame rates, while in plain areas CPU usage is controlled and only minute stabilization is performed. The possibilities of AI stabilization are countless; we have to see how companies implement this technology and deliver it to us.
So, this was the analysis of AI stabilization, centred on the Huawei P series. In the future, more companies should put effort into implementing this artificial-intelligence stabilization system to replace gimbals.
CHAPTER 10
CONCLUSION

Image stabilization is one of several important things that make a good mobile camera. Optical Image Stabilization is a mature technology that minimizes blur caused by camera shake. For a long time, it has been an essential feature of professional cameras and digital still cameras; more recently, thanks to the evolution of mobile technology, it has rapidly become an essential feature across flagship smartphones. Commonly, all phones come with EIS, whose performance depends on the brand; OIS comes only in high-end devices; and AIS comes with OIS enabled in Huawei devices. There may be further updates to AIS in the coming future. Smartphones are not ergonomically designed for photography; as a result, they can be tricky to hold, which may lead to shaky shots. Image stabilization helps with this by countering minor shaky hand movements.
REFERENCES

[1] smartphonephotographer.com
[2] androidauthority.com
[3] Wikipedia
[4] www.jpo.go.jp