A Project Report on
“Image Non-Uniformity Correction in Infrared Focal Plane Arrays-
A Study and Implementation”
Submitted by
V. Shobha
E & TC (Roll No. - 1104269)
Under the guidance of
Project Guide
M.Vinod Kumar
DGM (Assy), Milan
BDL, Kanchanbhag
School of Electronics Engineering,
KIIT University
(Established U/S of UGC Act)
Bhubaneswar, Odisha, India
Abstract
The IRFPA is part of an electro-optical (EO) system that combines an optic system, an infrared
sensor, a high-end Digital Signal Processor (DSP) and its associated electronics, and a display.
An Infrared Focal Plane Array (IRFPA) is an array of detectors aligned at the focal plane of the
EO system. Every detector in the IRFPA can respond differently under the same stimulus; this
variation is known as image non-uniformity. The non-uniformity leads to the presence of a Fixed
Pattern Noise (FPN) in the resulting images, degrading the resolving capability of the EO system.
Non-uniformity correction is therefore applied to remove it.
The goal of Non-uniformity Correction (NUC) is to reduce the magnitude of spatial noise below
the temporal noise level. In this work, NUC is performed in software using a source-based
(calibration) transformation technique, the two-point NUC algorithm. The software is developed
in the VisualDSP++ Integrated Development & Debugging Environment (ID&DE).
The most commonly used technique is the two-point calibration method, in which the FPA is
calibrated at two known temperatures using black-body data. The gain and offset of each detector
output are calibrated across the array so that the FPA produces a uniform and radiometrically
accurate output at these two reference temperatures. Graphs of temperature vs. corrected output
and temperature vs. Root Mean Square Error (RMSE) are plotted to show the results of image
Non-uniformity Correction (NUC).
Contents

Abstract
List of Figures
List of Abbreviations
1. Introduction
   1.1 Background
   1.2 Choice of Digital Signal Processor
2. TigerSHARC Processor (TS101S)
   2.1 General Description
   2.2 Memory Map
       2.2.1 On-chip SRAM Memory
       2.2.2 External Port
   2.3 Booting
       2.3.1 Selecting the Booting Mode
       2.3.2 EPROM/FLASH Device Boot
       2.3.3 Host Boot
       2.3.4 Link Port Boot
       2.3.5 No Boot Mode
   2.4 VisualDSP++ (IDDE)
       2.4.1 Code Development Tools
       2.4.2 Parts of User Interface
       2.4.3 Create a New Project
       2.4.4 Add a Source File to the Project
       2.4.5 Build and Run the Program
       2.4.6 Open a Plot Window
3. Infrared Focal Plane Array
   3.1 Basic Terms/Definitions
       3.1.1 Integration Time
       3.1.2 Responsivity of Detector
       3.1.3 Pixel
       3.1.4 Resolution
       3.1.5 Read Out Integrated Circuit (ROIC)
       3.1.6 Hybrid and Monolithic Sensors
       3.1.7 Fill Factor of FPA
   3.2 Non-Uniformity in IRFPA
   3.3 Types of Non-Uniformity Correction Techniques
       3.3.1 Calibration/Source-based Techniques
       3.3.2 Scene-based Techniques
   3.4 Block Diagram of Test Setup
       3.4.1 Black Body
       3.4.2 IR Sensor
       3.4.3 Cooling System
       3.4.4 Test Board
4. Two-Point Non-Uniformity Correction
   4.1 Calibration/Reference Temperatures
   4.2 Sensor Non-Uniformity
   4.3 Graphical Representation of NUC
   4.4 Two-Point NUC Technique
   4.6 Image Debugging and Analysis Software (IDEAS)
   4.7 Operation of Test Board
   4.8 Software Implementation
5. Observations and Results
   5.1 Temperature vs. Corrected Pixel Output
   5.2 Temperature vs. RMSE
   5.3 Real-Time Examples
6. Conclusion & Future Scope
References
Appendix
List of Figures

1.1 Von Neumann Architecture
1.2 Harvard Architecture
1.3 Super Harvard Architecture
1.4 Typical DSP Architecture (Analog Devices SHARC DSP)
2.1 Functional Block Diagram of TS101S
2.2 Memory Map
2.3 Single Processor Configuration
2.4 PROM Booting
2.5 IRQ3-0 Interrupt Vector Table
2.6 The VDSP++ Environment
2.7 The VDSP++ Environment icon
2.8 Step 4 in creating a project; select the processor type
2.9 Step 1 to Add a Source File
2.10 To build a project
2.11 To run a program
2.12 Plot Configuration dialog box
2.13 Plot settings dialog box
3.1 Block diagram of Test Setup
4.1 Two pixels' output with different gain and offset
4.2 Two pixels' output with gain compensated
4.3 Two pixels' output with offset compensated
4.4 Two pixels' output with gain and offset compensated
4.5 Image Debugging & Analysis Software Environment (selecting the array size)
4.6 Image Debugging & Analysis Software Environment (showing pixel intensities in decimal)
5.1 Corrected pixel values for different integration times at different temperatures (Expressions window)
5.2 Temperature vs. corrected pixel output
5.3 RMSE values for different integration times at different temperatures (Expressions window)
5.4 Temperature vs. RMSE
5.5 Real-time uncorrected (raw) image
5.6 Uniform image obtained after performing the two-point NUC algorithm
5.7 Example of two-point NUC implementation
List of Abbreviations
ATR - Automatic Target Recognition
EO - Electro-optical sensor
NUC - Non-Uniformity Correction
VDSP - VisualDSP
FPN - Fixed Pattern Noise
IRFPA - Infrared Focal Plane Array
ID&DE- Integrated Development & Debugging Environment
FPA - Focal Plane Array
CPU - Central Processing Unit
ALU - Arithmetic Logic Unit
BGA - Ball Grid Array
SRAM - Static Random Access Memory
SIMD - Single Instruction Multiple Data
VLIW - Very Long Instruction Word
ROIC - Read Out Integrated Circuit
RNU - Residual Non-uniformities
IDEAS- Image Debugging and Analysis Software
LWIR - Long Wavelength Infra-Red
Acknowledgement
This project work would not have been completed without acknowledging all those who helped
us during our project work, and whose constant guidance, support and encouragement have
crowned our efforts with success. It is a great pleasure to express our profound sense of gratitude
to our project guide Mr. Vinod Kumar, DGM (Assy) Milan, Bharat Dynamics Limited, and our
project mentor Mr. D. Shashi Ratna, Sr. Manager, Bharat Dynamics Limited, for their valuable
and inspiring guidance, suggestions and encouragement throughout the course of our learning
experience.
V. Shobha shobha.vissa20@gmail.com
Chapter 1: Introduction
1.1 Background
The project titled “Non-Uniformity Correction (NUC) in Infrared Focal Plane Arrays
(IRFPA) – A Study & Implementation” addresses a central concept in image enhancement and
restoration for modern thermal imaging systems. The image captured by the IRFPA contains
non-uniformity, which can be eliminated to a large extent using non-uniformity correction
techniques.
The IRFPA is part of an electro-optical system that employs an array of lenses at the front
end to capture images both by day and by night, and a high-end DSP for signal processing at the
rear end. The individual pixels of the IRFPA give different responses under the same stimulus,
which results in non-uniformity. This non-uniformity, also known as fixed pattern noise,
degrades the image; it can be corrected by non-uniformity correction algorithms using a black
body calibrated at two known temperatures. The two-point NUC algorithm has been implemented
using the ADSP-TS101 SHARC DSP simulator, and the results of the implementation are shown
in the later chapters.
The project aims at finding a solution to correct the non-uniformity present in the raw
image acquired by the thermal imaging system and thereby restore the actual image content
using source-based techniques. The resulting image shows improved intensity uniformity and
pixel clarity.
The concept of image non-uniformity correction in IRFPAs is employed in defence
domains such as the army, navy, and air force for Automatic Target Recognition (ATR) and
target-tracking applications, where the external environment is not always suitable for capturing
clear images whenever required. Non-uniformity correction techniques make the acquired image
clearer and more suitable for further analysis. Other applications and markets for thermal
imaging technology include stabilized thermal imaging cameras for law-enforcement aircraft,
radiometry devices for monitoring industrial systems, and thermal imaging systems for
ground-based security and search and rescue.
The use of an IRFPA makes image capture possible even in the absence of visible light.
Hence this technology is extremely useful in the key areas of astronomy and space research.
1.2 Choice of the Digital Signal Processor
One of the biggest bottlenecks in executing DSP algorithms is transferring information to
and from memory. This includes data, such as samples from the input signal and the filter
coefficients, as well as program instructions, the binary codes that go into the program sequencer.
Figure 1.1 below shows how this seemingly simple task is done in a traditional
microprocessor. This is often called a Von Neumann architecture, after the Hungarian-American
mathematician John von Neumann (1903-1957). A Von Neumann architecture contains a single
memory and a single bus for transferring data into and out of the central processing unit (CPU).
Multiplying two numbers requires at least three clock cycles, one to transfer each of the three
numbers over the bus from the memory to the CPU. The Von Neumann design is quite
satisfactory when you are content to execute all of the required tasks in serial. In fact, most
computers today are of the Von Neumann design. We only need other architectures when very
fast processing is required.
Fig 1.1: Von Neumann architecture
This leads us to the Harvard architecture, shown in Fig 1.2. Harvard architecture uses
separate memories for data and program instructions, with separate buses for each. Since the
buses operate independently, program instructions and data can be fetched at the same time,
improving the speed over the single bus design. Most present day DSPs use this dual bus
architecture.
Fig 1.2: Harvard architecture
Figure 1.3 illustrates the next level of sophistication, the Super Harvard Architecture.
This term was coined by Analog Devices to describe the internal operation of their ADSP-2106x
and new ADSP-211xx families of Digital Signal Processors. These are called SHARC® DSPs, a
contraction of the longer term, Super Harvard ARChitecture. The idea is to build upon the
Harvard architecture by adding features to improve the throughput. While the SHARC DSPs are
optimized in dozens of ways, two areas are important enough to be included in Fig 1.3: an
instruction cache, and an I/O controller.
Fig 1.3: Super Harvard architecture
The SHARC DSPs provide both serial and parallel communications ports. These are
extremely high speed connections. Dedicated hardware allows these data streams to be
transferred directly into memory (Direct Memory Access, or DMA), without having to pass
through the CPU's registers. In other words, tasks like obtaining a sample or moving the output
happen independently and simultaneously with the other tasks; no cycles are stolen from the
CPU. The main buses (program memory bus and data memory bus) are also accessible from
outside the chip, providing an additional interface to off-chip memory and peripherals.
At the top of the diagram are two blocks labeled Data Address Generator (DAG), one for
each of the two memories. These control the addresses sent to the program and data memories,
specifying where the information is to be read from or written to. In simpler microprocessors this
task is handled as an inherent part of the program sequencer, and is quite transparent to the
programmer.
Fig 1.4: Typical DSP architecture (Analog Devices SHARC DSP)
The data register section of the CPU is used in the same way as in traditional
microprocessors. In the ADSP-2106x SHARC DSPs, there are 16 general purpose registers of 40
bits each.
The math processing is broken into three sections, a multiplier, an arithmetic logic unit
(ALU), and a barrel shifter. The multiplier takes the values from two registers, multiplies them,
and places the result into another register. The ALU performs addition, subtraction, absolute
value, logical operations (AND, OR, XOR, NOT), conversion between fixed and floating point
formats, and similar functions. Elementary binary operations are carried out by the barrel shifter,
such as shifting, rotating, extracting and depositing segments, and so on.
Chapter 2: TigerSHARC Processor (TS101S)
Since NUC is a computationally intensive algorithm, a SHARC-family processor, viz. the
TS101, is used for the purpose. The processor boot mode used is EPROM booting, which is
explained in the following pages. The IDDE used is VisualDSP++ 4.0. A cycle-accurate simulator
for the TS101 is used for the implementation of NUC, and an emulator is used for transferring
the application code to the EPROM.
The ADSP-TS101S TigerSHARC processor is the first member of the TigerSHARC
processor family. The ADSP-TS101S TigerSHARC processor is an ultrahigh performance, static
superscalar processor optimized for large signal processing tasks and communications
infrastructure. The DSP combines very wide memory widths with dual computation blocks
supporting 32- and 40-bit floating-point and 8-, 16-, 32-, and 64-bit fixed-point processing—to
set a new standard of performance for digital signal processors.
The block diagram (Fig 2.1) of the ADSP-TS101S TigerSHARC processor and some key
features include:
 Operating frequency of 300 MHz and an instruction cycle time of 3.3 ns
 19 mm × 19 mm (484-ball) or 27 mm × 27 mm (625-ball) PBGA (Plastic Ball Grid
Array) package
 Dual compute blocks, each consisting of an ALU, multiplier, 64-bit shifter, and 32-word
register file and associated data alignment buffers (DABs)
 Dual integer ALUs (IALUs), each with its own 31-word register file for data addressing
 A program sequencer with instruction alignment buffer (IAB), branch target buffer
(BTB), and interrupt controller
 Three 128-bit internal data buses, each connecting to one of three 2M bit memory banks
 On-chip SRAM (6Mbit)
 An external port that provides the interface to host processors, multiprocessing space
(DSPs), off-chip memory mapped peripherals, and external SRAM and SDRAM
 A 14-channel DMA controller
 Four link ports
 Two 64-bit interval timers and timer expired pin
 An IEEE 1149.1-compliant JTAG test access port for on-chip emulation
Fig 2.1: Functional Block Diagram of TS101S
2.1 General Description
The ADSP-TS101S is a high performance static superscalar DSP optimized for
telecommunications infrastructure and other large, demanding multiprocessor DSP applications.
The architecture is superscalar in that the ADSP-TS101S processor’s core can simultaneously
execute from one to four 32-bit instructions encoded in a very long instruction word (VLIW)
instruction line, using the DSP’s dual compute blocks.
In addition, the ADSP-TS101S supports SIMD operations two ways—SIMD compute
blocks and SIMD computations. The programmer can direct both compute blocks to operate on
the same data (broadcast distribution) or on different data (merged distribution). In addition, each
compute block can execute four 16-bit or eight 8-bit SIMD computations in parallel. Using its
Single-Instruction, Multiple-Data (SIMD) features, the ADSP-TS101S can perform 2.4 billion
40-bit MACs or 600 million 80-bit MACs per second.
Advantages of Ball Grid Array
 It occupies less board space and is smaller, cheaper and lighter.
 It reduces the number of external connections.
 It improves reliability.
2.2 Memory Map
The memory map is divided into four memory areas—host space, external memory,
multiprocessor space, and internal memory—and each memory space, except host memory, is
subdivided into smaller memory spaces.
2.2.1 On-chip SRAM Memory —
The ADSP-TS101S has 6M bits of on-chip SRAM memory, divided into three blocks of
2M bits (64K words × 32 bits). Each block—M0, M1, and M2—can store program, data, or
both, so applications can configure memory to suit specific needs. Each internal memory block
connects to one of the 128-bit wide internal buses—block M0 to bus MD0, block M1 to bus
MD1, and block M2 to bus MD2—enabling the DSP to perform three memory transfers in the
same cycle. The DSP’s internal bus architecture provides a total memory bandwidth of
14.4 Gbytes per second, enabling the core and I/O to access eight 32-bit data words (256 bits)
and four 32-bit instructions each cycle.
2.2.2 External port (Off-Chip Memory/Peripherals Interface) —
The ADSP-TS101S processor’s external port provides the processor’s interface to off-
chip memory and peripherals.
Host Interface
The ADSP-TS101S provides an easy and configurable interface between its external bus
and host processors through the external port. The host can directly read or write the internal
memory of the ADSP-TS101S, and it can access most of the DSP registers, including DMA
control (TCB) registers.
Multiprocessor Interface
The ADSP-TS101S offers powerful features tailored to multiprocessing DSP systems
through the external port and link ports. The external port supports a unified address space that
enables direct interprocessor accesses of each ADSP-TS101S processor’s internal memory and
registers. The DSP’s on-chip distributed bus arbitration logic provides simple, glueless
connection for systems containing up to eight ADSP-TS101S processors and a host processor.
SDRAM Interface
The SDRAM interface provides a glueless interface with standard SDRAMs—16M bit,
64M bit, 128M bit, and 256M bit. The DSP directly supports a maximum of 64M words × 32 bits
of SDRAM. The SDRAM interface is mapped in external memory in the DSP’s unified memory
map.
EPROM Interface
The EPROM or flash memory interface is not mapped in the DSP’s unified memory map.
It is a byte address space limited to a maximum of 16M bytes (24 address bits). The EPROM or
flash memory interface can be used after boot via a DMA.
Fig 2.2: Memory Map
2.3 Booting
Booting is the process of loading the boot loader, initializing memory, and starting the
application on the target.
The Integrated Development and Debugging Environment (IDDE) provides support for
the creation of a bootable image. This image is comprised of a loader kernel and the user’s
application code. The IDDE includes loader kernels specific to each boot type. The boot loader
kernels are 256-word assembly source code routines that perform memory initialization on the
target.
The default boot loader kernels work in conjunction with the loader utility supplied with
IDDE tools. The loader utility takes the user’s TigerSHARC processor executable file along with
the boot loader kernel executable file and produces a bootable image file. The bootable image
file defines how the various blocks of TigerSHARC processor’s internal memory and optional
external system memory are to be initialized.
2.3.1 Selecting the Booting Mode
The two modes for booting are master and slave mode. A master boot accesses an
EPROM or FLASH device, while slave booting is initiated through the link port or through the
external port by a host (another TigerSHARC processor, for example). The state of the external
BMS pin determines the booting method: if the BMS pin is sampled low during reset, the
processor performs an EPROM or FLASH device boot; if it is sampled high, the processor goes
into idle. When the processor is in the idle state waiting for a host or link boot, any signal from
the host or link causes a slave mode boot.
Regardless of which boot mode (master or slave) is used, each shares a common boot process:
 Each DMA channel from which the TigerSHARC processor can boot is automatically
configured for a 256-word (32-bit normal word) transfer.
 Those first 256 instructions, called the loader kernel, automatically execute and perform
additional DMAs to load the application executable code and data into internal and/or
external memory.
 Finally, the loader kernel overwrites itself with the application’s first 256 words.
Fig 2.3: Single Processor Configuration
2.3.2 EPROM/FLASH Device Boot
The EPROM boot is selected as default. The BMS pin is used as the strap option for the
selection—if the BMS pin is sampled low during reset, the mode is EPROM boot.
After reset in EPROM boot, DMA channel 0 is automatically configured to perform a
256-word block transfer from an 8-bit external boot EPROM, starting at address 0 to internal
memory, locations 0x00-0xFF. The DMA channel 0 interrupt vector is initialized to internal
memory address 0x0. An interrupt occurs at the completion of the DMA channel 0 transfer and
the TigerSHARC processor starts executing the boot loader kernel at internal memory location
0x0.
Fig 2.4: PROM Booting
The boot loader kernel then brings in the application code and data through a series of
single-word DMA transfers. Finally, the boot loader kernel overwrites itself with the application
code, leaving no trace of itself in TigerSHARC processor internal memory. When this DMA
process completes, the IVT entry of DMA channel 0 points to internal memory address 0,
allowing the user’s application code to begin execution.
2.3.3 Host Boot
Booting the TigerSHARC processor from a 32-bit or 64-bit host processor is performed
via the data and address buses of the external port.
The BMS pin is used as the strap option for the selection. If the BMS pin is sampled high
during reset, this causes the processor to go into idle and disables master mode boot DMA. When
the processor is in idle state waiting for a host or link boot, any signal from the host or link
causes a slave mode boot.
Host boot uses the TigerSHARC processor Auto DMA channels. Either Auto DMA
channel can be used since both Auto DMA channels (AUTODMA0 and AUTODMA1) are active
and initialized at reset to transfer 256 words of code and/or data into the TigerSHARC
processor's internal memory block 0, locations 0x00-0xff. The corresponding DMA interrupt
vectors are initialized to 0. An interrupt occurs at the completion of the DMA transfer and the
TigerSHARC processor starts executing the boot loader kernel at internal memory location 0x0.
It is intended that these first 256 words act as a boot loader to initialize the rest of TigerSHARC
processor internal memory. The boot loader kernel then brings in the application code and data
through a series of single-word DMA transfers. Finally, the boot loader kernel overwrites itself
with the application code, leaving no trace of itself in TigerSHARC processor internal memory.
When this series of DMA processes completes, the IVT entry of Auto DMA channel 0 (and Auto
DMA channel 1) points to internal memory address 0, allowing the user’s application code to
begin execution.
2.3.4 Link Port Boot
Any link port can be used for booting, since all link ports are active and waiting to
receive data upon power up reset or after a hard reset. Link port boot uses TigerSHARC
processor's link port DMA channels. All link port DMAs are initialized to transfer 256 words to
TigerSHARC processor's internal memory block 0, locations 0x00-0xFF. An interrupt occurs at
the completion of the DMA transfer and the TigerSHARC processor starts executing the boot
loader kernel at internal memory location 0x0. It is intended that these first 256 words act as a
boot loader to initialize the rest of TigerSHARC processor internal memory. The boot loader
kernel then brings in the application code and data through a series of single-word DMA
transfers. Finally, the boot loader kernel overwrites itself with the application code, leaving no
trace of itself in TigerSHARC processor internal memory. When this series of DMA processes
completes, the IVT entry of the link port DMA channel points to internal memory address 0,
allowing the user’s application code to begin execution.
2.3.5 No Boot Mode
No boot mode is a master boot mode—a boot mode in which the TigerSHARC processor
itself starts and controls the external data fetch process. In no boot mode, the TigerSHARC
processor starts fetching data from an IRQ vector (external or internal).
When a host or link boot mode is selected, the ADSP-TS101 processor enters an idle state
after reset, waiting for the host or link port to boot it. It does not have to be booted by the host or
a link port. If external interrupts IRQ3–0 are enabled (selected at reset by the IRQEN strap pin),
they can be used to force code execution according to the default interrupt vectors.
Fig 2.5: IRQ3-0 Interrupt Vector Table
2.4 VisualDSP++ (Integrated Development & Debugging Environment)
VisualDSP++ is the integrated development and debugging environment (IDDE) of the
Analog Devices development tools suite for processors.
The VisualDSP++ single, integrated project management and debugging environment
provides complete graphical control of the edit, build, and debug process. As an integrated
environment, you can move easily between editing, building, and debugging activities.
VisualDSP++ provides these features:
 Extensive editing capabilities. Create and modify source files by using multiple-language
syntax highlighting, drag-and-drop, bookmarks, and other standard editing operations.
View files generated by the code development tools.
 Flexible project management. Specify a project definition that identifies the files,
dependencies, and tools that you use to build projects. Create this project definition once
or modify it to meet changing development needs.
 Easy access to code development tools. Analog Devices provides these code development
tools: C/C++ compiler, assembler, linker, splitter, and loader. Specify options for these
tools by using dialog boxes instead of complicated command-line scripts. Options that
control how the tools process inputs and generate outputs have a one-to-one
correspondence to command-line switches. Define options for a single file or for an entire
project. Define these options once or modify them as necessary.
 Flexible project build options. Control builds at the file or project level. VisualDSP++
enables you to build files or projects selectively, update project dependencies, or
incrementally build only the files that have changed since the previous build. View the
status of your project build in progress. If the build reports an error, double-click on the
file name in the error message to open that source file. Then correct the error, rebuild the
file or project, and start a debug session.
 VisualDSP++ Kernel (VDK) support. Add VDK support to a project to structure and
scale application development. The Kernel page of the Project window enables you to
manipulate events, event bits, priorities, semaphores, and thread types.
 Flexible workspace management. Create up to ten workspaces and quickly switch
between them. Assigning a different project to each workspace enables you to build and
debug multiple projects in a single session.
 Easy movement between debug and build activities. Start the debug session and move
freely between editing, build, and debug activities.
Fig 2.6: The VDSP++ Environment
VisualDSP++ reduces debugging time by providing these key features:
 Easy-to-use debugging activities. Debug with one common, easy-to-use interface for all
processor simulators and emulators, or hardware evaluation and development boards.
Switch easily between these targets.
 Multiple language support. Debug programs written in C, C++, or assembly, and view
your program in machine code. For programs written in C/C++, you can view the source
in C/C++ or mixed C/C++ and assembly, and display the values of local variables or
evaluate expressions (global and local) based on the current context.
 Effective debug control. Set breakpoints on symbols and addresses and then step through
the program’s execution to find problems in coding logic. Set watch points (conditional
breakpoints) on registers, stacks, and memory locations to identify when they are
accessed.
 Tools for improving performance. Use the trace, profile, and linear and statistical profiles
to identify bottlenecks in your DSP application and to identify program optimization
needs. Use plotting to view data arrays graphically. Generate interrupts, outputs, and
inputs to simulate real-world application conditions.
2.4.1 Code Development tools
Code development tools include:
 C/C++ compiler
 Run-time library with over 100 math, DSP, and C run-time library routines
 Assembler
 Linker
 Splitter
 Loader
 Simulator
 Emulator
These tools enable you to develop applications that take full advantage of your processor’s
architecture. The VisualDSP++ linker supports multiprocessing, shared memory, and memory
overlays.
The code development tools provide these key features:
 Easy-to-program C, C++, and assembly languages. Program in C/C++, assembly, or a
mix of C/C++ and assembly in one source. The assembly language is based on an
algebraic syntax that is easy to learn, program, and debug.
 Flexible system definition. Define multiple types of executables for a single type of
processor in one Linker Description File (.LDF). Specify input files, including objects,
libraries, shared memory files, overlay files, and executables.
 Support for overlays, multiprocessors, and shared memory executables. The linker places
code and resolves symbols in multiprocessor memory space for use by multiprocessor
systems. The loader enables you to configure multiple processors with less code and
faster boot time. Create host, link port, and PROM boot images.
2.4.2 Parts of User Interface
VisualDSP++ presents an intuitive user interface for programming Analog Devices
processors. When the VisualDSP++ icon is clicked, the main window appears. This work area
contains everything you need to build, manage and debug a project.
Within the main application window frame, VisualDSP++ provides:
 Title bar
 Menu bar
 Project window
 Editor window
 Control menu
 Output window
 Toolbars
 Status bar
 Expressions [Hexadecimal] window
 Disassembly window
Fig 2.7: The VDSP++ Environment icon
VisualDSP++ provides many debugging windows to view what’s going on. The
programmer needs to learn only one interface to debug all DSP applications. VisualDSP++
supports ELF (Executable and Linkable Format) executable files with DWARF-2 debug
information, and also all executable file formats produced by the linker.
2.4.3 Create a New Project
To create a new project:
1. From the File menu, choose New and then Project to open the Project Wizard.
2. Click the browse button to the right of the ‘Directory’ field to open the Browse for Folder
dialog box and select the directory in which the project is to be stored.
3. In the ‘Project name’ field, enter the project’s name and click ‘Next’.
4. In the Project: Output type window, choose the processor type. For example, in this case
we use the ADSP-TS101 TigerSHARC processor. Then click ‘Next’.
5. Click ‘Finish’ in the Finish window to create the project.
Fig 2.8: Step 4 in creating a project; select the processor type
2.4.4 Add a Source File to the Project
To add a file to the source folder of the project for compilation:
1. Right-click on ‘Source Files’ in the Project window on the left side.
2. Select the ‘Add File(s) to Folder’ option, which opens the Add Files window.
3. Select the required source file from the Add Files window and click ‘Open’, which adds
the corresponding file to the Source folder.
Fig 2.9: Step 1 to Add a Source File
2.4.5 Build and Run the Program
To build the project, from the Project menu choose ‘Build project’ or press F7 directly.
Fig 2.10: To build a project
To run the program, select the Debug menu and choose ‘Run’ or press F5 directly.
Fig 2.11: To run a program
2.4.6 Open a Plot Window
To open a plot window:
1. From the View menu, choose Debug Windows and Plot. Then choose New to open the
Plot Configuration dialog box.
2. In the Plot group box, specify the plot settings.
3. In the Type box, select the type of plot from the drop-down list, and in the Title box, type
the title.
4. Enter the two data sets to plot. After entering each data set, click Add to add it to the
Data sets list on the left of the dialog box.
5. Click OK to apply the changes and open a plot window with these data sets.
6. Right-click in the plot window and choose Modify Settings. On the General page, in the
Options group box, select Legend and click OK to display the legend box.
Fig 2.12: Plot Configuration dialog box
Fig 2.13: Plot settings dialog box
Chapter 3: Infrared Focal Plane Array
An Infrared focal plane array, also known as staring array, is an array of detectors aligned
at the focal plane of the imaging system. Every detector in the IRFPA can have different response
under the same stimulus, known as non-uniformity. This non-uniformity leads to the presence of
a Fixed pattern noise (FPN) in the resulting images, degrading the resolving capabilities of the
thermal imaging system. The most common sources of FPN are inaccuracies in the fabrication
process, variations in the read-out electronics and a decrease in the signal intensity at the edges
of the image caused by the sensor optics.
Infrared (IR) thermal imagers, also known as infrared focal plane arrays (IRFPA), have
been used in military applications for many years. An IR thermal imager is a camera that
provides a picture of the electromagnetic energy radiated from an object in the IR spectral band.
A number of detector technologies have been designed and optimized for imaging in the IR
spectral band, each posing unique design challenges. Most modern IR thermal imagers used for
detection in the IR spectral bands are based on focal plane arrays (FPAs).
3.1 Basic terms/Definitions
The basic terms related to FPA are discussed below:
3.1.1 Integration Time:
Integration time is the amount of time the detector is exposed to the incident radiation. It can
vary from detector to detector and is measured in microseconds.
3.1.2 Responsivity of detector:
The responsivity of a detector is defined as the ratio of its output to the incident input. It is one
of the important features that determine the performance of the detector.
3.1.3 Pixel:
'Pixel' is short for picture element. A pixel is a single point in a graphic image.
Graphics monitors display pictures by dividing the display screen into thousands (or millions) of
pixels arranged in rows and columns. The pixels are so close together that they appear connected.
3.1.4 Resolution:
The quality of display system largely depends on its resolution, how many pixels it can display
and how many bits are used to represent each pixel.
3.1.5 Read Out Integrated Circuit (ROIC):
An ROIC is an integrated circuit designed specifically for reading out detectors of a particular
type; variants exist for different detector types such as infrared and ultraviolet.
3.1.6 Hybrid and Monolithic sensors:
Monolithic sensors are the sensors in which both sensing and non-sensing areas are present on
the same level whereas hybrid sensors are the sensors in which both sensing and non-sensing
areas are present on different levels. Hybrid sensors have higher pixel counts and higher
resolution than monolithic sensors.
3.1.7 Fill factor of FPA:
It is defined as the ratio of the sensing (photosensitive) area to the total pixel area, expressed as
a percentage. The sensing area is where the detector pixels are present, and the non-sensing
area is where the interconnections are made.
3.2 Non-Uniformity in IRFPA
To characterize the performance of an IRFPA, the FPA specific parameters such as
detector-to-detector uniformity (non-uniformity) and dead pixel count should be assessed. FPAs
are made up of a multitude of detector elements, where each individual detector has different
gain and offset that change with time, due to detector-to-detector variability in the FPA
fabrication process, sensor operating temperature, temperature of the observed scene, electronic
read-out noise, etc. The differences in gain and offset among detectors produce fixed-pattern
noise (FPN) in the acquired imagery.
Causes of Non-uniformity
 Lack of control over fabrication process
 Signal variation at the edges of the lens systems
 Slight variation in cooling also results in non-uniformity
 Variations in ambient or scene temperature
Furthermore, this spatial non-uniformity fluctuates slowly with time due to variations in the
FPA temperature, bias voltages and change in ambient or scene temperature. The goal of non-
uniformity correction is to reduce the magnitude of the spatial noise below the temporal noise
level.
3.3 Types of Non-Uniformity Correction (NUC) techniques
There are mainly two types of NUC techniques:
1. Calibration/Source based techniques
2. Scene based techniques
3.3.1 Calibration/Source based techniques
In the calibration method, the FPA is calibrated at certain reference temperatures using black body
data. There are two types of source based NUC techniques:
1. Two point NUC
2. Three point NUC
The commonly used technique is the two-point calibration method, in which the FPA is
calibrated at two known temperatures using black body data. The gain and offset of the detector
output are calibrated across the array so that the FPA produces uniform and radiometrically
accurate output at these two reference temperatures. The reference temperatures T1 and T2
should not be chosen too close together, as this limits the operating temperature range. In two-
point NUC the expected pixel output is assumed to follow a straight-line equation within the
temperature range.
In three point NUC, the expected pixel output is assumed to follow a quadratic law within the
temperature range. FPA is calibrated at three distinct reference temperatures using black body
data.
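As an illustrative sketch of the three-point idea (the project itself implements only the two-point method, and this function name is chosen here for illustration), the quadratic pixel response through three calibration points (T1,V1), (T2,V2), (T3,V3) can be evaluated with Lagrange interpolation:

```c
#include <assert.h>

/* Evaluate the unique quadratic through the three calibration points
 * (t1,v1), (t2,v2), (t3,v3) at temperature t, in Lagrange form. */
double quad_response(double t, double t1, double v1,
                     double t2, double v2, double t3, double v3)
{
    double l1 = (t - t2) * (t - t3) / ((t1 - t2) * (t1 - t3));
    double l2 = (t - t1) * (t - t3) / ((t2 - t1) * (t2 - t3));
    double l3 = (t - t1) * (t - t2) / ((t3 - t1) * (t3 - t2));
    return v1 * l1 + v2 * l2 + v3 * l3;
}
```

Each pixel gets its own quadratic, so the response away from the calibration points is modelled more closely than a single straight line allows, at the cost of a third black body measurement.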
3.3.2 Scene based techniques
Scene based techniques generally use an image sequence and rely on scene
parameters such as motion or change in the actual scene, but they do not provide the required
radiometric accuracy and are also difficult to implement in real-time applications.
3.4 Block Diagram of Test Setup
Fig 3.1: Block diagram of the test setup: a black body with its controller, the electro-optic (IR)
sensor with its cooling system, the test board, a PC, and +28V power supplies.
3.4.1 Black body
There are two types of black bodies:
 Extended area black body
 Point black body
In an extended-area black body the entire surface emits radiation, allowing calibrated and
accurate readings to be taken. A black body controller is used to set the black body to various
temperatures; both absolute and differential temperatures can be set.
3.4.2 IR Sensor
This is a photovoltaic sensor sensitive to infrared radiation in the LWIR region, i.e. 8-14
micrometres. It is a cooled sensor with a Joule-Thomson cooler.
3.4.3 Cooling system
The main principle followed in the cooling system is "cooling by expansion", and the
technique used is a Joule-Thomson (JT) cooler. The detectors are cooled to 77 K (about -196
degrees centigrade). The system uses pressurised air: a thin coil sends the air through a piston
into a closed chamber, where the air expands and cooling takes place. The detectors are cooled
using this technique. A cooled system gives higher resolution and a clearer image than an
uncooled system.
3.4.4 Test Board
The test board is used to transmit commands to and receive signals from the DSP over a serial
link. It is used for the synthesis and analysis of DSP signals. The attached video monitor
displays the video output. The DC regulated power supplies are set to their respective voltages
for the EOS system.
Chapter 4: Two Point Non-Uniformity Correction
To improve the image formed by the electro-optic sensor, the two point calibration
technique is used, wherein the FPA is calibrated at two known temperatures using black body
data. The gain and offset of the FPA are calibrated across the array so that it produces a uniform
and radiometrically accurate output at these two reference temperatures. However, this method
requires halting the camera operation and results in large residual non-uniformities (RNU)
away from the calibration points.
4.1 Calibration (or) Reference temperatures
The temperatures at which the black body is maintained for the purpose of data
acquisition are called reference or calibration temperatures.
The selection of the calibration temperature is also very important. The key factors to be
considered in the process of selection are:
 The reference temperatures T1 and T2 should not be chosen too close together, as this
ultimately limits the operating temperature range.
 The operating temperature range should not be so large that it produces large residual
non-uniformities (RNU) within the range.
The performance of the algorithm degrades as the temperature range increases i.e. the two
point calibration scheme shows poor performance for large temperature ranges.
The temperatures T1 and T2 at which the black body data is captured in our project are 20
and 36 degrees centigrade respectively. The operating temperature of the entire system setup is
room temperature, i.e. 27 degrees centigrade.
4.2 Sensor Non-Uniformity
The detection of the infrared radiation is done by the Infrared sensor present in the EO
sensor. The output of the sensor is given as input to the Analog to Digital Converter. The Infrared
Focal Plane Array (IRFPA) sensor response is generally modeled as a first order linear
relationship between the input irradiance and the detector output.
The output response Xij of the pixel (i,j) is
Xij = aij * xij + bij
where aij and bij are the gain and offset non-uniformities associated with the (i,j)th pixel and xij
is the irradiance received by the (i,j)th pixel.
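A minimal sketch of this linear model (the gain and offset numbers and the function name are invented for illustration) shows how two detectors viewing the same irradiance produce different outputs, which is exactly the fixed-pattern noise:

```c
#include <assert.h>

/* First-order IRFPA pixel model: output = gain * irradiance + offset. */
double pixel_response(double gain, double offset, double irradiance)
{
    return gain * irradiance + offset;
}

/* Two pixels with gains 1.0 and 1.25 and offsets 0 and 5 give outputs
 * 100 and 130 for the same irradiance of 100: the difference is FPN. */
```
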
4.3 Graphical Representation of Non-Uniformity Correction (NUC)
Figure 4.1 below shows two different pixel outputs as the input irradiance changes; T1
and T2 are calibration temperatures which are used to calculate the gain and offset coefficients.
The inequalities in these parameters are due to the presence of non-uniformity in the image.
Hence, the non-uniformity correction technique is implemented in order to compensate these
two coefficients and obtain a uniform image.
Fig 4.1: Two pixels output with different gain and offset
The figure shows the output response of the detectors as their input irradiance changes. T1 and
T2 are the calibration temperatures used to calculate the gain and offset coefficients.
In figure 4.1 above, the slopes and the Y-intercepts of the detector response plots of the
two pixels are unequal; neither the gain nor the offset of the pixels has been compensated.
Fig 4.2: Two pixels output with gain compensated
In fig 4.2 above, the slopes of both pixels are the same, which means the gain of the
pixels has been made equal by the NUC algorithm. But the offset, i.e. the Y-intercept, is
different for the two pixels and remains uncompensated.
Fig 4.3: Two pixels output with offset compensated
In figure 4.3 above, the Y-intercepts of both pixels are the same, which means the
offset values have been made equal by the NUC algorithm. But the gain, i.e. the slope, is
different for the two pixels and remains uncompensated.
Fig 4.4: Two pixels output with gain and offset compensated
In the figure above, the detector outputs of the two pixels, lines A and B, are identical
and hence overlap each other. The Y-intercepts and slopes of both pixels are the same, which
means the offset and gain of both pixels have been made equal by the NUC algorithm. Hence,
this graph shows the corrected pixel output with both gain and offset compensated.
In this way the gain and offset values are compensated and the resultant image has a
higher resolution with better uniformity.
4.4 Two point Non-uniformity Correction (NUC) Technique
Each pixel in the FPA is characterized by its offset level, its sensitivity or gain and its
noise level. If an extended black body is viewed by an IR thermal imager, the levels measured by
the individual pixels should all be close to the average level measured.
Image non-uniformity correction in EO sensor is performed using two point NUC technique.
In two-point NUC, the corrected output Yij is related to the measured signal Xij by the
following linear relationship
Yij = aij * Xij + bij
where aij is the gain and bij is the offset.
Solving the above equation at the two reference temperatures gives aij and bij as shown
below. Using these values, the actual values of the offset and gain are calculated and the
corresponding graphs are drawn.
aij = (Vh - Vl) / (Vhij - Vlij)
bij = Vl - aij * Vlij
Vlij and Vhij are the (i,j)th pixel intensities, and Vl and Vh are the spatial averages of the image
frames at the lower and higher reference temperatures respectively, given as
Vl = (1/N) * sum over (i,j) of Vlij
Vh = (1/N) * sum over (i,j) of Vhij
where N = m*n is the total number of pixels in a frame, and m and n are the number of rows
and columns respectively. Using these equations, the corrected pixel output Yij can be
calculated. The equation used to calculate the corrected pixel output is
Yij = ((Xij - Vlij) * (Vh - Vl) / (Vhij - Vlij)) + Vl
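The two-point correction can be written directly as a function; this mirrors the expression used in the source code listed at the end of this report (the function and variable names here are chosen for readability):

```c
#include <assert.h>

/* Two-point NUC for one pixel: x is the raw output, vl_ij and vh_ij are
 * that pixel's calibration readings at the two reference temperatures,
 * and vl_mean, vh_mean are the corresponding spatial averages. */
double two_point_nuc(double x, double vl_ij, double vh_ij,
                     double vl_mean, double vh_mean)
{
    return (x - vl_ij) * (vh_mean - vl_mean) / (vh_ij - vl_ij) + vl_mean;
}
```

By construction, every pixel maps to vl_mean at its lower calibration reading and to vh_mean at its higher one, so the whole array produces a uniform output at both reference temperatures.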
4.5 Image Debugging and Analysis Software
This is the software that is used for capturing the black body image data i.e., the original
uncorrected pixel intensity values. The figure below shows the working environment and the
auto contrast uncorrected image of the black body.
The array size of the FPA can be selected from the Select menu. Here, it is chosen to be a
2x2 array. The array is placed at the centre of the image taken.
Fig 4.5: Image Debugging and Analysis Software Environment (Selecting the array size)
The image data at a particular pixel is obtained in the decimal format by using this software. The
figure below shows how the decimal value of each pixel is obtained by using this software.
Fig 4.6: Image Debugging and Analysis Software Environment (Showing pixel intensities in decimal)
4.6 Operation of Test Board
Flow chart depicting the operational flow
1. Start.
2. Align the black body surface so that it faces the EO sensor.
3. Cool the IRFPA detector using the cooling system.
4. Power up the EO sensor.
5. Set the black body to a low temperature.
6. If the black body has not reached the desired temperature, wait until it does.
7. Capture the black body data using a PC.
8. Set the black body to a higher temperature.
9. Continue the same procedure for the different black body temperatures.
10. Stop.
4.7 Software Implementation
Flow chart is drawn and software code is written for the two point non-uniformity correction
(NUC) of the raw image.
The flowchart for the software implementation is shown below:
1. Start.
2. Open the image data in the image acquisition system.
3. Select the gate size (2x2) and place the gate at location (row, column) = (60, 60).
4. Open the file containing the grey levels and convert the hexadecimal grey levels to decimal.
5. Populate the array of input temperatures at which data is being captured.
6. Populate the higher reference temperature array, looping until the array is fully populated.
7. Populate the lower reference temperature array, looping until the array is fully populated.
8. Populate the uncorrected array at the first temperature.
9. Calculate the means of the high and low reference temperature arrays.
10. Calculate the corrected pixel output values using the two-point NUC correction formula.
11. Calculate the mean Y of the corrected pixel outputs.
12. Calculate the error e = Yij - Y, the sum of squares (SOS) of the error samples, SOS/N, and
the square root of SOS/N (the RMSE).
13. Store Y(mean) and the RMSE samples in their respective arrays; repeat from step 8 until
all temperature data has been entered.
14. Print the temperature samples, corrected output samples and RMSE samples.
15. Plot graphs of temperature vs. corrected output samples and temperature vs. RMSE.
16. Repeat for the other integration times and plot graphs for all the integration times.
17. Stop.
Chapter 5: Observations and Results
The observations are taken from the graphs plotted using the outputs obtained by
executing the source code. The following graphs are plotted.
5.1 Temperature vs. Corrected pixel output
Fig 5.1: Corrected pixel values for different integration times at different temperatures (Expressions window)
The above figure shows the average of corrected pixel values for different integration
times at different temperatures obtained using the VDSP++ (IDDE) simulator after implementing
the source code for two point non-uniformity correction. They are displayed through the
Expressions [Hexadecimal] window inside the VDSP++ environment. These values are plotted
against the temperature to observe the nature of the graph.
Fig 5.2: Temperature vs. corrected pixel output
This graph shows the values of the averages of the corrected pixel outputs for
temperatures between 16 and 40 degrees centigrade. It is observed that the graphs for all
three integration times are parallel to each other.
5.2 Temperature vs. RMSE
Fig 5.3: RMSE values for different integration times at different temperatures (Expressions window)
The above figure shows the Root Mean Square Error (RMSE) values for different integration
times at different temperatures obtained using the VDSP++ (IDDE) simulator after implementing
the source code for two point non-uniformity correction. They are displayed through the
Expressions [Hexadecimal] window inside the VDSP++ environment. It can be observed that the
RMSE values for the calibration temperatures are zero in the arrays.
Fig 5.4: Temperature vs. RMSE
This graph shows the root mean square error (RMSE) values at different temperatures for
the three integration times. Here the reference temperatures have been taken as 20 and 36
degrees centigrade. We can observe from the above graph that the root mean square error at
these two temperatures is zero for every integration time.
5.3 Real time Examples
Fig 5.5: Real time uncorrected (or) raw image
The above image is the real time image of some rocks and mountains that is captured by
the IRFPA during the night. It is the raw image captured by the system. The two point NUC
algorithm is implemented for its correction.
Fig 5.6: Uniform image is obtained after performing 2 point NUC algorithm
The figure above is the corrected (or uniform) real-time image of the rocks and
mountains. As can be observed, it is an enhanced and clearer version. Another
example can be seen in the figure below.
Fig 5.7: Example of two point NUC implementation
Chapter 6: Conclusion & Future Scope
6.1 Conclusion
The uncorrected pixel intensity values have been acquired through Electro-optic sensors.
These values correspond to the non-uniform image. As the non-uniformity cannot be eliminated
using hardware, software code has been developed to correct the non-uniformity present in the
image. This software code is developed on the principle of two point correction.
The uncorrected pixel intensity values of the black body were noted at three different
integration times for seven different temperatures. The two point non-uniformity correction
algorithm was applied to the raw data at all the seven temperatures to get the corrected data.
Root mean square error was also calculated at all the temperatures; it was observed to
be zero at the calibration temperatures and to take small non-zero values at points away from
the calibration temperatures, which indicates that the non-uniformity is completely eliminated
at the calibration temperatures and reduced at the others.
Slopes of the detector output at different integration times are equal, indicating that the gain
and offset values have been corrected; the resulting image is a NUC image with greater clarity
and uniform pixel intensities.
6.2 Future Scope
In this project, the two-point calibration NUC has been used. For greater accuracy
and further image enhancement, the three-point calibration NUC, an advanced version in
which three calibration temperatures are chosen as references for the non-uniformity
correction, could be used.
In future, this project can also be extended and modified by using a Field Programmable
Gate Array (FPGA) in the place of TigerSHARC digital signal processor in order to further
improve the processing capabilities.
SOURCE CODE
The following code has been written in C language in the VDSP++ Integrated Development &
Debugging Environment (ID & DE).
/* PROGRAM FOR NON-UNIFORMITY CORRECTION */
#include<defts101.h>
#include<stdio.h>
#include <math.h>
#define N 4
int IT;
int temp[7]={16,20,24,28,32,36,40};
float Xij_it400[4]; //Uncorrected input (diff int. times)
float Xij_it500[4];
float Xij_it600[4];
float rmse_it400[7]; //RMSE for diff. integration times
float rmse_it500[7];
float rmse_it600[7];
int k;
float Yij[4]; //Corrected output
float Ym_it400[7]; //Corrected output avg.
float Ym_it500[7];
float Ym_it600[7];
float Vlij[4]; //reference temp. inputs
float Vhij[4];
void NUC(float X[],float Vl,float Vh)
{
int i;
float y=0,Y;
float Y_err[4],err_sqr[4],err=0;
float sqr_avg=0,rms_err=0;
for(i=0;i<4;i++)
{
Yij[i]=( ( (X[i]-Vlij[i])*(Vh-Vl) ) / (Vhij[i]-Vlij[i]) )+Vl;
printf("%f\n",Yij[i]);
y=y+Yij[i];
}
Y=y/4;
printf("%f",Y);
if(IT==400)
Ym_it400[k]=Y;
else if(IT==500)
Ym_it500[k]=Y;
else if(IT==600)
Ym_it600[k]=Y;
for(i=0;i<4;i++)
{
Y_err[i]=Y-Yij[i];
err_sqr[i]=Y_err[i]*Y_err[i];
err=err+err_sqr[i];
}
sqr_avg=err/4;
printf("%f",sqr_avg);
rms_err=sqrt(sqr_avg);
if(IT==400)
rmse_it400[k]=rms_err;
else if(IT==500)
rmse_it500[k]=rms_err;
else if(IT==600)
rmse_it600[k]=rms_err;
printf("RMSE=%f",rms_err);
}
void main()
{
int m,p, Vtl,Vth;
float l=0,h=0;
float Vl, Vh;
printf("Enter lower reference temperature=\n");
scanf("%i",&Vtl);
printf("Enter higher reference temperature=\n");
scanf("%i",&Vth);
while(1)
{
printf("Enter integration time (ms)=\n");
scanf("%d",&IT);
if(IT==400)
{
k=0, l=0,h=0;
printf("Enter uncorrected input at lower ref. temp.=");
for(m=0;m<4;m++)
{
scanf("%f",&Vlij[m]);
l=l+Vlij[m];
}
Vl=l/4;
printf("\n%f\n",Vl);
printf("Enter uncorrected input at higher ref. temp.=\n");
for(m=0;m<4;m++)
{
scanf("%f",&Vhij[m]);
h=h+Vhij[m];
}
Vh=h/4;
printf("\n%f\n",Vh);
for(p=0;p<7;p++)
{
printf("Enter uncorrected input at %d degrees=\n",temp[p]);
for(m=0;m<4;m++)
{
scanf("%f",&Xij_it400[m]);
}
NUC(Xij_it400,Vl,Vh);
k++;
}
}
else if(IT==500)
{
k=0;
l=0,h=0;
printf("Enter uncorrected input at lower ref. temp.=");
for(m=0;m<4;m++)
{
scanf("%f",&Vlij[m]);
l=l+Vlij[m];
}
Vl=l/4;
printf("\n%f\n",Vl);
printf("Enter uncorrected input at higher ref. temp.=");
for(m=0;m<4;m++)
{
scanf("%f",&Vhij[m]);
h=h+Vhij[m];
}
Vh=h/4;
printf("\n%f\n",Vh);
for(p=0;p<7;p++)
{
printf("Enter uncorrected input at %d degrees=",temp[p]);
for(m=0;m<4;m++)
{
scanf("%f",&Xij_it500[m]);
}
NUC(Xij_it500,Vl,Vh);
k++;
}
}
else if(IT==600)
{
k=0;
l=0,h=0;
printf("Enter uncorrected input at lower ref. temp.=");
for(m=0;m<4;m++)
{
scanf("%f",&Vlij[m]);
l=l+Vlij[m];
}
Vl=l/4;
printf("\n%f\n",Vl);
printf("Enter uncorrected input at higher ref. temp.=");
for(m=0;m<4;m++)
{
scanf("%f",&Vhij[m]);
h=h+Vhij[m];
}
Vh=h/4;
printf("\n%f\n",Vh);
for(p=0;p<7;p++)
{
printf("Enter uncorrected input at %d degrees=",temp[p]);
for(m=0;m<4;m++)
{
scanf("%f",&Xij_it600[m]);
}
NUC(Xij_it600,Vl,Vh);
k++;
}
}
else
{
printf("NUC FOR ALL THE INTEGRATION TIMES IS COMPLETED\n");
break;
}
}
}
Deep Convolutional Network evaluation on the Intel Xeon PhiDeep Convolutional Network evaluation on the Intel Xeon Phi
Deep Convolutional Network evaluation on the Intel Xeon Phi
 
Cuda project paper
Cuda project paperCuda project paper
Cuda project paper
 
D0364017024
D0364017024D0364017024
D0364017024
 
imagefiltervhdl.pptx
imagefiltervhdl.pptximagefiltervhdl.pptx
imagefiltervhdl.pptx
 
Engineering Portfolio of Isaac Bettendorf
Engineering Portfolio of Isaac BettendorfEngineering Portfolio of Isaac Bettendorf
Engineering Portfolio of Isaac Bettendorf
 
Technical Documentation_Embedded_Image_DSP_Projects
Technical Documentation_Embedded_Image_DSP_ProjectsTechnical Documentation_Embedded_Image_DSP_Projects
Technical Documentation_Embedded_Image_DSP_Projects
 
The Principle Of Ultrasound Imaging System
The Principle Of Ultrasound Imaging SystemThe Principle Of Ultrasound Imaging System
The Principle Of Ultrasound Imaging System
 
Designing of telecommand system using system on chip soc for spacecraft contr...
Designing of telecommand system using system on chip soc for spacecraft contr...Designing of telecommand system using system on chip soc for spacecraft contr...
Designing of telecommand system using system on chip soc for spacecraft contr...
 
Designing of telecommand system using system on chip soc for spacecraft contr...
Designing of telecommand system using system on chip soc for spacecraft contr...Designing of telecommand system using system on chip soc for spacecraft contr...
Designing of telecommand system using system on chip soc for spacecraft contr...
 
Imaging automotive 2015 addfor v002
Imaging automotive 2015   addfor v002Imaging automotive 2015   addfor v002
Imaging automotive 2015 addfor v002
 
Imaging automotive 2015 addfor v002
Imaging automotive 2015   addfor v002Imaging automotive 2015   addfor v002
Imaging automotive 2015 addfor v002
 
Multi Processor Architecture for image processing
Multi Processor Architecture for image processingMulti Processor Architecture for image processing
Multi Processor Architecture for image processing
 
MAJOR PROJECT
MAJOR PROJECT MAJOR PROJECT
MAJOR PROJECT
 
Cloud Graphical Rendering: A New Paradigm
Cloud Graphical Rendering:  A New ParadigmCloud Graphical Rendering:  A New Paradigm
Cloud Graphical Rendering: A New Paradigm
 
ARPS Architecture
ARPS ArchitectureARPS Architecture
ARPS Architecture
 
Hairong Qi V Swaminathan
Hairong Qi V SwaminathanHairong Qi V Swaminathan
Hairong Qi V Swaminathan
 
ELECTRONIC AND - Copy (1)
ELECTRONIC AND - Copy (1)ELECTRONIC AND - Copy (1)
ELECTRONIC AND - Copy (1)
 
FACE COUNTING USING OPEN CV & PYTHON FOR ANALYZING UNUSUAL EVENTS IN CROWDS
FACE COUNTING USING OPEN CV & PYTHON FOR ANALYZING UNUSUAL EVENTS IN CROWDSFACE COUNTING USING OPEN CV & PYTHON FOR ANALYZING UNUSUAL EVENTS IN CROWDS
FACE COUNTING USING OPEN CV & PYTHON FOR ANALYZING UNUSUAL EVENTS IN CROWDS
 

BDL_project_report

2 TigerSHARC Processor (TS101S) 6
2.1 General Description 7
2.2 Memory Map 8
2.2.1 On chip SRAM Memory 8
2.2.2 External Port 9
2.3 Booting 11
2.3.1 Selecting the boot mode 11
2.3.2 EPROM/FLASH Device Boot 13
2.3.3 Host Boot 14
2.3.4 Link Port Boot 14
2.3.5 No Boot Mode 15
2.4 VisualDSP++ (IDDE) 16
2.4.1 Code development tools 18
2.4.2 Parts of User Interface 19
2.4.3 Create a new project 21
2.4.4 Add a Source file to the project 22
2.4.5 Build and Run the program 23
2.4.6 Open a Plot Window 24
3 Infrared Focal Plane Array 26
3.1 Basic Terms/Definitions 26
3.1.1 Integration Time 26
3.1.2 Responsivity of Detector 26
3.1.3 Pixel 26
3.1.4 Resolution 27
3.1.5 Read Out Integrated Circuit (ROIC) 27
3.1.6 Hybrid and Monolithic sensors 27
3.1.7 Fill factor of FPA 27
3.2 Non-Uniformity in IRFPA 27
3.3 Types of Non-Uniformity Correction techniques 28
3.3.1 Calibration/Source based techniques 28
3.3.2 Scene based techniques 29
3.4 Block Diagram of Test setup 29
3.4.1 Black body 29
3.4.2 IR Sensor 30
3.4.3 Cooling system 30
3.4.4 Test board 30
4 Two Point Non-Uniformity Correction 31
4.1 Calibration/Reference Temperatures 31
4.2 Sensor Non-Uniformity 32
4.3 Graphical representation of NUC 32
4.4 Two point NUC technique 34
4.6 Image Debugging and Analysis Software (IDEAS) 36
4.7 Operation of Test Board 38
4.8 Software Implementation 39
5 Observations and Results 42
5.1 Temperature vs. Corrected pixel output 42
5.2 Temperature vs. RMSE 44
5.3 Real time Examples 46
6 Conclusion & Future Scope 48
References
Appendix

List of Figures

No. Title Page No.
1.1 Von Neumann Architecture 2
1.2 Harvard Architecture 3
1.3 Super Harvard Architecture 3
1.4 Typical DSP Architecture (Analog Devices SHARC DSP) 4
2.1 Functional Block Diagram of TS101S 7
2.2 Memory Map 10
2.3 Single Processor Configuration 12
2.4 PROM Booting 13
2.5 IRQ3-0 Interrupt Vector Table 15
2.6 The VDSP++ Environment 17
2.7 The VDSP++ Environment icon 20
2.8 Step 4 in creating a project; select the processor type 21
2.9 Step 1 to Add a Source File 22
2.10 To build a project 23
2.11 To run a program 23
2.12 Plot Configuration dialog box 24
2.13 Plot settings dialog box 25
3.1 Block diagram of Test Setup 29
4.1 Two pixels output with different gain and offset 32
4.2 Two pixels output with gain compensated 33
4.3 Two pixels output with offset compensated 33
4.4 Two pixels output with gain and offset compensated 34
4.5 Image Debugging & Analysis Software Environment (Selecting the array size) 36
4.6 Image Debugging & Analysis Software Environment (Showing pixel intensities in decimal) 37
5.1 Corrected pixel values for different integration times at different temperatures (Expressions window) 42
5.2 Temperature vs. corrected pixel output 43
5.3 RMSE values for different integration times at different temperatures (Expressions window) 44
5.4 Temperature vs. RMSE 45
5.5 Real time uncorrected (or) raw image 46
5.6 Uniform image obtained after performing 2 point NUC algorithm 46
5.7 Example of two point NUC implementation 47
List of Abbreviations

ATR - Automatic Target Recognition
EO - Electro-Optical
NUC - Non-Uniformity Correction
VDSP - VisualDSP
FPN - Fixed Pattern Noise
IRFPA - Infrared Focal Plane Array
ID&DE - Integrated Development & Debugging Environment
FPA - Focal Plane Array
CPU - Central Processing Unit
ALU - Arithmetic Logic Unit
BGA - Ball Grid Array
SRAM - Static Random Access Memory
SIMD - Single Instruction Multiple Data
VLIW - Very Large Instruction Word
ROIC - Read Out Integrated Circuit
RNU - Residual Non-Uniformities
IDEAS - Image Debugging and Analysis Software
LWIR - Long Wavelength Infra-Red
Acknowledgement

This project work would not have been completed without acknowledging all those who helped us during our project work, and whose constant guidance, support, and encouragement have crowned our efforts with success. It is a great pleasure to express our profound sense of gratitude to our project guide Mr. Vinod Kumar, DGM (Assy) Milan, Bharat Dynamics Limited, and our project mentor Mr. D. Shashi Ratna, Sr. Manager, Bharat Dynamics Limited, for their valuable and inspiring guidance, suggestions, and encouragement throughout the course of our learning experience.

V. Shobha
shobha.vissa20@gmail.com
Chapter 1: Introduction

1.1 Background

The project titled “Non-Uniformity Correction (NUC) in Infrared Focal Plane Arrays (IRFPA) – A Study & Implementation” addresses a central concept in image enhancement for modern thermal imaging systems. The image captured by the IRFPA contains non-uniformity, which can be removed to a large extent using various non-uniformity correction techniques.

The IRFPA is part of an electro-optical system that employs a lens assembly at the front end to capture images both by day and by night, and a high-end DSP for signal processing at the rear end. The individual pixels of the IRFPA respond differently under the same stimulus, which results in non-uniformity. This non-uniformity, also known as fixed pattern noise, degrades the image; it can be corrected by non-uniformity correction algorithms using a black body calibrated at two known temperatures. The two-point NUC algorithm has been implemented using the ADSP-TS101 TigerSHARC DSP simulator, and the results of the implementation are shown in the later chapters.

The project aims at correcting the non-uniformity present in the raw image acquired by the thermal imaging system, thereby restoring the actual image content using source-based techniques. The resulting image shows greater intensity and pixel clarity.

Image non-uniformity correction in IRFPAs is employed in defense applications such as automatic target recognition (ATR) and target tracking by the army, navy, and air force, where the external environment is not always suitable for capturing clear images. The acquired image is therefore made clearer and more suitable for further analysis by using non-uniformity correction techniques.
Other applications and markets for thermal imaging technology include stabilized thermal imaging cameras for law enforcement aircraft, radiometry devices for monitoring industrial systems, and thermal imaging systems for ground-based security and search and rescue.
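The two-point correction outlined above can be sketched as follows. This is a minimal illustrative sketch in Python, not the report's actual implementation (which is C code built in VisualDSP++ for the TS101); the array sizes, pixel values, and reference temperatures are hypothetical. Each pixel's gain and offset are derived from its response to a black body at two reference temperatures, so that the corrected output is uniform at those two points.

```python
import numpy as np

def two_point_nuc(raw_low, raw_high, target_low, target_high):
    """Per-pixel gain/offset from black-body frames at two reference temperatures."""
    gain = (target_high - target_low) / (raw_high - raw_low)
    offset = target_low - gain * raw_low
    return gain, offset

def correct(frame, gain, offset):
    # Corrected output y = g*x + o; uniform at the two calibration points.
    return gain * frame + offset

# Hypothetical 2x2 detector responses at two black-body temperatures
raw_low  = np.array([[100.0, 120.0], [ 90.0, 110.0]])   # e.g. cold reference plate
raw_high = np.array([[200.0, 260.0], [180.0, 240.0]])   # e.g. hot reference plate
gain, offset = two_point_nuc(raw_low, raw_high, 100.0, 200.0)

print(correct(raw_low, gain, offset))    # every pixel -> 100.0
print(correct(raw_high, gain, offset))   # every pixel -> 200.0
```

Between the two calibration temperatures the detector response is only approximately linear, which is why the report later evaluates the residual error (RMSE) across a range of temperatures.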
The usage of an IRFPA makes image capture possible even in the absence of light, which makes this technology extremely useful in the key areas of astronomy and space research.

1.2 Choice of the Digital Signal Processor

One of the biggest bottlenecks in executing DSP algorithms is transferring information to and from memory. This includes data, such as samples from the input signal and the filter coefficients, as well as program instructions, the binary codes that go into the program sequencer. Figure 1.1 below shows how this seemingly simple task is done in a traditional microprocessor. This is often called a Von Neumann architecture, after the Hungarian-American mathematician John von Neumann (1903-1957). A Von Neumann architecture contains a single memory and a single bus for transferring data into and out of the central processing unit (CPU). Multiplying two numbers requires at least three clock cycles, one to transfer each of the three numbers over the bus from the memory to the CPU. The Von Neumann design is quite satisfactory when you are content to execute all of the required tasks serially; in fact, most computers today are of the Von Neumann design. Other architectures are needed only when very fast processing is required.

Fig 1.1: Von Neumann architecture

This leads us to the Harvard architecture, shown in Fig 1.2. The Harvard architecture uses separate memories for data and program instructions, with separate buses for each. Since the buses operate independently, program instructions and data can be fetched at the same time, improving the speed over the single-bus design. Most present-day DSPs use this dual-bus architecture.
Fig 1.2: Harvard architecture

Figure 1.3 illustrates the next level of sophistication, the Super Harvard Architecture. This term was coined by Analog Devices to describe the internal operation of their ADSP-2106x and newer ADSP-211xx families of digital signal processors. These are called SHARC® DSPs, a contraction of the longer term Super Harvard ARChitecture. The idea is to build upon the Harvard architecture by adding features that improve throughput. While the SHARC DSPs are optimized in dozens of ways, two areas are important enough to be included in the figure: an instruction cache and an I/O controller.

Fig 1.3: Super Harvard architecture

The SHARC DSPs provide both serial and parallel communication ports. These are extremely high-speed connections. Dedicated hardware allows these data streams to be
transferred directly into memory (Direct Memory Access, or DMA) without passing through the CPU's registers. In other words, tasks like obtaining a sample or moving the output happen independently of, and simultaneously with, the other tasks; no cycles are stolen from the CPU. The main buses (program memory bus and data memory bus) are also accessible from outside the chip, providing an additional interface to off-chip memory and peripherals.

At the top of the diagram are two blocks labeled Data Address Generator (DAG), one for each of the two memories. These control the addresses sent to the program and data memories, specifying where the information is to be read from or written to. In simpler microprocessors this task is handled as an inherent part of the program sequencer and is quite transparent to the programmer.

Fig 1.4: Typical DSP architecture (Analog Devices SHARC DSP)
The data register section of the CPU is used in the same way as in traditional microprocessors. In the ADSP-2106x SHARC DSPs, there are 16 general-purpose registers of 40 bits each. The math processing is broken into three sections: a multiplier, an arithmetic logic unit (ALU), and a barrel shifter. The multiplier takes the values from two registers, multiplies them, and places the result into another register. The ALU performs addition, subtraction, absolute value, logical operations (AND, OR, XOR, NOT), conversion between fixed- and floating-point formats, and similar functions. Elementary binary operations, such as shifting, rotating, and extracting and depositing segments, are carried out by the barrel shifter.
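The multiplier, ALU, and register file described above are exercised most heavily by multiply-accumulate (MAC) loops such as an FIR filter, the canonical DSP workload. A minimal sketch in Python for illustration only; a real TS101 implementation would be C or assembly exploiting both compute blocks in parallel:

```python
def fir(x, h):
    """Direct-form FIR filter: y[n] = sum over k of h[k] * x[n-k].
    Each output sample costs one multiply-accumulate (MAC) per tap,
    the operation pattern the multiplier/ALU pair is built to sustain."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, coeff in enumerate(h):
            if n - k >= 0:
                acc += coeff * x[n - k]   # multiply (multiplier) + add (ALU)
        y.append(acc)
    return y

print(fir([1.0, 2.0, 3.0, 4.0], [0.5, 0.5]))  # 2-tap moving average
```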
Chapter 2: TigerSHARC Processor (TS101S)

Since NUC is a computationally intensive algorithm, a SHARC-family processor, the TS101, is used for the purpose. The processor boot mode used is EPROM booting, which is explained in the following pages. The IDDE used is VisualDSP++ 4.0. A cycle-accurate simulator for the TS101 is used for implementing NUC, and an emulator is used for transferring the application code to the EPROM.

The ADSP-TS101S TigerSHARC processor is the first member of the TigerSHARC processor family. It is an ultrahigh-performance, static superscalar processor optimized for large signal processing tasks and communications infrastructure. The DSP combines very wide memory widths with dual computation blocks supporting 32- and 40-bit floating-point and 8-, 16-, 32-, and 64-bit fixed-point processing, setting a new standard of performance for digital signal processors. The block diagram (Fig 2.1) of the ADSP-TS101S TigerSHARC processor and some key features include:

• Operating frequency of 300 MHz and an instruction cycle time of 3.3 ns
• 19 mm × 19 mm (484-ball) or 27 mm × 27 mm (625-ball) PBGA (Plastic Ball Grid Array) package
• Dual compute blocks, each consisting of an ALU, multiplier, 64-bit shifter, and 32-word register file with associated data alignment buffers (DABs)
• Dual integer ALUs (IALUs), each with its own 31-word register file for data addressing
• A program sequencer with instruction alignment buffer (IAB), branch target buffer (BTB), and interrupt controller
• Three 128-bit internal data buses, each connecting to one of three 2M-bit memory banks
• On-chip SRAM (6M bits)
• An external port that provides the interface to host processors, multiprocessing space (DSPs), off-chip memory-mapped peripherals, and external SRAM and SDRAM
• A 14-channel DMA controller
• Four link ports
• Two 64-bit interval timers and a timer-expired pin
• An IEEE 1149.1 compliant JTAG test access port for on-chip emulation
Fig 2.1: Functional Block Diagram of TS101S

2.1 General Description

The ADSP-TS101S provides high-performance static superscalar DSP operation, optimized for telecommunications infrastructure and other large, demanding multiprocessor DSP applications. The architecture is superscalar in that the ADSP-TS101S processor's core can execute from one to four 32-bit instructions simultaneously, encoded in a very large instruction word (VLIW) instruction line, using the DSP's dual compute blocks. In addition, the ADSP-TS101S supports SIMD operations in two ways: SIMD compute blocks and SIMD computations. The programmer can direct both compute blocks to operate on the same data (broadcast distribution) or on different data (merged distribution). In addition, each compute block can execute four 16-bit or eight 8-bit SIMD computations in parallel. Using its Single-Instruction, Multiple-Data (SIMD) features, the ADSP-TS101S can perform 2.4 billion 40-bit MACs or 600 million 80-bit MACs per second.

Advantages of the Ball Grid Array package:

• It occupies less space.
• It reduces the number of connections.
• Reliability increases.
• It is smaller, cheaper, and lighter.

2.2 Memory Map

The memory map is divided into four memory areas: host space, external memory, multiprocessor space, and internal memory. Each memory space, except host memory, is subdivided into smaller memory spaces.

2.2.1 On-chip SRAM Memory

The ADSP-TS101S has 6M bits of on-chip SRAM memory, divided into three blocks of 2M bits (64K words × 32 bits). Each block (M0, M1, and M2) can store program, data, or both, so applications can configure memory to suit specific needs. Each internal memory block connects to one of the 128-bit wide internal buses (block M0 to bus MD0, block M1 to bus MD1, and block M2 to bus MD2), enabling the DSP to perform three memory transfers in the same cycle. The DSP's internal bus architecture provides a total memory bandwidth of 14.4G bytes per second, enabling the core and I/O to access eight 32-bit data words (256 bits) and four 32-bit instructions each cycle.

2.2.2 External Port (Off-Chip Memory/Peripherals Interface)

The ADSP-TS101S processor's external port provides the processor's interface to off-chip memory and peripherals.

Host Interface

The ADSP-TS101S provides an easy and configurable interface between its external bus and host processors through the external port. The host can directly read or write the internal memory of the ADSP-TS101S, and it can access most of the DSP registers, including DMA control (TCB) registers.
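The headline throughput figures quoted in the preceding sections follow directly from the 300 MHz clock rate and the bus and datapath widths. A quick sanity check (my arithmetic, illustrative only, not taken from the datasheet text):

```python
CLOCK_HZ = 300e6  # 300 MHz core clock (instruction cycle time 3.3 ns)

# SIMD MAC throughput: each of the 2 compute blocks performs 4 16-bit MACs
# (40-bit accumulation) per cycle, or 1 32-bit MAC (80-bit accumulation).
macs_16bit = 2 * 4 * CLOCK_HZ   # 2.4 billion 40-bit MACs per second
macs_32bit = 2 * 1 * CLOCK_HZ   # 600 million 80-bit MACs per second

# Internal memory bandwidth: three 128-bit buses, one transfer per cycle.
bandwidth_bytes = 3 * (128 // 8) * CLOCK_HZ   # 14.4 GB per second

print(macs_16bit, macs_32bit, bandwidth_bytes)
```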
Multiprocessor Interface

The ADSP-TS101S offers powerful features tailored to multiprocessing DSP systems through the external port and link ports. The external port supports a unified address space that enables direct interprocessor access to each ADSP-TS101S processor's internal memory and registers. The DSP's on-chip distributed bus arbitration logic provides a simple, glueless connection for systems containing up to eight ADSP-TS101S processors and a host processor.

SDRAM Interface

The SDRAM interface provides a glueless interface to standard SDRAMs: 16M bit, 64M bit, 128M bit, and 256M bit. The DSP directly supports a maximum of 64M words × 32 bits of SDRAM. The SDRAM interface is mapped in external memory in the DSP's unified memory map.

EPROM Interface

The EPROM or flash memory interface is not mapped in the DSP's unified memory map. It is a byte address space limited to a maximum of 16M bytes (24 address bits). The EPROM or flash memory interface can be used after boot via DMA.
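The address-space limits quoted above are simply the reach of the stated address widths; a quick arithmetic check (illustrative only):

```python
# EPROM interface: a byte address space addressed by 24 address bits.
EPROM_ADDRESS_BITS = 24
eprom_bytes = 2 ** EPROM_ADDRESS_BITS          # 16,777,216 bytes = 16M bytes

# SDRAM: up to 64M words of 32 bits each.
SDRAM_WORDS, WORD_BITS = 64 * 2**20, 32
sdram_bytes = SDRAM_WORDS * WORD_BITS // 8     # 268,435,456 bytes = 256M bytes

print(eprom_bytes, sdram_bytes)
```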
Fig 2.2: Memory Map

2.3 Booting

Booting is the process of loading the boot loader, initializing memory, and starting the application on the target. The Integrated Development and Debugging Environment (IDDE) provides support for the creation of a bootable image. This image comprises a loader kernel and the user's
application code. The IDDE includes loader kernels specific to each boot type. The boot loader kernels are 256-word assembly source code routines that perform memory initialization on the target. The default boot loader kernels work in conjunction with the loader utility supplied with the IDDE tools. The loader utility takes the user's TigerSHARC processor executable file along with the boot loader kernel executable file and produces a bootable image file. The bootable image file defines how the various blocks of the TigerSHARC processor's internal memory and optional external system memory are to be initialized.

2.3.1 Selecting the Booting Mode

The two modes for booting are master and slave mode. A master boot accesses an EPROM or flash device, and a slave boot is initiated through the link port, or through the external port by a host (another TigerSHARC processor, for example). The state of the external BMS pin determines the booting method: if the BMS pin is sampled low during reset, the result is an EPROM or flash device boot; if the BMS pin is sampled high during reset, the processor goes into idle. When the processor is in the idle state waiting for a host or link boot, any signal from the host or link causes a slave-mode boot. Regardless of which boot mode (master or slave) is used, each shares a common boot process:

• Each DMA channel from which the TigerSHARC processor can boot is automatically configured for a 256-word (32-bit normal word) transfer.
• Those first 256 instructions, called the loader kernel, automatically execute and perform additional DMAs to load the application executable code and data into internal and/or external memory.
• Finally, the loader kernel overwrites itself with the application's first 256 words.
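The common boot process above can be modeled as three steps. This is a toy illustration in Python, not TigerSHARC code; the word values and memory size are made up. The key idea it demonstrates is the last bullet: after loading the application, the 256-word loader kernel replaces itself with the application's first 256 words, leaving no trace of itself in memory.

```python
KERNEL_WORDS = 256

def master_boot(eprom, memory):
    """Toy model of the TigerSHARC master-boot handoff (illustrative only).
    'eprom' holds a 256-word loader kernel followed by the application image."""
    kernel, app = eprom[:KERNEL_WORDS], eprom[KERNEL_WORDS:]
    memory[:KERNEL_WORDS] = kernel                       # 1: auto-DMA puts kernel at addr 0
    memory[KERNEL_WORDS:len(app)] = app[KERNEL_WORDS:]   # 2: kernel DMAs rest of the app
    memory[:KERNEL_WORDS] = app[:KERNEL_WORDS]           # 3: kernel overwrites itself
    return memory

mem = [0] * 1024
eprom = list(range(1000, 1256)) + list(range(2000, 2512))  # kernel + 512-word app
master_boot(eprom, mem)
print(mem[:512] == list(range(2000, 2512)))  # True: app at 0..511, no kernel trace
```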
Fig 2.3: Single Processor Configuration

2.3.2 EPROM/FLASH Device Boot

EPROM boot is selected by default. The BMS pin is used as the strap option for the selection: if the BMS pin is sampled low during reset, the mode is EPROM boot. After reset in EPROM boot, DMA channel 0 is automatically configured to perform a 256-word block transfer from an 8-bit external boot EPROM, starting at address 0, to internal
memory, locations 0x00-0xFF. The DMA channel 0 interrupt vector is initialized to internal memory address 0x0. An interrupt occurs at the completion of the DMA channel 0 transfer, and the TigerSHARC processor starts executing the boot loader kernel at internal memory location 0x0.

Fig 2.4: PROM Booting

The boot loader kernel then brings in the application code and data through a series of single-word DMA transfers. Finally, the boot loader kernel overwrites itself with the application code, leaving no trace of itself in the TigerSHARC processor's internal memory. When this DMA process completes, the IVT entry of DMA channel 0 points to internal memory address 0, allowing the user's application code to begin execution.

2.3.3 Host Boot

Booting the TigerSHARC processor from a 32-bit or 64-bit host processor is performed via the data and address buses of the external port. The BMS pin is used as the strap option for the selection: if the BMS pin is sampled high during reset, the processor goes into idle and master-mode boot DMA is disabled. When
the processor is in the idle state waiting for a host or link boot, any signal from the host or link causes a slave-mode boot.

Host boot uses the TigerSHARC processor's Auto DMA channels. Either Auto DMA channel can be used, since both (AUTODMA0 and AUTODMA1) are active and initialized at reset to transfer 256 words of code and/or data into the TigerSHARC processor's internal memory block 0, locations 0x00-0xFF. The corresponding DMA interrupt vectors are initialized to 0. An interrupt occurs at the completion of the DMA transfer, and the TigerSHARC processor starts executing the boot loader kernel at internal memory location 0x0. These first 256 words are intended to act as a boot loader that initializes the rest of the TigerSHARC processor's internal memory. The boot loader kernel then brings in the application code and data through a series of single-word DMA transfers. Finally, the boot loader kernel overwrites itself with the application code, leaving no trace of itself in internal memory. When this series of DMA processes completes, the IVT entries of Auto DMA channel 0 (and Auto DMA channel 1) point to internal memory address 0, allowing the user's application code to begin execution.

2.3.4 Link Port Boot

Any link port can be used for booting, since all link ports are active and waiting to receive data upon power-up reset or after a hard reset. Link port boot uses the TigerSHARC processor's link port DMA channels. All link port DMAs are initialized to transfer 256 words to the TigerSHARC processor's internal memory block 0, locations 0x00-0xFF. An interrupt occurs at the completion of the DMA transfer, and the TigerSHARC processor starts executing the boot loader kernel at internal memory location 0x0. These first 256 words are intended to act as a boot loader that initializes the rest of internal memory.
The boot loader kernel then brings in the application code and data through a series of single-word DMA transfers. Finally, the boot loader kernel overwrites itself with the application code, leaving no trace of itself in internal memory. When this series of DMA processes completes, the IVT entry of the link port DMA channel points to internal memory address 0, allowing the user's application code to begin execution.

2.3.5 No Boot Mode
No boot is a master boot mode, that is, a boot mode in which the TigerSHARC processor itself starts and controls the external data fetch process. In no boot mode, the TigerSHARC processor starts fetching data from an IRQ vector (external or internal). When a host or link boot mode is selected, the ADSP-TS101 processor enters an idle state after reset, waiting for the host or link port to boot it; however, it does not have to be booted by the host or a link port. If the external interrupts IRQ3-0 are enabled (selected at reset by the IRQEN strap pin), they can be used to force code execution according to the default interrupt vectors.

Fig 2.5: IRQ3-0 Interrupt Vector Table

2.4 VisualDSP++ (Integrated Development & Debugging Environment)

VisualDSP++ is the integrated development and debugging environment (IDDE) of the Analog Devices development tools suite for processors. Its single, integrated project management and debugging environment provides complete graphical control of the edit, build, and debug process. As an integrated environment, you can move easily between editing, building, and debugging activities.
VisualDSP++ provides these features:

• Extensive editing capabilities. Create and modify source files by using multiple-language syntax highlighting, drag-and-drop, bookmarks, and other standard editing operations. View files generated by the code development tools.
• Flexible project management. Specify a project definition that identifies the files, dependencies, and tools that you use to build projects. Create this project definition once, or modify it to meet changing development needs.
• Easy access to code development tools. Analog Devices provides these code development tools: C/C++ compiler, assembler, linker, splitter, and loader. Specify options for these tools by using dialog boxes instead of complicated command-line scripts. Options that control how the tools process inputs and generate outputs have a one-to-one correspondence to command-line switches. Define options for a single file or for an entire project. Define these options once, or modify them as necessary.
• Flexible project build options. Control builds at the file or project level. VisualDSP++ enables you to build files or projects selectively, update project dependencies, or incrementally build only the files that have changed since the previous build. View the status of your project build in progress. If the build reports an error, double-click on the file name in the error message to open that source file; then correct the error, rebuild the file or project, and start a debug session.
• VisualDSP++ Kernel (VDK) support. Add VDK support to a project to structure and scale application development. The Kernel page of the Project window enables you to manipulate events, event bits, priorities, semaphores, and thread types.
• Flexible workspace management. Create up to ten workspaces and quickly switch between them. Assigning a different project to each workspace enables you to build and debug multiple projects in a single session.
• Easy movement between debug and build activities.
Start the debug session and move freely between editing, build, and debug activities.
  • 30. Fig 2.6: The VDSP++ Environment VisualDSP++ reduces debugging time by providing these key features:
- Easy-to-use debugging activities. Debug with one common, easy-to-use interface for all processor simulators and emulators, or hardware evaluation and development boards. Switch easily between these targets.
- Multiple-language support. Debug programs written in C, C++, or assembly, and view your program in machine code. For programs written in C/C++, you can view the source in C/C++ or mixed C/C++ and assembly, and display the values of local variables or evaluate expressions (global and local) based on the current context.
- Effective debug control. Set breakpoints on symbols and addresses and then step through the program's execution to find problems in coding logic. Set watchpoints (conditional breakpoints) on registers, stacks, and memory locations to identify when they are accessed.
- Tools for improving performance. Use tracing, profiling, and linear and statistical profiling to identify bottlenecks in your DSP application and program optimization needs. Use plotting to view data arrays graphically. Generate interrupts, outputs, and inputs to simulate real-world application conditions.
  • 31. 2.4.1 Code Development Tools The code development tools include:
- C/C++ compiler
- Run-time library with over 100 math, DSP, and C run-time library routines
- Assembler
- Linker
- Splitter
- Loader
- Simulator
- Emulator
These tools enable you to develop applications that take full advantage of your processor's architecture. The VisualDSP++ linker supports multiprocessing, shared memory, and memory overlays. The code development tools provide these key features:
- Easy-to-program C, C++, and assembly languages. Program in C/C++, assembly, or a mix of C/C++ and assembly in one source. The assembly language is based on an algebraic syntax that is easy to learn, program, and debug.
- Flexible system definition. Define multiple types of executables for a single type of processor in one Linker Description File (.LDF). Specify input files, including objects, libraries, shared memory files, overlay files, and executables.
- Support for overlays, multiprocessors, and shared memory executables. The linker places code and resolves symbols in multiprocessor memory space for use by multiprocessor systems. The loader enables you to configure multiple processors with less code and faster boot time. Create host, link port, and PROM boot images.
2.4.2 Parts of the User Interface
  • 32. VisualDSP++ presents an intuitive user interface for programming Analog Devices processors. When the VisualDSP++ icon is clicked, the main window appears. This work area contains everything you need to build, manage and debug a project. Within the main application window frame, VisualDSP++ provides:
- Title bar
- Menu bar
- Project window
- Editor window
- Control menu
- Output window
- Toolbars
- Status bar
- Expressions [Hexadecimal] window
- Disassembly window
  • 33. Fig 2.7: The VDSP++ Environment icon VisualDSP++ provides many debugging windows to view what's going on, so the programmer needs to learn only one interface to debug all DSP applications. VisualDSP++ supports ELF/DWARF-2 (Executable and Linkable Format with DWARF-2 debug information) executable files, as well as all executable file formats produced by the linker. 2.4.3 Create a New Project To create a new project:
1. From the File menu, choose New and then Project to open the Project Wizard.
2. Click the browse button to the right of the Directory field to open the Browse for Folder dialog box and select the directory in which the project is to be stored.
3. In the Project name field, enter the project's name and click Next.
4. In the Project: Output type window, choose the processor type; in this case the ADSP-TS101 TigerSHARC processor is used. Then click Next.
5. Click Finish in the Finish window to create the project.
  • 34. Fig 2.8: Step 4 in creating a project; select the processor type 2.4.4 Add Source Files to the Project To add a file to the Source folder of the project for compilation:
1. Right-click on Source Files in the Project window on the left side.
2. Select the 'Add File(s) to Folder' option, which opens the Add Files window.
3. Select the required source file in the Add Files window and click Open, which adds the corresponding file to the Source folder.
  • 36. Fig 2.9: Step 1 to Add a Source File 2.4.5 Build and Run the Program To build the project, choose 'Build project' from the Project menu, or press F7.
  • 38. Fig 2.10: To build a project To run the program, choose 'Run' from the Debug menu, or press F5. Fig 2.11: To run a program 2.4.6 Open a Plot Window To open a plot window:
1. From the View menu, choose Debug Windows and Plot, then choose New to open the Plot Configuration dialog box.
2. In the Plot group box, specify the plot settings: in the Type box, select the type of plot from the drop-down list, and in the Title box, type the title.
3. Enter the data sets to plot; after entering each data set, click Add to add it to the Data sets list on the left of the dialog box.
4. Click OK to apply the changes and open a plot window with these data sets.
5. Right-click in the plot window and choose Modify Settings. On the General page, in the Options group box, select Legend and click OK to display the legend box.
  • 39. Fig 2.12: Plot Configuration dialog box
  • 40. Fig 2.13: Plot settings dialog box
  • 41. Chapter 3: Infrared Focal Plane Array An infrared focal plane array, also known as a staring array, is an array of detectors aligned at the focal plane of the imaging system. Every detector in the IRFPA can have a different response under the same stimulus, known as non-uniformity. This non-uniformity leads to the presence of a fixed pattern noise (FPN) in the resulting images, degrading the resolving capabilities of the thermal imaging system. The most common sources of FPN are inaccuracies in the fabrication process, variations in the read-out electronics and a decrease in the signal intensity at the edges of the image caused by the sensor optics. Infrared (IR) thermal imagers have been used in military applications for many years. An IR thermal imager is a camera that provides a picture of the electromagnetic energy radiated from an object in the IR spectral band. A number of detector technologies have been designed and optimized for imaging in the IR spectral band, each posing unique design challenges. Most modern IR thermal imagers used for detection in the IR spectral bands are based on focal plane arrays (FPA). 3.1 Basic terms/Definitions The basic terms related to the FPA are discussed below: 3.1.1 Integration Time: Integration time is the amount of time for which the system is exposed to the radiation. It varies from detector to detector and is measured in microseconds. 3.1.2 Responsivity of detector: The responsivity of the detector is defined as the ratio of its output signal to its input radiation; this is one of the important parameters that determine the performance of a detector. 3.1.3 Pixel:
  • 42. 'Pixel' is short for picture element. A pixel is a single point in a graphic image. Graphics monitors display pictures by dividing the display screen into thousands (or millions) of pixels arranged in rows and columns; the pixels are so close together that they appear connected. 3.1.4 Resolution: The quality of a display system largely depends on its resolution: how many pixels it can display and how many bits are used to represent each pixel. 3.1.5 Read Out Integrated Circuit (ROIC): A ROIC is an integrated circuit specifically used for reading out detectors of a particular type. ROICs are made compatible with different types of detectors, such as infrared and ultraviolet. 3.1.6 Hybrid and Monolithic sensors: Monolithic sensors are sensors in which the sensing and non-sensing areas are present on the same level, whereas in hybrid sensors the sensing and non-sensing areas are present on different levels. Hybrid sensors offer more pixels and higher resolution than monolithic sensors. 3.1.7 Fill factor of FPA: The fill factor is the ratio of the sensing area to the total pixel area, expressed as a percentage. The sensing area is the area where the pixels are present and the non-sensing area is the area where the connections are made. 3.2 Non-Uniformity in IRFPA To characterize the performance of an IRFPA, FPA-specific parameters such as detector-to-detector uniformity (non-uniformity) and dead pixel count should be assessed. FPAs are made up of a multitude of detector elements, where each individual detector has a different gain and offset that change with time, due to detector-to-detector variability in the FPA fabrication process, sensor operating temperature, temperature of the observed scene, electronic read-out noise, etc. The differences in gain and offset among detectors produce fixed pattern noise (FPN) in the acquired imagery.
  • 43. Causes of non-uniformity:
- Lack of control over the fabrication process
- Signal variation at the edges of the lens system
- Slight variations in cooling
- Variations in ambient or scene temperature
Furthermore, this spatial non-uniformity fluctuates slowly with time due to variations in the FPA temperature, bias voltages and changes in ambient or scene temperature. The goal of non-uniformity correction is to reduce the magnitude of the spatial noise below the temporal noise level. 3.3 Types of Non-Uniformity Correction (NUC) techniques There are mainly two types of NUC techniques: 1. Calibration/source based techniques 2. Scene based techniques 3.3.1 Calibration/Source based techniques In the calibration method, the FPA is calibrated at known reference temperatures using black body data. There are two types of source based NUC techniques: 1. Two point NUC 2. Three point NUC The commonly used technique is the two point calibration method, in which the FPA is calibrated at two known temperatures using black body data. The gain and offset of the detector output are calibrated across the array so that the FPA produces a uniform and radiometrically accurate output at these two reference temperatures. The reference temperatures T1 and T2 should not be chosen too close together, as that would limit the operating temperature range.
  • 44. In two point NUC, the expected pixel output is assumed to follow a straight-line equation within the temperature range. In three point NUC, the expected pixel output is assumed to follow a quadratic law within the temperature range, and the FPA is calibrated at three distinct reference temperatures using black body data. 3.3.2 Scene based techniques Scene based techniques generally use an image sequence and rely on scene parameters such as motion or change in the actual scene, but they do not provide the required radiometric accuracy and are also difficult to implement in real-time applications. 3.4 Block Diagram of Test Setup The test setup (Fig 3.1) consists of a black body with its controller, the electro-optic (IR) sensor with its cooling system, a test board, a PC, and +28 V regulated power supplies.
  • 45. Fig 3.1: Block diagram of Test Setup 3.4.1 Black body There are two types of black bodies:
- Extended area black body
- Point black body
In an extended area black body the entire surface emits radiation; the black body is calibrated and accurate temperature values are noted. A black body controller is used to set various temperatures for the black body, and both absolute and differential temperatures can be set. 3.4.2 IR Sensor This is a photovoltaic sensor sensitive to infrared radiation in the LWIR region, viz. 8-14 micrometers. It is a cooled sensor with a Joule-Thomson cooler. 3.4.3 Cooling system The main principle followed in the cooling system is 'cooling by expansion', and the technique used is the JT (Joule-Thomson) cooler. The detectors are cooled to 77 K (-196 degrees centigrade). Pressurized air is sent through a thin coil into a closed chamber, where the air expands and thus the cooling progresses. A cooled system gives higher resolution than an uncooled system: the uncooled system provides a less clear image, while the cooled image is clearer in comparison.
  • 46. 3.4.4 Test Board The test board is used to transmit commands to and receive signals from the DSP over a serial link. It is used for synthesis and analysis of DSP signals. The video output is displayed on the video monitor provided, and the DC regulated power supplies are set at their respective voltages for the EO system. Chapter 4: Two Point Non-Uniformity Correction To improve the image formed by the electro-optic sensor, the two point calibration technique is used, wherein the FPA is calibrated at two known temperatures using black body data. The gain and offset of the FPA are calibrated across the array so that it produces a uniform and radiometrically accurate output at these two reference temperatures. However, this method requires halting the camera operation and leaves large residual non-uniformities (RNU) away from the calibration points. 4.1 Calibration (or) Reference temperatures The temperatures at which the black body is maintained for the purpose of data acquisition are called reference or calibration temperatures. The selection of the calibration temperatures is very important, and the key factors to be considered are:
- The reference temperatures T1 and T2 should not be chosen too close together, which would ultimately limit the operating temperature range.
- The operating temperature range should not be so large that it gives large residual non-uniformities (RNU) within the operating range.
  • 47. The performance of the algorithm degrades as the temperature range increases, i.e. the two point calibration scheme shows poor performance for large temperature ranges. The temperatures T1 and T2 at which the black body data is captured in this project are 20 and 36 degrees centigrade respectively. The operating temperature of the entire system setup is room temperature, i.e. 27 degrees centigrade. 4.2 Sensor Non-Uniformity The infrared radiation is detected by the infrared sensor present in the EO sensor, and the output of the sensor is given as input to the analog-to-digital converter. The Infrared Focal Plane Array (IRFPA) sensor response is generally modeled as a first order linear relationship between the input irradiance and the detector output. The output response Xij of the pixel (i,j) is

Xij = aij * xij + bij

where aij and bij are the gain and offset non-uniformities associated with the (i,j)th pixel and xij is the irradiance received by the (i,j)th pixel. 4.3 Graphical Representation of Non-Uniformity Correction (NUC) Figure 4.1 below shows two different pixel outputs with change in input irradiance; T1 and T2 are the calibration temperatures which are used to calculate the gain and offset coefficients. The inequalities in those mathematical parameters are due to the presence of non-uniformity in the image. Hence, the non-uniformity correction technique is implemented in order to compensate these two coefficients and obtain a uniform image.
  • 48. Fig 4.1: Two pixels output with different gain and offset The figure shows the output response of two detectors with change in input irradiance, with T1 and T2 the calibration temperatures used to calculate the gain and offset coefficients. In figure 4.1, the slopes and the Y-intercepts of the detector response plots of the two pixels are unequal; that is, neither the gain nor the offset values of the two pixels have been compensated.
  • 49. Fig 4.2: Two pixels output with gain compensated In fig 4.2, the slopes of the two pixels are the same, which means the gains of the pixels have been made equal by the NUC algorithm; but the offset, i.e. the Y-intercept, differs between the pixels and remains uncompensated. Fig 4.3: Two pixels output with offset compensated In figure 4.3, the Y-intercepts of the two pixels are the same, which means the offset values have been made equal by the NUC algorithm; but the gain, i.e. the slope, differs between the pixels and remains uncompensated.
  • 50. Fig 4.4: Two pixels output with gain and offset compensated In the figure above, the straight lines A & B, i.e. the detector outputs of the two pixels, are identical and hence overlap each other. Their Y-intercepts and slopes are the same, which means both the offset values and the gains of the two pixels have been made equal by the NUC algorithm. Hence, this graph shows the corrected pixel output with both gain and offset compensated. In this way the gain and offset values are compensated and the resultant image has better uniformity and clarity. 4.4 Two point Non-uniformity Correction (NUC) Technique Each pixel in the FPA is characterized by its offset level, its sensitivity or gain, and its noise level. If an extended black body is viewed by an IR thermal imager, the levels measured by the individual pixels should all be close to the average level measured. Image non-uniformity correction in the EO sensor is performed using the two point NUC technique. In two point NUC, the measured signal Yij is given by the linear relationship

Yij = aij * Xij + bij

where aij is the gain and bij is the offset.
  • 51. Solving the above equation at the two reference temperatures gives aij and bij as follows. Using these values, the actual values of the offset and gain are calculated and the corresponding graphs are drawn.

aij = (Vh - Vl) / (Vhij - Vlij)
bij = Vl - aij * Vlij

Here Vlij and Vhij are the (i,j)th pixel intensities, and Vl and Vh are the spatial averages of the image frames at the lower and higher reference temperatures respectively, given as

Vl = (1/N) * sum over (i,j) of Vlij
Vh = (1/N) * sum over (i,j) of Vhij

where N = m*n is the total number of pixels in a frame and m and n are the numbers of rows and columns respectively. Using these equations, the corrected pixel output Yij is calculated as

Yij = ((Xij - Vlij) * (Vh - Vl) / (Vhij - Vlij)) + Vl

4.5 Image Debugging and Analysis Software
  • 52. This is the software that is used for capturing the black body image data i.e., the original uncorrected pixel intensity values. The figure below shows the working environment and the auto contrast uncorrected image of the black body. The array size of the FPA can be selected from the Select menu. Here, it is chosen to be a 2x2 array. The array is placed at the centre of the image taken.
  • 53. Fig 4.5: Image Debugging and Analysis Software Environment (Selecting the array size) The image data at a particular pixel is obtained in the decimal format by using this software. The figure below shows how the decimal value of each pixel is obtained by using this software.
  • 55. Fig 4.6: Image Debugging and Analysis Software Environment (Showing pixel intensities in decimal) 4.6 Operation of Test Board Flow chart depicting the operational flow (steps transcribed):
1. Align the black body surface so that it faces the EO sensor.
2. Cool the IRFPA detector using the cooling system.
3. Power up the EO sensor.
4. Set the black body to a low temperature; if it has not yet reached the desired temperature, wait until it does.
5. Capture the black body data using the PC.
6. Set the black body to a higher temperature.
7. Continue the same procedure for the remaining black body temperatures, then stop.
  • 57. 4.7 Software Implementation A flow chart was drawn and software code was written for the two point non-uniformity correction (NUC) of the raw image. The flow for the software implementation (steps transcribed from the flow chart) is:
1. Open the image data in the image acquisition system.
2. Select the gate size (2x2) and place the gate at location (row, column) = (60, 60).
3. Open the file containing the grey levels and convert the hexadecimal grey levels to decimal.
4. Populate the input temperature array at which data is being captured.
5. Populate the higher reference temperature array, looping until the array is fully populated.
6. Populate the lower reference temperature array, looping until the array is fully populated.
7. Populate the uncorrected array at the first temperature.
8. Calculate the high temperature array mean and the low temperature array mean.
9. Calculate the corrected pixel output values using the 2-point NUC correction formula, then the mean of the corrected pixel output.
10. Calculate the error e = Yij - Y, the sum of squares (SOS) of the error samples, SOS/N, and the square root of that value (the RMSE).
11. Store Y(mean) and the RMSE sample in their respective arrays at the current temperature, and repeat from step 7 until all temperature data is entered.
12. Print the temperature samples, corrected output samples and RMSE samples; plot temperature vs. corrected output and temperature vs. RMSE, and repeat for the other integration times.
  • 61. Finally, graphs are plotted for all the integration times. Chapter 5: Observations and Results The observations are taken from the graphs plotted, which are obtained from the implementation of the source code. The following graphs are plotted. 5.1 Temperature vs. Corrected pixel output
  • 62. Fig 5.1: Corrected pixel values for different integration times at different temperatures (Expressions window) The above figure shows the average of corrected pixel values for different integration times at different temperatures obtained using the VDSP++ (IDDE) simulator after implementing the source code for two point non-uniformity correction. They are displayed through the Expressions [Hexadecimal] window inside the VDSP++ environment. These values are plotted against the temperature to observe the nature of the graph.
  • 63. Fig 5.2: Temperature vs. corrected pixel output This graph shows the averages of the corrected pixel outputs for temperatures between 16 and 40 degrees centigrade. It is observed that the graphs for all three integration times are parallel to each other.
  • 64. Fig 5.3: RMSE values for different integration times at different temperatures (Expressions window) The above figure shows the Root Mean Square Error (RMSE) values for different integration times at different temperatures obtained using the VDSP++ (IDDE) simulator after implementing the source code for two point non-uniformity correction. They are displayed through the Expressions [Hexadecimal] window inside the VDSP++ environment. It can be observed that the RMSE values for the calibration temperatures are zero in the arrays.
  • 65. Fig 5.4: Temperature vs. RMSE This graph shows the root mean square error (RMSE) values at different temperatures for the three integration times. Here the reference temperatures have been taken as 20 and 36 degrees centigrade. We can observe from the above graph that the root mean square error at these two temperatures is zero for every integration time.
  • 66. 5.3 Real time Examples Fig 5.5: Real time uncorrected (or) raw image The above image is the real time image of some rocks and mountains that is captured by the IRFPA during the night. It is the raw image captured by the system. The two point NUC algorithm is implemented for its correction. Fig 5.6: Uniform image is obtained after performing 2 point NUC algorithm
  • 67. The figure above is the corrected (or) uniform real time image of the rocks and mountains. As it is observed, this picture is an enhanced and a clearer version. Another example can be seen in the figure below. Fig 5.7: Example of two point NUC implementation
  • 68. Chapter 6: Conclusion & Future Scope 6.1 Conclusion The uncorrected pixel intensity values have been acquired through the electro-optic sensor; these values correspond to the non-uniform image. As the non-uniformity cannot be eliminated in hardware, software code has been developed to correct the non-uniformity present in the image. This software code is based on the principle of two point correction. The uncorrected pixel intensity values of the black body were noted at three different integration times for seven different temperatures. The two point non-uniformity correction algorithm was applied to the raw data at all seven temperatures to obtain the corrected data. The root mean square error was also calculated at all the temperatures; it was observed to be zero at the calibration temperatures and small but nonzero at points away from them, which indicates that the non-uniformity is completely eliminated at the calibration temperatures and reduced at the other temperatures. The slopes of the detector output at the different integration times are equal, indicating that the gain and offset values have been corrected; the resulting NUC image has greater clarity and uniformity. 6.2 Future Scope In this project, two point calibration NUC has been used. For greater accuracy and further image enhancement, three point calibration NUC, an advanced version in which three calibration temperatures are chosen as references for the non-uniformity correction, could be used. In future, this project can also be extended and modified by using a Field Programmable Gate Array (FPGA) in place of the TigerSHARC digital signal processor in order to further improve the processing capabilities.
  • 69. SOURCE CODE The following code has been written in the C language in the VisualDSP++ Integrated Development & Debugging Environment (IDDE).

/* PROGRAM FOR NON-UNIFORMITY CORRECTION */
#include <defts101.h>
#include <stdio.h>
#include <math.h>
#define N 4

int IT;
int temp[7] = {16, 20, 24, 28, 32, 36, 40};
float Xij_it400[4];   // Uncorrected inputs (different integration times)
float Xij_it500[4];
float Xij_it600[4];
float rmse_it400[7];  // RMSE for the different integration times
float rmse_it500[7];
float rmse_it600[7];
int k;
float Yij[4];         // Corrected output
float Ym_it400[7];    // Corrected output averages
float Ym_it500[7];
float Ym_it600[7];
float Vlij[4];        // Reference temperature inputs
float Vhij[4];

void NUC(float X[], float Vl, float Vh)
{
    int i;
    float y = 0, Y;
    float Y_err[4], err_sqr[4], err = 0;
    float sqr_avg = 0, rms_err = 0;
    for (i = 0; i < 4; i++)
    {
        Yij[i] = (((X[i] - Vlij[i]) * (Vh - Vl)) / (Vhij[i] - Vlij[i])) + Vl;
        printf("%f\n", Yij[i]);
        y = y + Yij[i];
    }
    Y = y / 4;
    printf("%f\n", Y);
    if (IT == 400) Ym_it400[k] = Y;
    else if (IT == 500) Ym_it500[k] = Y;
    else if (IT == 600) Ym_it600[k] = Y;
    /* The remainder of this function is missing from the extracted report
       (slide 71); the RMSE steps below are reconstructed from the flow chart
       in section 4.7 and the variables declared above. */
    for (i = 0; i < 4; i++)
    {
        Y_err[i] = Yij[i] - Y;
        err_sqr[i] = Y_err[i] * Y_err[i];
        err = err + err_sqr[i];
    }
    sqr_avg = err / 4;
    rms_err = sqrtf(sqr_avg);
    if (IT == 400) rmse_it400[k] = rms_err;
    else if (IT == 500) rmse_it500[k] = rms_err;
    else if (IT == 600) rmse_it600[k] = rms_err;
}

int main(void)
{
    int m, p, Vtl, Vth;
    float l, h, Vl, Vh;
    /* The opening of main() was also on the missing slide; it is
       reconstructed to match the prompts that follow. */
    printf("Enter lower reference temperature=\n");
    scanf("%i", &Vtl);
    printf("Enter higher reference temperature=\n");
    scanf("%i", &Vth);
    while (1)
    {
        printf("Enter integration time (ms)=\n");
        scanf("%d", &IT);
        if (IT == 400)
        {
            k = 0; l = 0; h = 0;
            printf("Enter uncorrected input at lower ref. temp.=");
            for (m = 0; m < 4; m++)
            {
                scanf("%f", &Vlij[m]);
                l = l + Vlij[m];
            }
            Vl = l / 4;
            printf("\n%f\n", Vl);
            printf("Enter uncorrected input at higher ref. temp.=\n");
            for (m = 0; m < 4; m++)
            {
                scanf("%f", &Vhij[m]);
                h = h + Vhij[m];
            }
            Vh = h / 4;
            printf("\n%f\n", Vh);
            for (p = 0; p < 7; p++)
            {
                printf("Enter uncorrected input at %d degrees=\n", temp[p]);
                for (m = 0; m < 4; m++)
                    scanf("%f", &Xij_it400[m]);
                NUC(Xij_it400, Vl, Vh);
                k++;
            }
        }
        else if (IT == 500)
        {
            k = 0; l = 0; h = 0;
            printf("Enter uncorrected input at lower ref. temp.=");
            for (m = 0; m < 4; m++)
            {
                scanf("%f", &Vlij[m]);
                l = l + Vlij[m];
            }
            Vl = l / 4;
            printf("\n%f\n", Vl);
            printf("Enter uncorrected input at higher ref. temp.=");
            for (m = 0; m < 4; m++)
            {
                scanf("%f", &Vhij[m]);
                h = h + Vhij[m];
            }
            Vh = h / 4;
            printf("\n%f\n", Vh);
            for (p = 0; p < 7; p++)
            {
                printf("Enter uncorrected input at %d degrees=", temp[p]);
                for (m = 0; m < 4; m++)
                    scanf("%f", &Xij_it500[m]);
                NUC(Xij_it500, Vl, Vh);
                k++;
            }
        }
        else if (IT == 600)
        {
            k = 0; l = 0; h = 0;
            printf("Enter uncorrected input at lower ref. temp.=");
            for (m = 0; m < 4; m++)
            {
                scanf("%f", &Vlij[m]);
                l = l + Vlij[m];
            }
            Vl = l / 4;
            printf("\n%f\n", Vl);
            printf("Enter uncorrected input at higher ref. temp.=");
            for (m = 0; m < 4; m++)
            {
                scanf("%f", &Vhij[m]);
                h = h + Vhij[m];
            }
            Vh = h / 4;
            printf("\n%f\n", Vh);
            for (p = 0; p < 7; p++)
            {
                printf("Enter uncorrected input at %d degrees=", temp[p]);
                for (m = 0; m < 4; m++)
                    scanf("%f", &Xij_it600[m]);
                NUC(Xij_it600, Vl, Vh);
                k++;
            }
        }
        else
        {
            /* Any other value ends the program. */
            printf("NUC FOR ALL THE INTEGRATION TIMES IS COMPLETED");
            break;
        }
    }
    return 0;
}
  • 77. Appendix: References
[1] The Scientist and Engineer's Guide to Digital Signal Processing, Chapters 1 and 28: description of the architectures involved in designing the TigerSHARC processor.
[2] http://www.analog.com: the TigerSHARC processor data and the associated data sheets were obtained from this website.
  • 78. [3] http://datasheets360.com: this website provided the various data sheets required for components used in the project.