Principles of Computed
Radiography
Computed Radiography
• a “cassette-based” system that uses a special
solid-state detector plate instead of a film
inside a cassette.
• The use of CR requires the CR cassettes and
phosphor plates, the CR readers and
technologist quality control workstation, and a
means to view the images, either a printer or
a viewing station.
Computed Radiography System
• Three major components
• Phosphor Imaging Plates
– To acquire x-ray image projections
• PIP Reader (scanner)
– To extract the electronic latent image
• Workstation
– For pre and post processing of the image.
Historical Perspective
• 1973 – George Luckey, a research scientist, filed a patent application titled Apparatus and Method for Producing Images Corresponding to Patterns of High Energy Radiation.
• 1975 – George Luckey's patent (U.S. Patent 3,859,527) was approved, and Kodak patented the first scanned storage phosphor system that gave birth to modern computed radiography.
Historical Perspective
• 1980s – many companies filed patent applications building on George Luckey's invention.
• 1983 – Fuji Medical Systems was the first to complete and commercialize a CR system
• In the early 1990s, CR began to be installed at a
much greater rate because of the technological
improvements that had occurred in the decade
since its introduction.
• The first system consisted of a phosphor storage
plate, a reader, and a laser printer to print the
image onto film.
Structure and Mechanism
• The CR cassette contains a solid-state plate
called a photostimulable storage phosphor
imaging plate (PSP) or (IP) that responds to
radiation by trapping energy in the locations
where the x-rays strike.
Structure and Mechanism
• Photostimulable Luminescence
• The emission of light by a phosphor that has stored energy from a prior radiation exposure, when it is stimulated by a light source of a different wavelength
Imaging Plate
• It is housed in a rugged cassette that appears similar to a screen-film cassette.
• It is handled in the same manner as a screen-film cassette.
Imaging Plate
Imaging plate
Protective layer: This is a very thin, tough, clear
plastic that protects the phosphor layer from
handling trauma.
Imaging Plate
Phosphor layer: This is the active layer. This is
the layer of photostimulable phosphor that
traps electrons during exposure. It is typically
made of barium fluorohalide phosphors.
Imaging Plate
Reflective Layer - This is a layer that sends light in
a forward direction when released in the cassette
reader. This layer may be black to reduce the
spread of stimulating light and the escape of
emitted light
Imaging Plate
Conductive layer: This layer grounds the plate to
reduce static electricity problems and to absorb
light to increase sharpness.
Imaging Plate
Support layer: This is a semirigid material that
provides the imaging sheet with strength and is
a base for coating the other layers.
Imaging Plate
Backing layer: This is a soft polymer that protects the back of the cassette.
Imaging Plate
• Cassettes contain
barcode label on the
cassette or on the
imaging plate through a
window in a cassette.
• Label enables
technologist to match
information with
patient identifying
barcode.
CR Image Processing
1. When the photostimulable phosphor (PSP) screen is exposed to x-rays, energy is absorbed by the phosphor crystals.
2. After the exposure the IP is inserted into a CR reader
3. The IP is processed by a scanning system or reader
which
1. Extracts the PSP screen from the cassette
2. Moves the screen across a high intensity scanning
laser beam
3. Blue violet light is emitted via PSL
4. The emitted light is read by the photomultiplier tube, which converts the light into an electric signal.
CR Image Processing
4. The electronic signal is converted into a digital
format for manipulation, enhancement, viewing
and printing if desired.
5. The PSP screen is erased by a bright white
light inside the reader, reloaded in the cassette
and is ready for the next exposure.
• The white light empties all remaining energy traps, allowing the plate to be reused
Computed Radiography Acquisition
Latent Image Formation
1. The incident x-ray beam interacts with the
photostimulable phosphors that are in the
active layer of the imaging plate.
2. The x-ray energy is absorbed by the
phosphor and the absorbed energy excites
the europium atoms.
3. The electrons are raised to a higher energy state and are trapped in so-called phosphor centers in a metastable state
The CR Reader
• The CR reader is composed of mechanical,
optical and computer modules.
Mechanical Features
• When the CR cassette is inserted into the CR
reader, the IP is removed and is fitted to a
precision drive mechanism.
• There are two scan directions
– Fast Scan
• The movement of the laser across the imaging plate
• Aka “scan”
– Slow Scan
• The movement of the imaging plate through the reader
• Aka “translation or subscan direction”
Optical Features
• Components of the optical subsystem include
the laser, beam shaping optics, light collecting
optics, optical filters and a photodetector.
• The laser is used as the source of stimulating
light that spreads as it travels to the rotating
or oscillating reflector.
Optical Features
• The laser/light beam is focused onto a reflector by a lens system that keeps the beam diameter at about 100 µm
• Special beam-shaping optics keep the beam at a constant size, shape, speed, and intensity
• The laser beam is deflected across the IP.
• The reader scans the plate with red light in a zigzag or raster pattern
• The light emitted from the IP is channeled into a funnel-shaped fiber-optic collection assembly and directed to a photodetector (PMT, photodiode, or CCD), which sends the signal to the ADC.
Computer Control
• The output of the optical collection assembly is a time-varying analog signal that is transmitted to a computer system that has multiple functions.
• The analog signal is processed for amplitude scale
and compression.
• It shapes the final signal before the image is
formed.
• The analog signal is digitized in consideration of
proper sampling and quantization.
Computer Control
• The image buffer usually is a hard disc. This is
the place where a completed image can be
stored temporarily until it is transferred to a
workstation for interpretation or to an
archival computer.
Visible Image Formation
1. The CR cassette is inserted into the CR reader
2. The CR reader automatically extracts the imaging plate from the special cassette
3. A finely focused beam of infrared laser light with a beam diameter of 50 to 100 µm is directed at the PSP.
4. The energy of the laser light is absorbed at the
phosphor centers and the trapped electrons are
released.
5. The released electrons are absorbed by the europium atoms, which then release blue-violet light (photostimulable luminescence)
Film Screen Radiography vs Computed Radiography
• Exposure medium – Film: film; CR: imaging plate
• Processing – Film: darkroom conditions and chemistry required; CR: no darkroom conditions or chemistry required
• Processing time – Film: 8 minutes; CR: 1–3 minutes
• Evaluation – Film: film viewer; CR: computer with viewing/analysis software
• Archiving – Film: film archive room (humidity and temperature controlled); CR: PC, cloud/remote network server
• Availability – Film: unique master copy; CR: unlimited copies with the possibility of access from any location
Direct Digital Radiography
• Most digital radiography (cassette-less)
systems use an x-ray absorber material
coupled to a flat panel detector or a charged
coupled device (CCD) to form the image.
• DR uses an array of small solid state detectors
to convert incident x-ray photons to directly
form the digital image.
• With a DR system, no handling of a cassette is required, as this is a “cassette-less” system.
Direct Digital Radiography
• DR can be divided into two categories: Indirect
capture and direct capture.
• Direct capture converts the incident x-ray
energy directly into an electrical signal.
• Indirect capture digital radiography devices
absorb x-rays and convert them into light.
CR vs DR
• Cost – CR: inexpensive to moderate; DR: expensive
• Size – CR: portable but generally practice based; DR: portable for field use or static for practice use
• Processing – CR: 1–3 minutes; DR: real time
• Plate – CR: phosphor screen in cassette; DR: amorphous silicon connected to the computer
• Evaluation – CR and DR: computer with viewing/analysis software
• Archiving – CR and DR: to PC archive, external hard drive, or DVD
Image Processing
Digital Image Processing
• It means the processing of images using a
digital computer.
• All digital radiography imaging modalities
utilize digital image processing as a central
feature of their operations.
• After the raw image data are extracted from
the digital receptor and converted to digital
data, the image must be computer processed
before its display and diagnostic
interpretation
Digital Image Sampling
• In both PSP and FPD, after x-rays have been
converted into electrical signals, these signals
are available for processing and manipulation.
– Preprocessing
• deal with applying corrections to the raw data
– Postprocessing
• address the appearance of the image displayed on a
monitor for viewing and interpretation by a radiologist.
Pre-processing techniques
• Intended to correct the raw data collected
from bad detector elements that would create
problems in the proper functioning of the
detector.
• Flat field image
– May contain artifacts
• Artifacts can be corrected by a pre-processing
technique referred to as flat-fielding.
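A minimal sketch of the flat-fielding idea in Python/NumPy may help make the correction concrete (the array names, the gain-map normalization, and the synthetic numbers are illustrative assumptions, not a vendor's actual calibration routine):

```python
import numpy as np

def flat_field_correct(raw, flat, dark=None):
    """Correct detector non-uniformity using a flat-field (gain) image.

    raw  : raw image of the patient/object
    flat : image of a uniform exposure with no object (calibration acquisition)
    dark : optional offset image acquired with no exposure
    """
    raw = raw.astype(float)
    flat = flat.astype(float)
    if dark is not None:
        raw = raw - dark
        flat = flat - dark
    gain = flat / flat.mean()          # per-pixel gain map, ~1.0 on average
    gain[gain == 0] = 1.0              # avoid dividing by dead (zero) elements
    return raw / gain                  # equalized ("flat-fielded") image

# Tiny demonstration with synthetic data
rng = np.random.default_rng(0)
flat = 1000 + rng.normal(0, 50, size=(4, 4))    # non-uniform detector response
raw = flat * 0.5                                 # uniform object seen through that response
print(np.round(flat_field_correct(raw, flat)))   # ~500 everywhere after correction
```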
Post processing
• The image is converted into the “for
presentation” image that has better contrast.
• First step – exposure recognition and segmentation of the preprocessed raw data
– using algorithms and histogram analysis
• Next step - scaling the histogram
• Last Step - contrast enhancement
Histogram Analysis
• This is an image processing technique
commonly used to identify the edges of the
image and assess the raw data prior to image
display.
• A histogram is a graphic representation of a
data set.
• The stored histogram models have values of
interest (VOI)
Histogram Analysis
• Failure to find the collimation edges can result
in incorrect data collection
• Equally important is centering anatomy to the
center of the imaging plate.
– This ensures that appropriate recorded intensities
are located
Histogram analysis
• In CR imaging, the entire imaging plate is scanned
to extract the image from the photostimulable
phosphor.
• The computer identifies the exposure field and
edges of the images.
• If at least three edges are not identified, all data, including raw exposure or scatter outside the field, may be included in the histogram, resulting in a histogram analysis error.
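The histogram step can be sketched as follows; restricting the analysis to a field mask and taking the 1st–99th percentiles as the values of interest are illustrative assumptions only, not any vendor's actual algorithm:

```python
import numpy as np

def histogram_voi(image, field_mask, low_pct=1, high_pct=99):
    """Build a histogram of the exposed field only and pick a VOI range.

    field_mask marks pixels inside the recognized collimation edges;
    pixels outside the field (raw exposure, scatter) are excluded so
    they do not skew the analysis.
    """
    in_field = image[field_mask]
    counts, bin_edges = np.histogram(in_field, bins=256)
    voi_min, voi_max = np.percentile(in_field, [low_pct, high_pct])
    return counts, (voi_min, voi_max)

# Synthetic example: anatomy in the center, unattenuated exposure at the borders
rng = np.random.default_rng(1)
img = rng.normal(3000, 100, (100, 100))        # raw values outside the body part
img[20:80, 20:80] = rng.normal(1200, 300, (60, 60))
mask = np.zeros_like(img, dtype=bool)
mask[10:90, 10:90] = True                      # pretend these are the detected field edges
_, voi = histogram_voi(img, mask)
print(voi)
```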
Exposure Field Recognition
• This is one of the most important pre-processing methods in CR.
• It may also be referred to as exposure data
recognition (Fujifilm Medical Systems) and
segmentation (Carestream).
• The purpose of exposure recognition is to identify
the appropriate raw data values to be used for
image grayscale rendition and to provide an
indication of the average radiation exposure to
the IP CR detector.
Automatic rescaling
• It means that images are produced with
uniform density and contrast, regardless of
the amount of exposure.
• rescaling errors occur for a variety of reasons
and can result in poor-quality digital images.
Look-Up Table
• It provides a method of altering the image to change the display of the digital image in various ways.
• It provides the means to alter the brightness and grayscale of the digital image using computer algorithms.
Image Enhancement Parameters
• Gradient Processing
– Brightness
– Contrast
• Windowing is intended to change the
contrast and brightness of an image.
– WW is used to change the contrast of the image
– WL is used to change the image brightness
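A minimal sketch of how window width and level might map stored pixel values to display values (a simple linear mapping to 8 bits is assumed here; real systems apply vendor-specific curves):

```python
import numpy as np

def window_image(pixels, window_level, window_width):
    """Map pixel values to 8-bit display values.

    window_level (center) sets brightness; window_width sets contrast.
    A narrower width gives a steeper mapping, hence higher displayed contrast.
    """
    low = window_level - window_width / 2
    high = window_level + window_width / 2
    scaled = (pixels - low) / (high - low)           # 0..1 inside the window
    return np.clip(scaled * 255, 0, 255).astype(np.uint8)

pixels = np.array([500, 1000, 1500, 2000, 2500])
print(window_image(pixels, window_level=1500, window_width=2000))  # wide window: lower contrast
print(window_image(pixels, window_level=1500, window_width=500))   # narrow window: higher contrast
```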
Brightness
• the brightness level displayed on the computer
monitor can be easily altered to visualize the
range of anatomic structures recorded.
• This is accomplished by the windowing function
• The window level (or center) sets the midpoint of
the range of brightness visible in the image.
• There is direct relationship exists between
window level and image brightness
Contrast
• The number of different shades of gray that
can be stored and displayed by a computer
system is termed grayscale.
• Contrast resolution is used to describe the
ability of the imaging system to distinguish
between objects that exhibit similar densities.
• Window width is a control that adjusts the
radiographic contrast
Image Enhancement Parameters
• Frequency Processing
– Smoothing
– Edge Enhancement
Smoothing
• Also known as low pass filtering
• occurs by averaging each pixel’s frequency
with surrounding pixel values to remove
high-frequency noise.
• a postprocessing technique that suppresses
image noise (quantum noise).
• Low-pass filtering is useful for viewing small
structures such as fine bone tissues.
Edge Enhancement
• Also known as high-pass filtering
• It occurs when fewer pixels in the
neighborhood are included in the signal
average.
• a postprocessing technique that improves the
visibility of small high-contrast structures.
• High-pass filtering is useful for enhancing large
structures like organs and soft tissues
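Both frequency-processing operations can be illustrated with simple convolution kernels (the 3×3 kernels and synthetic image below are illustrative only; commercial processing uses proprietary, multi-scale filters):

```python
import numpy as np
from scipy import ndimage

# 3x3 averaging (low-pass) kernel: each pixel becomes the mean of its neighborhood,
# which suppresses high-frequency noise but softens edges (smoothing).
smooth_kernel = np.ones((3, 3)) / 9.0

# 3x3 high-pass kernel: the central pixel is weighted against its neighbors,
# which amplifies local differences (edge enhancement) but also amplifies noise.
edge_kernel = np.array([[-1, -1, -1],
                        [-1,  9, -1],
                        [-1, -1, -1]], dtype=float)

rng = np.random.default_rng(2)
image = rng.normal(100, 10, size=(64, 64))         # noisy synthetic image

smoothed = ndimage.convolve(image, smooth_kernel, mode="reflect")
edges = ndimage.convolve(image, edge_kernel, mode="reflect")

# noise (standard deviation) drops after smoothing and rises after high-pass filtering
print(image.std(), smoothed.std(), edges.std())
```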
Exposure Indicators
• It provides a numeric value indicating the level of
radiation exposure to the digital IR.
• In CR, the exposure indicator value represents the
exposure level to the imaging plate, and the
values are vendor specific.
• it is a useful tool to address the problem of
“exposure creep”
• Exposure indicators have been standardized; the International Electrotechnical Commission (IEC) and the AAPM have published the details of the standardized exposure indicator.
Dose Area product
• It is the quantity that reflects not only the
dose but also the volume of tissue irradiated.
• It is an indicator of risk and is expressed in cGy·cm²
• DAP increases with increasing field size
– Directly proportional to each other
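A quick worked example of that relationship (the entrance dose value is an assumption chosen only for illustration, not a reference level):

```python
# Dose-area product as dose times irradiated field area (illustrative numbers only)
dose_cgy = 0.2                     # assumed entrance dose in cGy
field_10x10 = 10 * 10              # field area in cm^2
field_20x20 = 20 * 20

print(dose_cgy * field_10x10)      # 20 cGy*cm^2
print(dose_cgy * field_20x20)      # 80 cGy*cm^2 -> larger field, larger DAP, larger risk
```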
Vendor Specific Exposure Indicator
• Fuji and Konica use sensitivity (S) numbers, and
the value is inversely related to the exposure to
the plate.
• Carestream (Kodak) uses exposure index (EI)
numbers; the value is directly related to the
exposure to the plate, and the changes are
logarithmic expressions
• Agfa uses log median (lgM) numbers; the value is
directly related to exposure to the plate, and
changes are also logarithmic expressions
Basic Principles of Digital
Radiography
Digital Radiography
• It is any image acquisition process that
produces an electronic image that can be
viewed and manipulated on a computer.
• This means both computed radiography and
digital radiography
• In radiology, the term was first used in the 1970s for CT.
Digital Image Characteristics
• A digital image is a matrix of picture elements
or pixels.
– A matrix is a box of cells with numeric value
arranged in rows and columns.
– The numeric value represents the level of
brightness or intensity at that location in the
image.
– An image is formed by a matrix of pixels. The size
of the matrix is described by the number of pixels
in the rows and columns
Digital image Characteristic
Picture Elements or Pixels
• Each pixel in the matrix is capable of representing a
wide range of shades of gray from white to black.
• Pixel Pitch - distance between center of one pixel and
the center of an adjacent pixel.
• Bit depth
– the number of bits per pixel that determines the shade of
the pixel
– bit depth is expressed as 2 to the power of n
– Most digital radiography systems use an 8, 10, or 12 bit
depth
• The level of gray will be a determining factor in the
overall quality of the image.
Picture Elements or Pixels
[Figure: illustration of pixels and pixel pitch]
Matrix
• A square arrangement of numbers of columns
and rows
• In Digital Imaging, it corresponds to discrete
values.
• Each box within the matrix corresponds to
– A specific location in the image
– A specific area of the patient’s tissue.
Field of View
• It describes how much of the patient is
imaged in the matrix.
• The larger the FOV, the greater the amount of
body part is included in the image
• The matrix size and the FOV are independent.
– Changes in either the FOV or the matrix size will
change the pixel size.
Relationship of FOV, Matrix size and
Pixel Size
• If the FOV increases and the matrix size remains the same, the pixel size increases.
– FOV and pixel size are directly proportional to each other
• If the FOV remains the same and the matrix size increases, the pixel size decreases
– Matrix size and pixel size are inversely proportional to each other.
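A quick worked example of the pixel size = FOV / matrix size relationship (the FOV and matrix values are illustrative only):

```python
def pixel_size_mm(fov_mm, matrix):
    """Pixel size along one dimension: FOV divided by the number of pixels."""
    return fov_mm / matrix

# Same matrix, larger FOV -> larger pixels (directly proportional)
print(pixel_size_mm(fov_mm=200, matrix=1024))   # ~0.195 mm
print(pixel_size_mm(fov_mm=400, matrix=1024))   # ~0.391 mm

# Same FOV, larger matrix -> smaller pixels (inversely proportional)
print(pixel_size_mm(fov_mm=400, matrix=2048))   # ~0.195 mm
```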
Matrix size and Imaging Plate
• CR equipment vary in the method of sampling
IPs of different sizes.
• If the spatial resolution is fixed, the image
matrix size is simply proportional to the IP
size.
• If the matrix size is fixed, changing the size of
the IP would affect the spatial resolution of
the digital image.
Detective Quantum Efficiency
• It is a measure of how efficiently a digital detector converts the x-rays collected from the patient into a useful image.
• The DQE for CR is much better than for
film-screen systems.
• The DQE for a perfect digital detector is 1 or
100%
Exposure Index
• The exposure index is a measure of the
amount of exposure on the image receptor.
• In screen-film radiography, it is clear from the film whether the image is too light or too dark.
• In digital radiography, the image brightness can be altered, so brightness alone no longer indicates the exposure to the receptor.
Spatial Resolution
• The ability of the imaging system to allow two adjacent structures to be visualized as separate; also described as the sharpness or distinctness of structures in the image.
• Spatial frequency
– It describes the spatial resolution of the image
and is expressed in line pairs/mm
– It does not refer to the size of the image but to
the line pair
Spatial Resolution
• Modulation Transfer Function (MTF)
– Measures the ability of the system to preserve
signal contrast.
– Ideal expression of digital detector image
resolution
– Higher MTF values with Higher Spatial
Frequencies will show better spatial resolution
– Higher MTF values with Low Spatial Frequency will
show better contrast resolution.
Spatial Resolution
• Digital imaging
– described as the ability of an imaging system to
accurately display objects in two dimensions.
• In film/screen imaging the crystal size and
thickness of the phosphor layer determine
resolution; in digital imaging pixel size will
determine resolution.
Contrast Resolution
• It is the ability to distinguish shades of gray
from black to white.
• All digital imaging systems have a greater
contrast resolution than screen film
radiography.
• The principal descriptor for contrast
resolution is called the dynamic range
Dynamic Range
• It refers to the number of gray shades that an imaging system can produce.
• It refers to the range of exposure intensities an
image receptor can accurately detect.
• The dynamic range of digital imaging system is
identified by the bit capacity of each pixel.
• Digital IR has a large exposure latitude (wider
dynamic range)
• Typical digital systems will respond to exposures as low as 100 µR and as high as 100 mR
Signal to Noise Ratio
• It is a method of describing the strength of the
radiation exposure compared with the
amount of noise apparent in a digital image.
• Increasing the SNR improves the quality of the
digital image.
Digital Receptors
• DR detectors fall into two families: direct capture (a non-scintillation photoconductor layer read out by a TFT array) and indirect capture (a scintillation layer read out by a TFT array or a CCD).
Digital Receptors
• Flat Panel Detectors
– solid-state IRs that are constructed with layers in
order to receive the x-ray photons and convert
them to electrical charges for storage and readout
– Signal storage, signal readout, and digitizing
electronics are integrated into the flat panel
device
– TFT array is divided into square detector elements
(DEL), each having a capacitor to store electrical
charges and a switching transistor for readout
Digital Receptors
• Flat Panel Detectors
– Flat panel systems are highly dose-efficient and
provide quicker access to images compared with
CR and film-screen.
– spatial resolution of flat panel receptors is
generally superior to the spatial resolution of CR.
• A system that uses a smaller DEL size has improved
spatial resolution.
Indirect: Thin Flat Panel
• Layers (top to bottom): scintillation layer, photodiode layer, thin-film transistor (TFT) array
Indirect: Thin Flat Panel
• Scintillation Layer
• This is the layer wherein x-rays are converted into a small
burst of light energy.
• It is made from either CsI (cesium iodide) or Gd2O2S (gadolinium oxysulfide)
• Photodiode Layer
– It converts the incoming light photons into electric
charge.
– Made up of Amorphous Silicon
Indirect: Thin Flat Panel
• TFT layer
– It is composed of an array or matrix of detector elements (DELs)
– Each DEL contains a capture element, or pixel detector, which is the active element within each DEL
Indirect Capture: Thin Film Transistor
[Figure: detector element showing the storage capacitor and the switching transistor]
Fill Factor
• The ability of each DEL to produce high spatial resolution is determined by the percentage of active pixel area within each DEL
• It is expressed as: fill factor = sensing area of the pixel / area of the pixel
• The fill factor affects both the spatial resolution and the contrast resolution
• Fill factor, spatial resolution, and contrast resolution have a direct relationship.
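A short sketch of the fill-factor formula above (the DEL and sensing-area dimensions are invented for illustration):

```python
def fill_factor(sensing_area_um2, del_area_um2):
    """Fraction of each detector element (DEL) that actually senses signal."""
    return sensing_area_um2 / del_area_um2

# Illustrative 200 um x 200 um DEL whose readout electronics occupy part of the area
del_area = 200 * 200
print(fill_factor(sensing_area_um2=32000, del_area_um2=del_area))   # 0.8 -> 80% fill factor
print(fill_factor(sensing_area_um2=20000, del_area_um2=del_area))   # 0.5 -> less signal per DEL
```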
Charged Couple Device
• It was developed in the 1970’s as a highly
sensitive device for military use.
• It is the oldest indirect conversion radiography
system to acquire a digital image.
• It is a silicon-based semiconductor
• CCD has three principal advantages.
– Sensitivity
– Dynamic Range
– Size
Charged Couple Device
• Most chips range from 1 to 2 cm with pixel sizes of 100 µm × 100 µm
• The image has to be matched to the size of
the CCD, which implies that the image must
be reduced in size.
• CCD technology uses lenses or fiber optics so that the image matches the receptor size.
Charge Couple Device
• X-rays interact with the scintillation material, and the resulting light signal is transmitted by lenses or fiber optics to the charge-coupled device.
• During this transmission, the lenses reduce the size of the projected visible-light image onto one or more capacitors, which convert the light into an electrical signal.
Structure and Function
• A CCD is made up of a photosensitive receptor
and electronic substrate material in a silicon
chip.
• The chip is made up of poly silicon layer, a
silicon dioxide and a silicon substrate.
Structure and Function
• Each detector element contains three electrodes that hold electrons in an electric potential well.
• These DELs are formed by voltage gates that
at read out are opened and closed like gates
to allow flow of electrons.
Structure and Function
• In collecting the charge on the silicon chips,
there is a need to change the voltage sign on
the electrodes within each DEL
• This is commonly known as the bucket brigade
scheme
• One issue with CCDs is that they can suffer from the blooming effect.
Charge Couple Device
[Figure: CCD detector showing the scintillation layer coupled to the sensor chip]
Complementary Metal Oxide
Semiconductors
• This was developed by NASA
• It is highly efficient and takes up less fill spaces
than charged couple device
• It is a semiconductor that conducts electricity
in some conditions but not others.
– It is a good medium for the control of electrical
current
• Semiconductor materials do not conduct electricity well on their own.
– Impurities (dopants) are added to increase conductivity.
Complementary Metal Oxide
Semiconductors
• Typical semiconductor materials are:
– Antimony, arsenic, boron, carbon, germanium,
silicon, sulfur
– Silicon is the most common semiconductors in
integrated circuits
• Common dopants of silicon
– Gallium arsenide, indium antimonide and oxides
of most metals.
Complementary Metal Oxide
Semiconductors
• When the semiconductors are doped they
become a full-scale conductor with extra
electrons becoming negative charge or positive
charge carriers.
• CMOS image sensors convert light to electrons
that are stored in the capacitor located at each
pixel.
• During readout, the charge is sent across the chip
and read at one corner of the array.
• An ADC turns the pixel value into a digital value.
Non-Scintillation (Direct Capture)
• Layers: semiconductor (amorphous selenium) over a thin-film transistor array
Non Scintillation
• Semiconductor
– Amorphous Selenium
• It is both the capture element and the coupling element
• The amorphous selenium layer is relatively thick (approximately 200 µm) and is sandwiched between charged electrodes.
• An electrical field is applied across the selenium layer
to limit lateral diffusion of electrons as they migrate
toward the thin-film transistor array. By this means,
excellent spatial resolution is maintained
Image Processing
Digital Image Processing
 It means the processing of images using a digital computer
 The data collected from the patient during imaging is first converted into digital data (numerical
representation of the patient) for input into a digital computer (input image), and the result of
computer processing is a digital image (output image).
 All digital radiography imaging modalities utilize digital image processing as a central feature of
their operations.
 After the raw image data are extracted from the digital receptor and converted to digital data,
the image must be computer processed before its display and diagnostic interpretation thus
digital image processing can be defined as subjecting numerical representations of objects to a
series of operations in order to obtain a desired result which ultimately is the digital Image.
Digital Image Sampling
 In both PSP and FPD, after x-rays have been converted into electrical signals, these signals are
available for processing and manipulation.
Preprocessing
 Deals with applying corrections to the raw data
 These techniques are intended to correct the raw data collected from bad detector elements that would create problems in the proper functioning of the detector.
 Flat-field image – image that is immediately obtained initially from the detector.
o It may contain artifact due to the bad detector elements
 It can be corrected through flat-fielding
o This correction process is popularly referred to as system calibration.
o It is an essential requirement to ensure detector performance integrity.
Postprocessing
 Address the appearance of the image displayed on a monitor for viewing and interpretation by a
radiologist.
 The image is converted into the “for presentation” image that has better contrast.
 First step – exposure recognition and segmentation of the preprocessed raw data.
o These two steps find the image data by using image processing algorithms and
histogram analysis to identify the minimal and maximal useful values according to the
histogram-specific shape generated by the anatomy
 Next step - scaling the histogram
o This is based on the exposure falling on the detector to correct under or overexposure.
 Last Step - contrast enhancement
o This is where the adjusted or scaled raw data values are mapped to the “for
presentation” values to display an image with optimum contrast and brightness
Histogram
 A histogram is a graphic representation of a data set.
 A data set includes all the pixel values that represent the image before edge detection and
rescaling.
 The graph represents the number of digital pixel values versus the relative prevalence of the
pixel values in the image.
 The x-axis represents the amount of exposure, and the y-axis represents the incidence of pixels
for each exposure level.
 The stored histogram models have values of interest (VOI)
 Value of Interest (VOI) – determines the range of the histogram data set that should be
included in the displayed image
Histogram Analysis
 It is the process where the computer analyzes the histogram using processing algorithms and
compares it with a pre-established histogram specific to the anatomic part being imaged.
 This is an image processing technique commonly used to identify the edges of the image and
assess the raw data prior to image display.
 All four edges of a collimated field should be recognized; if at least three edges are not identified, all data, including raw exposure or scatter outside the field, may be included in the histogram, resulting in a histogram analysis error.
o A histogram analysis error results when there is a failure to find the collimation edges, which leads to incorrect data collection.
o This is why centering and alignment of the anatomy to the imaging plate are important, since histogram analysis is performed only on the data within the exposure field.
 It ensures that appropriate recorded intensities are located
 Misalignment may cause an error and may lead to incorrect exposure indicators
Exposure Field Recognition
 This is one of the most important pre-processing methods in CR.
 It may also be referred to as exposure data recognition (Fujifilm Medical Systems) and
segmentation (Carestream).
 The purpose of exposure recognition is to identify the appropriate raw data values to be used
for image grayscale rendition and to provide an indication of the average radiation exposure to
the IP CR detector.
o the collimation edges or boundaries are detected, and anatomical structures that should
be displayed in the image are identified using specific algorithms
Automatic rescaling
 It means that images are produced with uniform density and contrast, regardless of the amount
of exposure.
 This is employed to maintain consistent image brightness despite overexposure or
underexposure of the IR.
o The computer rescales the image based on the comparison of the histograms, which is
actually a process of mapping the grayscale to the VOI to present a specific display of
brightness.
 Rescaling errors occur for a variety of reasons and can result in poor-quality digital images.
Look-Up Table
 It provides a method of altering the image to change the display of the digital image in various
ways and a means to alter the brightness and grayscale of the digital image using computer
algorithms.
o They are also sometimes used to reverse or invert image grayscale.
 Lookup tables provide the means to alter the original pixel values to improve the brightness and
contrast of the image
 Digital radiographic imaging systems utilize a wide range of LUTs stored in the system, for the
different types of clinical examinations.
o The technologist should therefore select the appropriate LUT to match the part being
imaged.
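A minimal illustration of how applying a LUT is just a table lookup on the stored pixel values (the identity, inverted, and brightened curves below are invented examples, not clinical LUTs):

```python
import numpy as np

bit_depth = 8
levels = 2 ** bit_depth

# Example LUTs (illustrative shapes only, not a vendor's stored curves):
identity_lut = np.arange(levels, dtype=np.uint8)                        # no change
inverted_lut = identity_lut[::-1]                                        # grayscale inversion
brighter_lut = np.clip(identity_lut.astype(int) + 40, 0, 255).astype(np.uint8)

pixels = np.array([[10, 100], [180, 250]], dtype=np.uint8)

# Applying a LUT is just indexing: each stored value is replaced by its table entry
print(inverted_lut[pixels])   # dark becomes bright and vice versa
print(brighter_lut[pixels])   # overall brightness shifted up
```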
Windowing
 It is intended to change the contrast and brightness of an image.
 Window Width is used to change the contrast of the image
 Window Level or center is used to change the image brightness
o It sets the midpoint of the range of brightness visible in the image.
Brightness
 the brightness level displayed on the computer monitor can be easily altered to visualize the
range of anatomic structures recorded.
 This is accomplished by the windowing function through changing the window level (center)
o Changing the window level on the display monitor allows the image brightness to be
increased or decreased throughout the entire range.
o A high pixel value could represent a volume of tissue that attenuated fewer x-ray
photons and is displayed as a decreased brightness level.
 Moving the window level up to a high pixel value increases visibility of the
darker anatomic regions (e.g., lung fields) by increasing overall brightness on the
display monitor.
o A low pixel value represents a volume of tissue that attenuates more x-ray photons and
is displayed as increased brightness.
 To visualize better an anatomic region represented by a low pixel value, one
would decrease the window level to decrease the brightness on the display
monitor.
 Relationship: A direct relationship exists between window level and image brightness on the
display monitor.
o Increasing the window level increases the image brightness; decreasing the window
level decreases the image brightness.
Contrast
 The number of different shades of gray that can be stored and displayed by a computer system
is termed grayscale.
 Contrast resolution is used to describe the ability of the imaging system to distinguish between
objects that exhibit similar densities.
o The contrast resolution of a pixel is determined by the bit depth or number of bits
which affects the number of shades of gray available for image display
 Window width is a control that adjusts the radiographic contrast
 In digital imaging, an inverse relationship exists between window width and image contrast.
o A narrow (decreased) window width displays higher radiographic contrast, whereas a
wider (increased) window width displays lower radiographic contrast.
Smoothing
 Also known as low pass filtering
 It occurs by averaging each pixel's frequency with surrounding pixel values to remove high-frequency noise.
 A postprocessing technique that suppresses image noise (quantum noise).
o the output image noise is reduced, and the image sharpness is compromised
 Low-pass filtering is useful for viewing small structures such as fine bone tissues.
Edge Enhancement
 Also known as high-pass filtering
 The high-pass filter suppresses the low frequencies, and the result is a much sharper image than
the original.
o Suppressing frequencies (also known as masking) can result in the loss of small details.
o If too much edge enhancement is used, it produces image noise and creates a “halo” effect in the image.
 It occurs when fewer pixels in the neighborhood are included in the signal average.
o The smaller the neighborhood, the greater the enhancement.
 A post processing technique that improves the visibility of small high-contrast structures.
 High-pass filtering is useful for enhancing large structures like organs and soft tissues
Exposure Indicators
 It provides a numeric value indicating the level of radiation exposure to the digital IR.
 In CR, the exposure indicator value represents the exposure level to the imaging plate, and the
values are vendor specific.
 it is a useful tool to address the problem of “exposure creep”
o Exposure creep is the use of a higher exposure than is normally required for a particular examination.
 Exposure indicators have been standardized; the International Electrotechnical Commission (IEC) and the AAPM have published the details of the standardized exposure indicator.
 If the exposure indicator value is within the acceptable range, adjustments can be made for
contrast and brightness with postprocessing functions, and this will not degrade the image.
However, if the exposure is outside of the acceptable range, attempting to adjust the image
data with postprocessing functions would not correct for improper receptor exposure and may
result in noisy or suboptimal images that should not be submitted for interpretation.
Vendor Specific Exposure Indicator
 Fuji and Konica use sensitivity (S) numbers, and the value is inversely related to the exposure to
the plate.
o a low exposure will result in a high S-number, and a high exposure will result in a low S-
number.
o S-number can be thought of as being equivalent to the speed of the IP
 If the exposure is low, the speed is increased hence the S-number is large and
the image will be noisy
 If the exposure is high, the speed will be decreased and the image is very good,
but at the expense of higher dose to the patient.
 Carestream (Kodak) uses exposure index (EI) numbers; the value is directly related to the
exposure to the plate, and the changes are logarithmic expressions
o a high exposure will result in a high EI, and a low exposure will generate a low EI.
 Agfa uses log median (lgM) numbers; the value is directly related to exposure to the plate, and
changes are also logarithmic expressions.
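The inverse (Fuji/Konica S number) and direct logarithmic (Carestream EI) behaviors can be sketched with commonly cited approximate formulas; these closed forms are rough approximations under the vendors' calibration conditions, not exact specifications:

```python
import math

def fuji_s_number(exposure_mR):
    """Commonly cited approximation: S is inversely related to plate exposure
    (about 200 at 1 mR under the vendor's calibration conditions)."""
    return 200.0 / exposure_mR

def carestream_ei(exposure_mR):
    """Commonly cited approximation: EI rises by roughly 300 for each doubling of exposure."""
    return 1000.0 * math.log10(exposure_mR) + 2000.0

for mR in (0.5, 1.0, 2.0):
    print(mR, round(fuji_s_number(mR)), round(carestream_ei(mR)))
# 0.5 mR -> S = 400 (noisier image), EI ~ 1699; 2 mR -> S = 100, EI ~ 2301
```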
Dose Area product
 It is the quantity that reflects not only the dose but also the volume of tissue irradiated.
 It is used to monitor the radiation output and dose to the patient, per volume of tissue
irradiated
 It is measured by a radiolucent measuring device positioned near the x-ray source, below the collimator and in front of the patient.
 It is an indicator of risk and is expressed in cGy·cm²
 There is no standard DAP for digital radiography yet
 DAP and field size - Directly proportional to each other
o Increasing the field size would increase DAP, thus increasing risk
 Risk increases because a larger amount of tissue is exposed.
o Decreasing the field size would decrease DAP, thus decreasing risk
 Risk decreases because a smaller amount of tissue is exposed.
Basic Principles of Digital Radiography
Digital Radiography
• It is any image acquisition process that produces an electronic image that can be viewed and
manipulated on a computer.
• This means both computed radiography and digital radiography
• In radiology, the term was first used in 1970’s for CT.
Digital Image Characteristics
• A digital image is a matrix of picture elements or pixels.
– A matrix is a box of cells with numeric value arranged in rows and columns.
– The numeric value represents the level of brightness or intensity at that location in the
image.
– An image is formed by a matrix of pixels. The size of the matrix is described by the
number of pixels in the rows and columns.
Picture Elements or Pixels
• Each pixel in the matrix is capable of representing a wide range of shades of gray from white to
black.
• It is the basic unit of a digital image
• Pixel size = field of view (FOV) / matrix size
• Measured in x and y direction
• Each pixel contains a specific value, and that value represents a particular part of the anatomy being imaged.
– This value is assigned by the analog-to-digital converter through a process called quantization.
Pixel Density
• It refers to the number of pixels present on a digital image
• Increasing pixel density would increase the resolution of our digital image
Pixel Pitch
• It is the distance between center of one pixel and the center of an adjacent pixel.
• Pixel Pitch Determines the pixel density of the digital image
– Rationale: Decreasing pixel pitch would increase pixel density. Increasing pixel pitch
would decrease pixel density.
• Relationship: Increasing the pixel density and decreasing the pixel pitch increases spatial
resolution. Decreasing pixel density and increasing pixel pitch decreases spatial resolution.
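These pitch and density relationships connect to spatial resolution through the sampling (Nyquist) limit; the sketch below uses illustrative pitch values and the standard limiting-resolution formula, which is not stated explicitly in the notes above:

```python
def max_spatial_resolution_lp_per_mm(pixel_pitch_mm):
    """Nyquist limit: the finest resolvable frequency is 1 / (2 x pixel pitch)."""
    return 1.0 / (2.0 * pixel_pitch_mm)

def pixel_density_per_mm(pixel_pitch_mm):
    """Pixels per millimetre along one axis."""
    return 1.0 / pixel_pitch_mm

for pitch in (0.2, 0.1):               # illustrative pitches in mm
    print(pitch, pixel_density_per_mm(pitch), max_spatial_resolution_lp_per_mm(pitch))
# halving the pitch doubles pixel density and doubles the limiting resolution (2.5 -> 5 lp/mm)
```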
Bit depth
• The number of bits per pixel that determines the shade of the pixel
• Bit depth is expressed as 2 to the power of n
• Most digital radiography systems use an 8, 10, or 12 bit depth
• This is the number of bits that determines the amount of precision in digitizing the analog signal and therefore the number of shades of gray that can be displayed in the image.
• It is determined by the analog-to-digital converter
• Relationship: Increasing the number of shades of gray available to display on a digital image
improves its contrast resolution.
o An image with increased contrast resolution increases the visibility of recorded detail
and the ability to distinguish among small anatomic areas of interest.
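The 2^n relationship works out as follows for the bit depths mentioned above:

```python
for bits in (8, 10, 12):      # bit depths commonly used in digital radiography
    print(bits, 2 ** bits)    # 256, 1024, 4096 shades of gray per pixel
```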
Matrix
• A square arrangement of numbers of columns and rows
• In Digital Imaging, it corresponds to discrete values.
• Each box within the matrix corresponds to
– A specific location in the image
– A specific area of the patient’s tissue.
Field of View
• It describes how much of the patient is imaged in the matrix.
• The larger the FOV, the greater the amount of body part is included in the image
• The matrix size and the FOV are independent.
– Changes in either the FOV or the matrix size will change the pixel size.
Relationship of FOV, Matrix size and Pixel Size
• Field of View and Pixel Size = Directly proportional to each other
– If the FOV increases and the matrix size remains the same, the pixel size increases
• Matrix Size and Pixel Size = Inversely proportional to each other
– If the FOV remains the same and the matrix size increases, the pixel size decreases
• Pixel Size and Spatial Resolution = Inversely proportional to each other
– If Pixel size decreases Spatial resolution increases.
Matrix size and Imaging Plate.
• If the spatial resolution is fixed, the image matrix size is simply proportional to the IP size
– Rationale: A larger IP has a larger matrix to maintain spatial resolution.
• If the matrix size is fixed, changing the size of the IP would affect the spatial resolution of the
digital image. For example decreasing the size of the IP from 14 x 17 to 10 x 12 would increase
spatial resolution.
– Rationale: Spatial resolution is improved because in order to maintain the same matrix
size and number of pixels, the pixels must be smaller in size. The smaller the pixels the
greater our spatial resolution.
• In summary: For a fixed matrix size CR system, using a smaller IP for a given field of view (FOV)
results in improved spatial resolution of the digital image. Increasing the size of the IP for a
given FOV results in decreased spatial resolution.
Detective Quantum Efficiency
• It is a measure of how efficiently a digital detector converts the x-rays collected from the patient into a useful image.
• It is a measure of the efficiency and fidelity with which the detector can perform this task.
– Note: DQE also takes into consideration not only the signal-to-noise ratio (SNR) but also
the system noise
• The DQE for CR is much better than for film-screen systems.
• The DQE for a perfect digital detector is 1 or 100% meaning there was no loss of information.
• Relationship: As spatial frequencies increase, the DQE decreases rapidly
Exposure Index
• The exposure index is a measure of the amount of exposure on the image receptor.
• In screen-film radiography, it is clear from the film whether the image is too light or too dark.
• In digital radiography, the image brightness can be altered, so brightness alone no longer indicates the exposure to the receptor.
Spatial Resolution
• The ability of the imaging system to allow two adjacent structures to be visualized as separate; also described as the sharpness or distinctness of structures in the image.
• It refers to the smallest object that can be detected in an image and is the term typically used in
digital imaging.
• It is described as the ability of an imaging system to accurately display objects in two
dimensions.
• Note: In film/screen imaging the crystal size and thickness of the phosphor layer determine
resolution; in digital imaging pixel size will determine resolution.
Spatial frequency
• It describes the spatial resolution of the image and is expressed in line pairs/mm
• It does not refer to the size of the image but to the line pair
• An imaging system with higher spatial frequency has better spatial resolution
Modulation Transfer Function (MTF)
• It is a complex mathematical function that measures the ability of the detector to transfer its
spatial resolution characteristics to the image.
• Measures the ability of the system to preserve signal contrast.
• An MTF of 1 represents a perfect transfer of spatial and contrast information
• Ideal expression of digital detector image resolution
• Higher MTF values with Higher Spatial Frequencies (such as small objects) will show better
spatial resolution
• Higher MTF values with Low Spatial Frequency (such as larger objects) will show better contrast
resolution.
Contrast Resolution
• It is the ability to distinguish shades of gray from black to white.
• It is used to describe the ability of an imaging receptor to distinguish between objects having
similar subject contrast.
• All digital imaging systems have a greater contrast resolution than screen film radiography.
• The principal descriptor for contrast resolution is called the dynamic range
Dynamic Range
• It refers to the number of gray shades that an imaging system can produce.
• It refers to the ability of the detector to capture accurately the range of photon intensities that
exit the patient.
• The dynamic range of digital imaging system is identified by the bit capacity of each pixel.
• Digital IR has a large exposure latitude (wider dynamic range)
– It means that a small degree of underexposure or overexposure would still result in
acceptable image quality.
– It does not mean a quality image is always created.
• Typical digital systems will respond to exposures as low as 100 µR and as high as 100 mR
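A quick arithmetic check of the exposure range quoted above (the bit count shown is only the minimum needed to span that range, not a system specification):

```python
import math

low_exposure_mR = 0.1       # 100 microroentgen
high_exposure_mR = 100.0    # 100 milliroentgen

ratio = high_exposure_mR / low_exposure_mR
print(ratio)                        # 1000:1 usable exposure range
print(math.ceil(math.log2(ratio)))  # ~10 bits needed just to span that range
```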
Exposure Latitude
• The range of exposure techniques that provides quality image at an appropriate patient
exposure.
• This is one of the biggest advantages of CR/DR.
o It reduces the need for repeat exposures (retakes).
• Overexposure of up to 500% and underexposure of up to 80% is recoverable.
Signal to Noise Ratio
• It is a method of describing the strength of the radiation exposure compared with the amount of
noise apparent in a digital image.
• Increasing the SNR improves the quality of the digital image.
– Note: Increasing the SNR means that the strength of the signal is high compared with
the amount of noise, and therefore image quality is improved.
• Image noise limits contrast resolution of the digital image.
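A small sketch of estimating SNR in a uniform region, with synthetic Poisson counts standing in for quantum noise (illustrative only, not a clinical measurement protocol):

```python
import numpy as np

def snr(region):
    """Mean signal divided by its standard deviation in a uniform region."""
    region = np.asarray(region, dtype=float)
    return region.mean() / region.std()

rng = np.random.default_rng(3)
low_dose = rng.poisson(100, 10_000)     # fewer photons per pixel -> relatively more quantum noise
high_dose = rng.poisson(1000, 10_000)   # more photons -> higher SNR (~sqrt(10) better here)
print(round(snr(low_dose), 1), round(snr(high_dose), 1))
```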
Digital Receptors
Indirect capture and Direct Capture
• Indirect capture converts the incoming x-ray photons first to light then electrical signal to the
final digital image.
• Direct capture converts the incoming x-ray photons directly to electrical signals.
o Note: Both of them use Flat Panel Detectors.
Flat Panel Detectors
• Solid-state IRs that are constructed with layers in order to receive the x-ray photons and convert
them to electrical charges for storage and readout .
• Signal storage, signal readout, and digitizing electronics are integrated into the flat panel device.
• The first layer is composed of the x-ray converter, the second layer houses the thin-film
transistor (TFT) array, and the third layer is a glass substrate.
• TFT array is divided into square detector elements (DEL), each having a capacitor to store
electrical charges and a switching transistor for readout
• Flat panel systems are highly dose-efficient and provide quicker access to images compared with
CR and film-screen.
• The Spatial resolution of flat panel receptors is generally superior to the spatial resolution of CR.
o Note: A system that uses a smaller DEL size has improved spatial resolution.
Indirect: Thin Flat Panel
• It is composed of three layers: Scintillation Layer, Photodiode layer and TFT array
Scintillation Layer
• This is the layer wherein x-rays are converted into a small burst of light energy.
• It is either made from Cesium Iodide or Gadolinium Oxysulfides
– Cesium iodide is the preferred material for the scintillation layer because of its structure. Cesium iodide has a needle-like structure (structured phosphor) that produces a more focused light beam and increases spatial resolution, whereas gadolinium oxysulfide has a more turbid structure (powdered particles) that produces lateral spreading of light, decreasing the spatial resolution of the image. If cesium iodide is used as the scintillation layer, it is called a structured scintillator; if gadolinium oxysulfide is used, it is an unstructured scintillator.
Photodiode Layer
• This layer converts the incoming light photons into electric charge.
• It is usually made up of amorphous silicon (amorphous = “without shape or form”)
• The layer is made by plasma-enhanced chemical vapor deposition, in which amorphous silicon is deposited on a glass substrate; amorphous silicon is not crystalline but a fluid that can be painted onto a supporting surface such as the glass substrate.
TFT layer
• The electrical charges are temporarily stored by capacitors in the TFT array before being
digitized and processed in the computer.
• It’s is comprised of an array or matrix of digital elements (DEL)
• Each detector element has three components: the TFT, the capacitor, and the sensing area.
o Sensing area – receives the data from the layer above it, which captures x-rays that are converted to light (indirect flat-panel detectors) or electrical charges (direct flat-panel detectors).
o Capacitor – temporarily stores the electrical charge.
o TFT/Switch – opens and closes to release the analog signal leaving each DEL.
• Each DEL is comprised of a capture element or pixel detector which is the active element within
each DEL
Fill Factor
• This is an important feature of the pixel in the flat-panel TFT digital detector active matrix array.
• The ability of each DEL to produce high spatial resolution is determined by the percentage of active pixel area within each DEL.
• It is expressed as Fill factor= Sensing area of the pixel /Area of the pixel
• The larger the sensing area of each detector element, the more radiation/light can be detected
thus the greater the amount of signal being generated.
• The fill factor affects both the spatial resolution and contrast resolution
– Rationale: Detectors with high fill factors (large sensing areas) will provide better spatial
and contrast resolution than detectors with low fill factors (small sensing area).
Charged Couple Device
• It was developed in the 1970’s as a highly sensitive device for military use.
• It is the oldest indirect conversion radiography system to acquire a digital image.
• It is a silicon-based semiconductor
• CCD has three principal advantages.
o Sensitivity - has a high sensitivity to respond to very low levels of visible light
o Dynamic Range - detects wide range of light intensity from very dim light to intense light
o Size - 1 to 2 cm with pixel sizes of 100um x 100 um
• The image has to be matched to the size of the CCD, which implies that the image must be reduced in size using fiber optics or a lens system.
• X-rays interact with the scintillation material, and the resulting light signal is transmitted by lenses or fiber optics to the charge-coupled device.
• During this transmission, the lenses reduce the size of the projected visible-light image onto one or more capacitors, which convert the light into an electrical signal.
Structure and Function
• A CCD is made up of a photosensitive receptor and electronic substrate material in a silicon chip.
• The chip is made up of poly silicon layer, a silicon dioxide and a silicon substrate.
– Polysilicon is a layer coated with a photosensitive material and contains the electron gate.
– Silicon dioxide acts as an insulator
– Silicon Substrate contains the charge storage area and works like a capacitor.
• Detector Elements contains three electrodes that holds electrons in an electric potential well.
• These DELs are formed by voltage gates that at read out are opened and closed like gates to
allow flow of electrons.
• In collecting the charge on the silicon chips, there is a need to change the voltage sign on the
electrodes within each DEL
Bucket brigade scheme
• It refers to the systematic collection of charges on each chip.
• The charge from each pixel in a row is transferred to the next row and subsequently down all
the columns to the final readout row.
• One issue with CCDs is that they can suffer from the blooming effect.
– The blooming effect is the overfilling of detector elements.
• This can be addressed by the construction of overflow drains in the detector elements.
Complementary Metal Oxide Semiconductors
• This was developed by NASA
• It is highly efficient and takes up less fill spaces than charged couple device
• It is a semiconductor that conducts electricity in some conditions but not others.
– It is a good medium for the control of electrical current
• Semiconductor materials do not conduct electricity well on their own .
– Impurities and dopants , added to increase conductivity.
• Typical semiconductor materials are:
– Antimony, arsenic, boron, carbon, germanium, silicon, sulfur
– Silicon is the most common semiconductors in integrated circuits
• Common dopants of silicon
– Gallium arsenide, indium antimonide and oxides of most metals.
• When the semiconductors are doped they become a full-scale conductor with extra electrons
becoming negative charge or positive charge carriers.
• CMOS image sensors convert light to electrons that are stored in the capacitor located at each
pixel.
• In a CMOS sensor, each detector element is isolated from its neighbors and is directly connected to its own transistor.
• During readout, the charge is sent across the chip and read at one corner of the array.
• An ADC turns the pixel value into a digital value.
Direct Digital Capture/ Non Scintillation
• Semiconductor
– Amorphous Selenium
• It is both the capture element and the coupling element
• The amorphous selenium layer is relatively thick (approximately 200 µm) and is sandwiched between charged electrodes.
– It is thick to compensate for the low atomic number of selenium, which is 34.
• An electrical field is applied across the selenium layer to limit lateral diffusion of
electrons as they migrate toward the thin-film transistor array. By this means,
excellent spatial resolution is maintained
Principles of Computed Radiography
Computed Radiography Terms:
● IP – Imaging Plate
● PD – Photodiode
● PMT – Photomultiplier tube
● PSL – Photostimulable Luminescence
● PSP – Photostimulable phosphor
● SP – Storage Phosphor
● SPS – Storage Phosphor screen
Computed Radiography
● a “cassette-based” system that uses a special solid-state detector plate instead of a film
inside a cassette.
● The use of CR requires the CR cassettes and phosphor plates, the CR readers and
technologist quality control workstation, and a means to view the images, either a
printer or a viewing station.
● The image plate can be reused and erased thousands of times.
Computed Radiography System
Three major components
● Phosphor Imaging Plates or also known as Photostimulable Phosphor Plate or Screens/ Imaging
Plates in general
o To acquire x-ray image projections from your patient
● PIP Reader (scanner)/ CR reader or Phosphor imaging plate reader or also known as Computed
Radiography Reader
o To extract the electronic latent image
PIP – read by a CR reader; the CR reader extracts the electronic latent image from the imaging plate. It also removes the need for a darkroom (a daylight system).
● Workstation
o For pre and post processing of the image.
o You can adjust anything you want to improve (e.g., density, contrast, etc.)
Cassettes
• 14 x 17 (most commonly used)
• 10 x 12
• 8 x 10
• 11 x 14
• 14 x 14
Historical Perspective
● 1973 – George Luckey, a research scientist, filed a patent application titled Apparatus and Method for Producing Images Corresponding to Patterns of High Energy Radiation.
● 1975 – George Luckey's patent (U.S. Patent 3,859,527) was approved, and Kodak patented the first scanned storage phosphor system that gave birth to modern computed radiography.
● 1980s – many companies filed patent applications building on George Luckey's invention.
● 1983 – Fuji Medical Systems was the first to complete and commercialize a CR system
● The first system consisted of a phosphor storage plate, a reader, and a laser printer to print the image onto film.
Structure and Mechanism
● The CR cassette contains a solid-state plate called a photostimulable storage phosphor imaging plate (PSP or IP) that responds to radiation by trapping energy in the locations where the x-rays strike.
Patent – a license to be able to commercially reproduce a specific person's product.
Structure and Mechanism
Photostimulable Luminescence
● It refers to the emission of light after stimulation of a relevant light source or when exposed to a
different light source
● A characteristic of the imaging plate is that it releases energy stored within the phosphor when stimulated by another light source, producing a luminescent light signal.
● When the phosphors inside the imaging plate are stimulated, they produce light again.
● Examples of phosphor’s that exhibit this property – Barium Fluorohalide doped with europium
(BaFBr:Eu or BaFl:Eu)
o Note: Europium is present in only small amount since it allows the electrons to be
trapped more effectively
Imaging Plate
● It is housed in a rugged cassette that appears similar to a screen-film cassette.
● It is handled in the same manner as a screen-film cassette.
● Like traditional x-ray film, it also has several layers.
Imaging plate layers
● Overcoat/ Protective layer: This is a very thin, tough, clear plastic that protects the phosphor
layer from handling trauma. Made up of Fluorinated Polymer material to protect the phosphor
layer.
● Active/ Phosphor layer: This is the active layer. This is the layer of photostimulable phosphor
that traps electrons during exposure. It is typically made of barium fluorohalide phosphors. It
may contain light absorbing dye
o The dye differentially absorbs the stimulating light to prevent as much spread as
possible.
Thickness: 100–250 µm thick
Example: barium fluorohalide and barium fluorobromide; these are activated by europium, which helps efficiently trap the electrons inside the phosphor layer.
Barium fluorobromide grains: 3–10 µm in size, randomly positioned and held together by a binder
Imaging plate structure – turbid (mud-like) structure
● Reflective layer – This is a layer that sends light in a forward direction when released in the
cassette reader. This layer may be black to reduce the spread of stimulating light and the escape
of emitted light.
● Conductive layer: This layer grounds the plate to reduce static electricity problems and to
absorb light to increase sharpness.
● Support layer: This is a semirigid material that provides the imaging sheet with strength and is a
base for coating the other layers. It is made up of Poly Ethylene Teraphtalate
● Backing layer: This is a soft polymer that protects the back of the cassette. The radiation dose
from a CR. Made up of lead foil to protect the IP from backscatter radiation
Imaging Plate
● Cassettes carry a barcode label on the cassette itself or on the imaging plate, visible through a window in the cassette.
● The label enables the technologist to match the image information with the patient's identifying barcode.
CR Image Processing
1. When the photostimulable phosphor (PSP) screen is exposed to x-rays, energy is absorbed by the phosphor crystals.
2. After the exposure the IP is inserted into a CR reader
3. The IP is processed by a scanning system or reader which
1. Extracts the PSP screen from the cassette
2. Moves the screen across a high intensity scanning laser beam
3. Blue-violet light is emitted via photostimulable luminescence (PSL)
4. The light energy is read by the photomultiplier tube, which converts the light into an electric signal.
4. The electronic signal is converted into a digital format for manipulation, enhancement, viewing and
printing if desired.
5. The PSP screen is erased by a bright white light inside the reader, reloaded in the cassette and is
ready for the next exposure.
● The white light dumps all the remaining excess energy traps allowing the plates to be reused
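To make the read-out sequence above concrete, the following minimal sketch walks through the same chain — stimulate, collect the emitted light, amplify, digitize, erase — assuming an idealized linear response. The function name, array shape, gain, and bit depth are illustrative assumptions only, not values from any particular CR reader.

```python
# Conceptual sketch of the CR read-out chain (not vendor code); assumes a linear response.
import numpy as np

def read_imaging_plate(trapped_energy, gain=1000.0, bit_depth=12):
    """Convert a 2-D map of trapped energy (the latent image) into digital values."""
    # Laser stimulation releases the stored energy as blue-violet light (PSL);
    # here the emitted light is assumed proportional to the trapped energy.
    emitted_light = trapped_energy
    # The photomultiplier tube converts the light into an electrical signal.
    analog_signal = gain * emitted_light
    # The ADC quantizes the analog signal into discrete digital values.
    max_value = 2 ** bit_depth - 1
    digital = np.clip(np.round(analog_signal / analog_signal.max() * max_value), 0, max_value)
    # Erasure: flooding with bright white light empties the remaining traps.
    trapped_energy[:] = 0.0
    return digital.astype(np.uint16)

# Example: a 4 x 4 latent image with arbitrary relative exposure values
latent = np.random.rand(4, 4)
print(read_imaging_plate(latent))
```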
Computed Radiography Acquisition
Latent Image Formation
1. The incident x-ray beam interacts with the photostimulable phosphors that are in the active
layer of the imaging plate.
2. The x-ray energy is absorbed by the phosphor, and the absorbed energy excites and ionizes the europium atoms.
3. The electrons are raised to a higher energy state and are trapped in a so-called phosphor center (F-center) in a metastable state.
Note:
● The number of trapped electrons per unit area is proportional to intensity of X-rays and these
trapped electrons constitute the latent image.
● Due to thermal motion, the trapped electrons are slowly liberated from the traps; at room temperature the image remains readable for up to about 8 hours after exposure, after which it begins to fade.
Visible Image Formation
1. Special CR Cassette is inserted to the CR Reader
2. The CR Reader automatically extracts the imaging plate from the special cassette
3. A finely focused laser beam (red light of about 633 nm, or 670–690 nm for solid-state laser diodes) with a beam diameter of 50 to 100 µm is directed at the PSP.
▪ The smaller the diameter of the laser beam, the greater the spatial resolution of the system.
4. The energy of the laser light is absorbed at the phosphor centers and the trapped electrons are released.
5. The released electrons return to the europium atoms, which release blue-violet light (photostimulable luminescence).
6. The emitted light is collected by the photodetector, and the analog-to-digital converter (ADC) converts the resulting signal into a digital signal.
7. The digital signal is reconstructed by the computer system with special software into a grayscale image that can be viewed on the monitor or printed.
8. The imaging plate is then scanned with a high-intensity light to remove any residual energy (erasure).
The CR Reader
● The CR reader is composed of mechanical, optical and computer modules.
Mechanical Features
● When the CR cassette is inserted into the CR reader, the IP is removed and is fitted to a
precision drive mechanism.
There are two scan directions
● Fast Scan
o The movement of the laser across the imaging plate
o Aka “scan”
● Slow Scan
o The movement of the imaging plate through the reader
o Aka “translation or sub-scan direction”
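The two scan directions can be pictured as a pair of nested loops: the inner loop is the fast scan of the laser along a line, and the outer loop is the slow-scan translation that advances the plate line by line. The sketch below is purely illustrative; the array size and variable names are assumptions.

```python
# Illustrative raster read-out (not reader firmware). Row index = slow-scan (translation)
# direction, column index = fast-scan (laser) direction.
import numpy as np

plate = np.random.rand(5, 8)          # 5 scan lines, 8 samples per line (toy numbers)
signal = []

for row in range(plate.shape[0]):     # slow scan: plate is translated one line at a time
    for col in range(plate.shape[1]): # fast scan: laser sweeps across the current line
        signal.append(plate[row, col])

print(len(signal))                    # 40 samples read out in raster order
```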
Optical Features
● Components of the optical subsystem include the laser, beam shaping optics, light collecting
optics, optical filters and a photo detector.
● The laser is used as the source of stimulating light that spreads as it travels to the rotating or
oscillating reflector.
o Laser stands for Light amplification by stimulated emission of radiation
o a wavelength of 633 nm (or 670 to 690 nm for solid state laser diodes)
● The laser beam is focused onto the reflector by a lens system that keeps the beam diameter at about 100 µm.
o Smaller laser beams are critical to produce images with high spatial resolution.
● Special beam-shaping optics keep the beam constant in size, shape, speed, and intensity.
o Without beam-shaping optics, the beam would become more angled and elliptical toward the plate edges, resulting in varying spatial resolution and inconsistent output signals.
● The laser beam is deflected across the IP.
o The repeated passes of the laser across the imaging plate (fast scan) must be coordinated with the translation (slow-scan) movement of the plate, or the spacing of the scan lines will be affected.
● The reader scans the plate with red light in a zigzag or raster pattern.
● The emitted light from the IP is channeled into a funnel-shaped fiber-optic collection assembly and directed to the photodetector (a PMT or photodiode), which sends the signal to the ADC.
Computer Control
● The output of the photodetector/photomultiplier tube is a time-varying analog signal that is transmitted to a computer system with multiple functions.
● The analog signal is processed for amplitude scale and compression.
● It shapes the final signal before the image is formed.
● The analog signal is digitized in consideration of proper sampling and quantization.
● The image buffer usually is a hard disc. This is the place where a completed image can be stored
temporarily until it is transferred to a workstation for interpretation or to an archival computer.
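A brief sketch of the sampling and quantization steps mentioned above may help. The sampling step and bit depth used here are arbitrary illustrative choices, not the parameters of any specific reader.

```python
# Toy sketch of sampling and quantization of the time-varying analog signal.
import numpy as np

def digitize(analog, sampling_step=2, bit_depth=10):
    """Sample an analog signal at a fixed step and quantize it to 2**bit_depth levels."""
    sampled = analog[::sampling_step]                         # sampling: keep every Nth value
    levels = 2 ** bit_depth - 1
    quantized = np.round(sampled / sampled.max() * levels)    # quantization to discrete levels
    return quantized.astype(np.uint16)

analog_signal = np.abs(np.sin(np.linspace(0, 3.1416, 100)))   # stand-in time-varying signal
print(digitize(analog_signal)[:10])
```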
Direct Digital Radiography
● Most digital radiography (cassette-less) systems use an x-ray absorber material coupled to a flat
panel detector or a charged coupled device (CCD) to form the image.
● DR uses an array of small solid-state detectors to convert incident x-ray photons directly into the data that forms the digital image.
● With a DR system, no handling of a cassette is required, as this is a "cassette-less" system.
Direct Digital Radiography
● DR can be divided into two categories: Indirect capture and direct capture.
● Indirect capture digital radiography devices absorb x-rays and convert them into light.
● Direct capture converts the incident x-ray energy directly into an electrical signal.
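The distinction between the two capture categories can be summarized as two conversion chains. The sketch below is conceptual only; the conversion factors and function names are placeholders, not detector specifications.

```python
# Conceptual comparison of the two DR capture chains described above.

def indirect_capture(xray_photons: float) -> float:
    """Indirect: x-rays -> light (scintillator) -> electrical signal (photodiode)."""
    light = 0.6 * xray_photons        # scintillator converts x-rays to light (assumed factor)
    charge = 0.8 * light              # photodiode converts light to charge (assumed factor)
    return charge

def direct_capture(xray_photons: float) -> float:
    """Direct: x-rays -> electrical signal (e.g., amorphous selenium), no light stage."""
    return 0.5 * xray_photons         # assumed conversion factor

print(indirect_capture(1000.0), direct_capture(1000.0))
```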
Additional Notes
● Both the conventional and CR systems use a cassette; the difference is what is inside it: a conventional cassette holds radiographic film, while a CR cassette holds a reusable imaging plate.
● Radiographic film is thinner, while the imaging plate is more rigid in structure.
● Reflective layer: light emission from the phosphor is isotropic (in all directions), so this layer redirects the emitted light forward.
● Conductive layer: also absorbs weak light.
● Backing layer: described in these notes as aluminum (elsewhere as lead foil) to shield the IP from backscatter radiation.
● The phosphor center is also called the F-center.
● The photodetector may be a photomultiplier tube or a photodiode.
● The amount of laser light that spreads increases as the thickness of the phosphor layer increases.
● Newer systems use solid-state laser diodes (670–690 nm wavelength) instead of the 633 nm gas laser.
● The back-and-forth scanning of the laser must be coordinated with the translation of the plate (slow-scan direction) to avoid artifacts.
● The stimulating laser light has an energy of about 2 eV, while the emitted light is about 3 eV.
● Sampling frequency determines how often the analog signal is sampled into its discrete, digitized form; increasing the sampling frequency increases the pixel density of the digital image and thus improves spatial resolution (see the worked example below).
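As a worked example of the last note, assume a sampling frequency of 10 pixels per mm (an illustrative figure). The pixel pitch is then the reciprocal of the sampling frequency, and the highest resolvable spatial frequency (the Nyquist limit) is one line pair per two pixels.

```python
# Worked example: sampling frequency -> pixel pitch -> limiting spatial resolution.
sampling_frequency = 10.0                 # pixels per mm (assumed)
pixel_pitch = 1.0 / sampling_frequency    # mm between pixel centres -> 0.1 mm
nyquist_limit = 1.0 / (2.0 * pixel_pitch) # maximum resolvable spatial frequency

print(pixel_pitch, nyquist_limit)         # 0.1 mm, 5.0 line pairs per mm
```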

Film-Screen Radiography vs. Computed Radiography
● Exposure medium: film (film-screen) vs. imaging plate (CR)
● Processing: darkroom conditions and chemistry required vs. no darkroom or chemistry required
● Processing time: about 8 minutes vs. 1–3 minutes
● Evaluation: film viewer vs. computer with viewing/analysis software
● Archiving: film archive room (humidity- and temperature-controlled) vs. PC, cloud, or remote network server
● Availability: unique master copy vs. unlimited copies with the possibility of access from any location
Computed Radiography vs. Direct Digital Radiography
● Cost: inexpensive to moderate (CR) vs. expensive (DR)
● Size: portable but generally practice-based (CR) vs. portable for field use or static for practice use (DR)
● Processing: 1–3 minutes (CR) vs. real time (DR)
● Plate: phosphor screen in a cassette (CR) vs. amorphous silicon detector connected to the computer (DR)
● Evaluation: computer with viewing/analysis software (both)
● Archiving: to PC archive, external hard drive, or DVD (both)
Digital Image Processing
● It means the processing of images using a digital computer.
● The data collected from the patient during imaging are first converted into digital data (a numerical representation of the patient) for input into a digital computer (input image), and the result of computer processing is a digital image (output image).
● All digital radiography imaging modalities utilize digital image processing as a central feature of their operations.
● After the raw image data are extracted from the digital receptor and converted to digital data, the image must be computer processed before its display and diagnostic interpretation. Digital image processing can therefore be defined as subjecting numerical representations of objects to a series of operations in order to obtain a desired result, which ultimately is the digital image.
Digital Image Sampling
● In both PSP and FPD systems, after x-rays have been converted into electrical signals, these signals are available for processing and manipulation.
Preprocessing
● Deals with applying corrections to the raw data.
● These techniques are intended to correct the raw data collected from bad detector elements that would create problems in the proper functioning of the detector.
● Flat-field image – the image initially obtained from the detector.
o It may contain artifacts due to bad detector elements.
● These artifacts can be corrected through flat-fielding.
o This correction process is popularly referred to as system calibration.
o It is an essential requirement to ensure detector performance integrity.
Postprocessing
● Addresses the appearance of the image displayed on a monitor for viewing and interpretation by a radiologist.
● The image is converted into the "for presentation" image that has better contrast.
● First step – exposure recognition and segmentation of the preprocessed raw data.
o These two steps find the image data by using image-processing algorithms and histogram analysis to identify the minimal and maximal useful values according to the histogram-specific shape generated by the anatomy.
● Next step – scaling the histogram.
o This is based on the exposure falling on the detector, to correct under- or overexposure.
● Last step – contrast enhancement.
o This is where the adjusted or scaled raw data values are mapped to the "for presentation" values to display an image with optimum contrast and brightness.
Histogram
● A histogram is a graphic representation of a data set.
● The data set includes all the pixel values that represent the image before edge detection and rescaling.
● The graph represents the number of digital pixel values versus the relative prevalence of the pixel values in the image: the x-axis represents the amount of exposure, and the y-axis represents the incidence of pixels for each exposure level.
● The stored histogram models have values of interest (VOI).
● Value of interest (VOI) – determines the range of the histogram data set that should be included in the displayed image.
Histogram Analysis
● It is the process by which the computer analyzes the histogram using processing algorithms and compares it with a pre-established histogram specific to the anatomic part being imaged.
● In CR imaging, the entire imaging plate is scanned to extract the image from the photostimulable phosphor.
● This is an image-processing technique commonly used to identify the edges of the image and assess the raw data prior to image display.
● All four edges of a collimated field should be recognized; if at least three edges are not identified, all data, including raw exposure or scatter outside the field, may be included in the histogram, resulting in a histogram analysis error.
o A histogram analysis error results when the collimation edges are not found, leading to incorrect data collection.
o This is why centering and alignment of the anatomy to the imaging plate are important, since histogram analysis is performed only on the data within the exposure field.
▪ It ensures that the appropriate recorded intensities are located.
▪ Misalignment may cause an error and may lead to incorrect exposure indicators.
Exposure Field Recognition
● This is one of the most important preprocessing methods in CR.
● It may also be referred to as exposure data recognition (Fujifilm Medical Systems) or segmentation (Carestream).
● The purpose of exposure recognition is to identify the appropriate raw data values to be used for image grayscale rendition and to provide an indication of the average radiation exposure to the IP (CR detector).
o The collimation edges or boundaries are detected, and the anatomical structures that should be displayed in the image are identified using specific algorithms.
Automatic Rescaling
● It means that images are produced with uniform density and contrast, regardless of the amount of exposure.
● It is employed to maintain consistent image brightness despite overexposure or underexposure of the IR.
o The computer rescales the image based on the comparison of the histograms, which is actually a process of mapping the grayscale to the VOI to present a specific display of brightness.
● Rescaling errors occur for a variety of reasons and can result in poor-quality digital images.
Look-Up Table (LUT)
● It provides a method of altering the image to change the display of the digital image in various ways, and a means to alter the brightness and grayscale of the digital image using computer algorithms.
o LUTs are also sometimes used to reverse or invert the image grayscale.
● Lookup tables provide the means to alter the original pixel values to improve the brightness and contrast of the image.
● Digital radiographic imaging systems utilize a wide range of LUTs stored in the system for the different types of clinical examinations.
o The technologist should therefore select the appropriate LUT to match the part being imaged.
Image Enhancement Parameters
● Gradient processing – brightness and contrast (adjusted through windowing).
● Frequency processing – smoothing and edge enhancement.
Windowing
● It is intended to change the contrast and brightness of an image (a short code sketch follows at the end of this section).
● Window width (WW) is used to change the contrast of the image.
● Window level (WL), or center, is used to change the image brightness.
o It sets the midpoint of the range of brightness visible in the image.
Brightness
● The brightness level displayed on the computer monitor can be easily altered to visualize the range of anatomic structures recorded.
● This is accomplished by the windowing function, through changing the window level (center).
o Changing the window level on the display monitor allows the image brightness to be increased or decreased throughout the entire range.
o A high pixel value may represent a volume of tissue that attenuated fewer x-ray photons and is displayed at a decreased brightness level.
▪ Moving the window level up to a high pixel value increases the visibility of the darker anatomic regions (e.g., lung fields) by increasing the overall brightness on the display monitor.
o A low pixel value represents a volume of tissue that attenuates more x-ray photons and is displayed at an increased brightness.
▪ To better visualize an anatomic region represented by a low pixel value, one would decrease the window level to decrease the brightness on the display monitor.
● Relationship: a direct relationship exists between window level and image brightness on the display monitor — increasing the window level increases the image brightness, and decreasing the window level decreases it.
Contrast
● The number of different shades of gray that can be stored and displayed by a computer system is termed grayscale.
● Contrast resolution describes the ability of the imaging system to distinguish between objects that exhibit similar densities.
o The contrast resolution of a pixel is determined by the bit depth (number of bits), which affects the number of shades of gray available for image display.
● Window width is a control that adjusts the radiographic contrast.
● In digital imaging, an inverse relationship exists between window width and image contrast.
o A narrow (decreased) window width displays higher radiographic contrast, whereas a wider (increased) window width displays lower radiographic contrast.
Smoothing
● Also known as low-pass filtering.
● It occurs by averaging each pixel's frequency with surrounding pixel values to remove high-frequency noise.
● A postprocessing technique that suppresses image noise (quantum noise).
o The output image noise is reduced, but the image sharpness is compromised.
● Low-pass filtering is useful for viewing small structures such as fine bone tissues.
Edge Enhancement
● Also known as high-pass filtering.
● The high-pass filter suppresses the low frequencies, and the result is a much sharper image than the original.
o Suppressing frequencies is also known as masking and can result in the loss of small details.
o If too much edge enhancement is used, it produces image noise and creates a "halo" effect in the image.
● It occurs when fewer pixels in the neighborhood are included in the signal average.
o The smaller the neighborhood, the greater the enhancement.
● A postprocessing technique that improves the visibility of small high-contrast structures.
● High-pass filtering is useful for enhancing large structures like organs and soft tissues.
Exposure Indicators
● An exposure indicator provides a numeric value indicating the level of radiation exposure to the digital IR.
● In CR, the exposure indicator value represents the exposure level to the imaging plate, and the values are vendor specific.
● It is a useful tool to address the problem of "exposure creep."
o Exposure creep is the use of a higher exposure than is normally required for a particular examination.
● Exposure indicators have been standardized; the International Electrotechnical Commission (IEC) and the AAPM provide the details of the standardized exposure indicator.
● If the exposure indicator value is within the acceptable range, adjustments can be made for contrast and brightness with postprocessing functions without degrading the image. However, if the exposure is outside the acceptable range, attempting to adjust the image data with postprocessing functions will not correct for improper receptor exposure and may result in noisy or suboptimal images that should not be submitted for interpretation.
Vendor-Specific Exposure Indicators
● Fuji and Konica use sensitivity (S) numbers; the value is inversely related to the exposure to the plate.
o A low exposure results in a high S-number, and a high exposure results in a low S-number.
o The S-number can be thought of as equivalent to the speed of the IP.
▪ If the exposure is low, the speed is increased, hence the S-number is large and the image will be noisy.
▪ If the exposure is high, the speed is decreased and the image is very good, but at the expense of a higher dose to the patient.
● Carestream (Kodak) uses exposure index (EI) numbers; the value is directly related to the exposure to the plate, and the changes are logarithmic expressions.
o A high exposure results in a high EI, and a low exposure results in a low EI.
● Agfa uses log median (lgM) numbers; the value is directly related to the exposure to the plate, and the changes are also logarithmic expressions.
Dose Area Product (DAP)
● It is a quantity that reflects not only the dose but also the volume of tissue irradiated.
● It is used to monitor the radiation output and dose to the patient per volume of tissue irradiated.
● It is measured by a radiolucent measuring device positioned near the x-ray source, below the collimator and in front of the patient.
● It is an indicator of risk and is expressed in cGy·cm².
● There is no standard DAP for digital radiography yet.
● DAP and field size are directly proportional to each other.
o Increasing the field size increases the DAP and thus the risk, because a larger amount of tissue is exposed.
o Decreasing the field size decreases the DAP and thus the risk, because a smaller amount of tissue is exposed.
  • 135.
    Basic Principles ofDigital Radiography Digital Radiography • It is any image acquisition process that produces an electronic image that can be viewed and manipulated on a computer. • This means both computed radiography and digital radiography • In radiology, the term was first used in 1970’s for CT. Digital Image Characteristics • A digital image is a matrix of picture elements or pixels. – A matrix is a box of cells with numeric value arranged in rows and columns. – The numeric value represents the level of brightness or intensity at that location in the image. – An image is formed by a matrix of pixels. The size of the matrix is described by the number of pixels in the rows and columns. Picture Elements or Pixels • Each pixel in the matrix is capable of representing a wide range of shades of gray from white to black. • It is the basic unit of a digital image • Pixel Size = Field of view ( ) FOV /matrix size • Measured in x and y direction • Each pixel contains a specific value, the value contained in each pixel represents the part of a certain anatomy of the part being imaged. – This value is because of the process of the Analog to Digital Converter which is called (quantization) Pixel Density • It refers to the number of pixels present on a digital image • Increasing pixel density would increase the resolution of our digital image Pixel Pitch • It is the distance between center of one pixel and the center of an adjacent pixel. • Pixel Pitch Determines the pixel density of the digital image – Rationale: Decreasing pixel pitch would increase pixel density. Increasing pixel pitch would decrease pixel density. • Relationship: Increasing the pixel density and decreasing the pixel pitch increases spatial resolution. Decreasing pixel density and increasing pixel pitch decreases spatial resolution.
  • 136.
    Bit depth • Thenumber of bits per pixel that determines the shade of the pixel • Bit depth is expressed as 2 to the power of n • Most digital radiography systems use an 8, 10, or 12 bit depth • This is the number of bits that determines the amount of precision in digitizing the analog signal and Therefore the number of shades of gray that can be displayed in the image. • It is determined by the analog to-digital converter • Relationship: Increasing the number of shades of gray available to display on a digital image improves its contrast resolution. o An image with increased contrast resolution increases the visibility of recorded detail and the ability to distinguish among small anatomic areas of interest. Matrix • A square arrangement of numbers of columns and rows • In Digital Imaging, it corresponds to discrete values. • Each box within the matrix corresponds to – A specific location in the image – A specific area of the patient’s tissue. Field of View • It describes how much of the patient is imaged in the matrix. • The larger the FOV, the greater the amount of body part is included in the image • The matrix size and the FOV are independent. – Changes in either the FOV or the matrix size will change the pixel size. Relationship of FOV, Matrix size and Pixel Size • Field of View and Pixel Size = Directly proportional to each other
  • 137.
    – If theFOV increases and the matrix size remains the same, the pixel size increases • Matrix Size and Pixel Size = Inversely proportional to each other – If the FOV remains the same and the matrix size changes, the pixel size changes • Pixel Size and Spatial Resolution = Inversely proportional to each other – If Pixel size decreases Spatial resolution increases. Matrix size and Imaging Plate. • If the spatial resolution is fixed, the image matrix size is simply proportional to the IP size – Rationale: A larger IP has a larger matrix to maintain spatial resolution. • If the matrix size is fixed, changing the size of the IP would affect the spatial resolution of the digital image. For example decreasing the size of the IP from 14 x 17 to 10 x 12 would increase spatial resolution. – Rationale: Spatial resolution is improved because in order to maintain the same matrix size and number of pixels, the pixels must be smaller in size. The smaller the pixels the greater our spatial resolution. • In summary: For a fixed matrix size CR system, using a smaller IP for a given field of view (FOV) results in improved spatial resolution of the digital image. Increasing the size of the IP for a given FOV results in decreased spatial resolution. Detective Quantum Efficiency • It is a measure of how efficient a digital detector can convert the X-rays collected from the patient into a useful image. • It is a measure of the efficiency and fidelity with which the detector can perform this task. – Note: DQE also takes into consideration not only the signal-to-noise ratio (SNR) but also the system noise • The DQE for CR is much better than for film-screen systems. • The DQE for a perfect digital detector is 1 or 100% meaning there was no loss of information. • Relationship: As spatial frequencies increase, the DQE decreases rapidly Exposure Index • The exposure index is a measure of the amount of exposure on the image receptor. • In screen film radiography it is clear if the image receptor is too bright or too dark. • In digital radiography the image brightness can be altered. Spatial Resolution • The ability of the imaging system to allow two adjacent structures to be visualized as being separate or distinctness of an image to the image.
  • 138.
    • It refersto the smallest object that can be detected in an image and is the term typically used in digital imaging. • It is described as the ability of an imaging system to accurately display objects in two dimensions. • Note: In film/screen imaging the crystal size and thickness of the phosphor layer determine resolution; in digital imaging pixel size will determine resolution. Spatial frequency • It describes the spatial resolution of the image and is expressed in line pairs/mm • It does not refer to the size of the image but to the line pair • An imaging system with higher spatial frequency has better spatial resolution Modular Transfer function • It is a complex mathematical function that measures the ability of the detector to transfer its spatial resolution characteristics to the image. • Measures the ability of the system to preserve signal contrast. • An MTF of 1 represents a perfect transfer of spatial and contrast information • Ideal expression of digital detector image resolution • Higher MTF values with Higher Spatial Frequencies (such as small objects) will show better spatial resolution • Higher MTF values with Low Spatial Frequency (such as larger objects) will show better contrast resolution. Contrast Resolution • It is the ability to distinguish shades of gray from black to white. • It is used to describe the ability of an imaging receptor to distinguish between objects having similar subject contrast. • All digital imaging systems have a greater contrast resolution than screen film radiography. • The principal descriptor for contrast resolution is called the dynamic range Dynamic Range • It refers the number of gray shades that an imaging system can produce. • It refers to the ability of the detector to capture accurately the range of photon intensities that exit the patient. • The dynamic range of digital imaging system is identified by the bit capacity of each pixel. • Digital IR has a large exposure latitude (wider dynamic range) – It means that a small degree of underexposure or overexposure would still result in acceptable image quality. – It does not mean a quality image is always created. • Typical digital systems will respond to exposures as low as 100 µR and as high as 100mR
  • 139.
    Exposure Latitude • Therange of exposure techniques that provides quality image at an appropriate patient exposure. • This is one of the biggest advantage of CR/DR because of its exposure latitude. o It eliminates the possibility of retakes. • Over exposure of 500 % and 80% of under exposure is recoverable. Signal to Noise Ratio • It is a method of describing the strength of the radiation exposure compared with the amount of noise apparent in a digital image. • Increasing the SNR improves the quality of the digital image. – Note: Increasing the SNR means that the strength of the signal is high compared with the amount of noise, and therefore image quality is improved. • Image noise limits contrast resolution of the digital image. Digital Receptors Indirect capture and Direct Capture • Indirect capture converts the incoming x-ray photons first to light then electrical signal to the final digital image. • Direct capture converts the incoming x-ray photons directly to electrical signals. o Note: Both of them use Flat Panel Detectors. Flat Panel Detectors • Solid-state IRs that are constructed with layers in order to receive the x-ray photons and convert them to electrical charges for storage and readout . • Signal storage, signal readout, and digitizing electronics are integrated into the flat panel device. • The first layer is composed of the x-ray converter, the second layer houses the thin-film transistor (TFT) array, and the third layer is a glass substrate. • TFT array is divided into square detector elements (DEL), each having a capacitor to store electrical charges and a switching transistor for readout • Flat panel systems are highly dose-efficient and provide quicker access to images compared with CR and film-screen. • The Spatial resolution of flat panel receptors is generally superior to the spatial resolution of CR. o Note: A system that uses a smaller DEL size has improved spatial resolution. Indirect: Thin Flat Panel • It is composed of three layers: Scintillation Layer, Photodiode layer and TFT array
Indirect: Thin Flat Panel
• It is composed of three layers: the scintillation layer, the photodiode layer, and the TFT array.

Scintillation Layer
• This is the layer wherein x-rays are converted into a small burst of light energy.
• It is made from either cesium iodide or gadolinium oxysulfide.
  – Cesium iodide is the preferred material for the scintillation layer because of its structure. Cesium iodide has a needle-like arrangement (structured phosphor) that produces a more focused light beam and increases spatial resolution, whereas gadolinium oxysulfide has a more turbid structure (powdered particles) that produces lateral spreading of light and decreases the spatial resolution of the image.
  – If cesium iodide is used as the scintillation layer, it is called a structured scintillator; if gadolinium oxysulfide is used, it is called an unstructured scintillator.

Photodiode Layer
• This layer converts the incoming light photons into electric charge.
• It is usually made up of amorphous silicon (amorphous = "without shape or form").
• The process of making this layer is known as plasma-enhanced chemical vapor deposition, in which the amorphous silicon is deposited on a glass substrate. Amorphous silicon is not crystalline but a fluid that can be painted onto a supporting surface such as the glass substrate.

TFT Layer
• The electrical charges are temporarily stored by capacitors in the TFT array before being digitized and processed in the computer.
• It is comprised of an array or matrix of detector elements (DELs).
• Each detector element has three components: the TFT, the capacitor, and the sensing area.
  o Sensing area – receives the data from the layer above it, which captures x-rays that are converted to light (indirect flat-panel detectors) or electrical charges (direct flat-panel detectors).
  o Capacitor – temporarily stores the electrical charge.
  o TFT/switch – acts to open and close the path for the analog signal leaving each DEL.
• Each DEL is comprised of a capture element, or pixel detector, which is the active element within each DEL.

Fill Factor
• This is an important feature of the pixel in the flat-panel TFT digital detector active matrix array.
• The ability of each DEL to produce high spatial resolution is designated by the percentage of active pixel area within each DEL, as illustrated in the sketch below.
• It is expressed as: Fill factor = sensing area of the pixel / total area of the pixel.
• The larger the sensing area of each detector element, the more radiation/light can be detected, and thus the greater the amount of signal generated.
• The fill factor affects both the spatial resolution and the contrast resolution.
  – Rationale: Detectors with high fill factors (large sensing areas) will provide better spatial and contrast resolution than detectors with low fill factors (small sensing areas).
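The fill factor formula above can be illustrated with a short sketch. The DEL pitch and sensing-area dimensions below are assumptions for illustration only.

```python
# Minimal sketch of the fill factor formula given above:
# fill factor = sensing area of the pixel / total area of the pixel.
# The DEL dimensions below are illustrative assumptions, not vendor data.

def fill_factor(sensing_area_um2: float, del_area_um2: float) -> float:
    """Fraction of a detector element (DEL) that is sensitive to the signal."""
    return sensing_area_um2 / del_area_um2

del_pitch_um = 140.0                    # assumed DEL side length
del_area = del_pitch_um ** 2            # total DEL area
sensing_area = 110.0 ** 2               # assumed active (sensing) region

ff = fill_factor(sensing_area, del_area)
print(f"Fill factor ≈ {ff:.0%}")        # larger sensing area -> more signal
```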
Charge-Coupled Device (CCD)
• It was developed in the 1970s as a highly sensitive device for military use.
• It is the oldest indirect-conversion radiography system used to acquire a digital image.
• It is a silicon-based semiconductor.
• The CCD has three principal advantages:
  o Sensitivity – it has a high sensitivity and responds to very low levels of visible light.
  o Dynamic range – it detects a wide range of light intensities, from very dim to intense light.
  o Size – 1 to 2 cm, with pixel sizes of 100 µm x 100 µm.
• The image has to be matched to the size of the CCD, which implies that the image must be reduced in size using fiber optics or a lens system.
• X-rays interact with the scintillation material, and the signal is transmitted by lenses or fiber optics to the charge-coupled device.
• During the transmission process, the lenses reduce the size of the projected visible-light image onto one or more capacitors that convert the light to an electric signal.

Structure and Function
• A CCD is made up of a photosensitive receptor and electronic substrate material in a silicon chip.
• The chip is made up of a polysilicon layer, a silicon dioxide layer, and a silicon substrate.
  – The polysilicon layer is coated with a photosensitive material and contains the electron gate.
  – Silicon dioxide acts as an insulator.
  – The silicon substrate contains the charge storage area and works like a capacitor.
• Each detector element contains three electrodes that hold electrons in an electric potential well.
• These DELs are formed by voltage gates that, at readout, are opened and closed to allow the flow of electrons.
• To collect the charge on the silicon chip, the voltage sign on the electrodes within each DEL must be changed.

Bucket Brigade Scheme
• It refers to the systematic collection of charges on the chip; a minimal simulation is sketched below.
• The charge from each pixel in a row is transferred to the next row and subsequently down all the columns to the final readout row.
• One issue in using a CCD is that it can cause the blooming effect.
  – The blooming effect is the overfilling of detector elements.
• This can be corrected by the construction of overflow drains in the detector elements.
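A minimal simulation of the bucket brigade readout, under the simplifying assumptions that charge shifts one full row per clock step and that blooming and overflow drains are ignored.

```python
# Minimal sketch of the CCD "bucket brigade" readout described above:
# each row's charge is shifted toward a final readout row, one row per
# clock step, and the readout row is serialized out. Illustrative only;
# blooming/overflow drains are not modeled.

charges = [
    [5, 1, 0],
    [2, 7, 3],
    [0, 4, 9],   # row closest to the readout register
]

readout_stream = []
while any(any(row) for row in charges):
    # The bottom row is transferred into the serial readout register.
    readout_stream.extend(charges[-1])
    # Every remaining row shifts one row closer to the readout register.
    charges = [[0] * len(charges[0])] + charges[:-1]

print(readout_stream)   # pixel values read out row by row
```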
Complementary Metal Oxide Semiconductor (CMOS)
• This was developed by NASA.
• It is highly efficient and takes up less space than a charge-coupled device.
• It is a semiconductor that conducts electricity in some conditions but not in others.
  – It is a good medium for the control of electrical current.
• Semiconductor materials do not conduct electricity well on their own.
  – Impurities (dopants) are added to increase conductivity.
• Typical semiconductor materials are:
  – Antimony, arsenic, boron, carbon, germanium, silicon, and sulfur.
  – Silicon is the most common semiconductor in integrated circuits.
• Other common semiconductor compounds include gallium arsenide, indium antimonide, and the oxides of most metals.
• When semiconductors are doped, they become full-scale conductors, with the extra electrons becoming negative or positive charge carriers.
• CMOS image sensors convert light to electrons that are stored in the capacitor located at each pixel.
• In CMOS, each detector element is isolated from its neighbors and is directly connected to a transistor.
• During readout, the charge is sent across the chip and read at one corner of the array.
• An ADC turns the pixel value into a digital value (see the quantization sketch below).

Direct Digital Capture / Non-Scintillation
• Semiconductor – amorphous selenium.
• It is both the capture element and the coupling element.
• The thickness of the amorphous selenium layer is relatively high, approximately 200 µm, and it is sandwiched between charged electrodes.
  – It is thick to compensate for the relatively low atomic number of selenium (Z = 34).
• An electrical field is applied across the selenium layer to limit lateral diffusion of electrons as they migrate toward the thin-film transistor array. By this means, excellent spatial resolution is maintained.
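As referenced above, the ADC step can be illustrated with a short quantization sketch. The 12-bit depth and the analog signal range are assumptions for illustration, not specifications of any detector.

```python
# Minimal sketch of the ADC step mentioned above: an analog pixel signal
# (arbitrary units, assumed to span 0..v_max) is quantized into a digital
# value of a given bit depth. Values are illustrative, not from any
# specific detector.

def quantize(analog_value: float, v_max: float = 1.0, bits: int = 12) -> int:
    """Map an analog signal in [0, v_max] to an integer code of `bits` depth."""
    levels = 2 ** bits
    code = int(analog_value / v_max * (levels - 1))
    return max(0, min(levels - 1, code))   # clip to the valid code range

for v in (0.0, 0.25, 0.5, 0.999, 1.2):     # 1.2 is clipped (saturated pixel)
    print(f"analog {v:5.3f} -> digital {quantize(v)}")
```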
Principles of Computed Radiography

Computed Radiography Terms:
● IP – Imaging Plate
● PD – Photodiode
● PMT – Photomultiplier Tube
● PSL – Photostimulable Luminescence
● PSP – Photostimulable Phosphor
● SP – Storage Phosphor
● SPS – Storage Phosphor Screen

Computed Radiography
● A "cassette-based" system that uses a special solid-state detector plate instead of a film inside a cassette.
● The use of CR requires the CR cassettes and phosphor plates, the CR readers and technologist quality control workstation, and a means to view the images, either a printer or a viewing station.
● The imaging plate can be reused and erased thousands of times.

Computed Radiography System
Three major components:
● Phosphor Imaging Plates (PIP), also known as photostimulable phosphor plates or screens, or simply imaging plates
  o To acquire x-ray image projections from the patient.
● PIP Reader (scanner), also known as the CR reader or phosphor imaging plate reader
  o To extract the electronic latent image from the imaging plate; it also removes the need for a darkroom (daylight handling).
● Workstation
  o For pre- and post-processing of the image.
  o Allows editing of whatever needs improvement (e.g., density, contrast, etc.).

Cassettes
• 14 x 17 – most commonly used
• 10 x 12
• 8 x 10
• 11 x 14
• 14 x 14

Historical Perspective
● 1973 – George Luckey, a research scientist, filed a patent application titled Apparatus and Method for Producing Images Corresponding to Patterns of High Energy Radiation.
● 1975 – George Luckey's patent (US 3,859,527) was approved, and Kodak patented the first scanned storage phosphor system that gave birth to modern computed radiography.
● 1980s – many companies applied for patents related to George Luckey's invention.
● 1983 – Fuji Medical Systems was the first to commercialize and complete the CR system.
● The first system consisted of a phosphor storage plate, a reader, and a laser printer to print the image onto film.

Patent – a license to be able to reproduce a specific person's product commercially.

Structure and Mechanism
● The CR cassette contains a solid-state plate called a photostimulable storage phosphor imaging plate (PSP or IP) that responds to radiation by trapping energy in the locations where the x-rays strike.

Photostimulable Luminescence
● It refers to the emission of light after stimulation by a relevant light source, or when exposed to a different light source.
● A characteristic of the imaging plate is to release the energy stored within the phosphor upon stimulation by another light source, producing a luminescent light signal.
● When the phosphors inside the imaging plate are stimulated, they produce light again.
● Example of a phosphor that exhibits this property – barium fluorohalide doped with europium (BaFBr:Eu or BaFl:Eu).
  o Note: Europium is present in only a small amount, since it allows the electrons to be trapped more effectively.

Imaging Plate
● It is housed in a rugged cassette that appears similar to a screen-film cassette.
● It is handled in the same manner as a screen-film cassette.
● Like traditional x-ray film, it also has several layers.

Imaging Plate Layers
● Overcoat/Protective layer: This is a very thin, tough, clear plastic that protects the phosphor layer from handling trauma. It is made of a fluorinated polymer material.
● Active/Phosphor layer: This is the active layer, the layer of photostimulable phosphor that traps electrons during exposure. It is typically made of barium fluorohalide phosphors and may contain a light-absorbing dye.
  o The dye differentially absorbs the stimulating light to prevent as much spread as possible.
  o Thickness: 100–250 µm.
  o Examples: barium fluorohalide and barium fluorobromide; these are activated by europium, which helps to efficiently trap the electrons inside the phosphor layer.
  o Barium fluorobromide crystals are 3–10 µm in size, randomly positioned, and held together by a binder, giving the imaging plate a turbid, mud-like structure.
● Reflective layer: This is a layer that sends light in a forward direction when it is released in the cassette reader. This layer may be black to reduce the spread of stimulating light and the escape of emitted light.
● Conductive layer: This layer grounds the plate to reduce static electricity problems and absorbs light to increase sharpness.
● Support layer: This is a semirigid material that provides the imaging sheet with strength and is a base for coating the other layers. It is made of polyethylene terephthalate.
● Backing layer: This is a soft polymer that protects the back of the cassette. A lead foil backing protects the IP from backscatter radiation.

Imaging Plate
● Cassettes contain a barcode label on the cassette or on the imaging plate, visible through a window in the cassette.
● The label enables the technologist to match the image information with the patient's identifying barcode.

CR Image Processing
1. When the photostimulable phosphor (PSP) screen is exposed to x-rays, energy is absorbed by the phosphor crystals.
2. After the exposure, the IP is inserted into a CR reader.
3. The IP is processed by a scanning system, or reader, which:
   1. Extracts the PSP screen from the cassette
   2. Moves the screen across a high-intensity scanning laser beam
   3. Blue-violet light is emitted via PSL
   4. The light energy is read by the photomultiplier tube, which converts the light into an electric signal
4. The electronic signal is converted into a digital format for manipulation, enhancement, viewing, and printing if desired.
5. The PSP screen is erased by a bright white light inside the reader, reloaded into the cassette, and is ready for the next exposure.
● The white light dumps all the remaining energy traps, allowing the plate to be reused.

Computed Radiography Acquisition

Latent Image Formation
1. The incident x-ray beam interacts with the photostimulable phosphors in the active layer of the imaging plate.
2. The x-ray energy is absorbed by the phosphor, and the absorbed energy excites and ionizes the europium atoms.
3. The electrons are raised to a higher energy state and are trapped in a so-called phosphor center in a metastable state.

Note:
● The number of trapped electrons per unit area is proportional to the intensity of x-rays at that location, and these trapped electrons constitute the latent image.
● Due to thermal motion, the trapped electrons are slowly liberated from the traps; even so, at room temperature the image should be readable up to 8 hours after exposure.

Visible Image Formation
1. The special CR cassette is inserted into the CR reader.
2. The CR reader automatically extracts the imaging plate from the special cassette.
3. A finely focused beam of red laser light with a beam diameter of 50 to 100 µm is directed at the PSP.
   ▪ The smaller the diameter of the laser beam, the greater the spatial resolution of the system.
4. The energy of the laser light is absorbed at the phosphor centers, and the trapped electrons are released.
5. The released electrons are recaptured by the europium atoms, which release blue-violet light (photostimulable luminescence).
6. The released light is collected by the photodetector, and the analog-to-digital converter (ADC) converts the resulting signal into a digital signal.
7. The digital signal is reconstructed by the computer system with special software into a grayscale image that can be seen on the monitor or printed.
8. The imaging plate is flooded with high-intensity light to remove any excess energy.

The CR Reader
● The CR reader is composed of mechanical, optical, and computer modules.

Mechanical Features
● When the CR cassette is inserted into the CR reader, the IP is removed and fitted to a precision drive mechanism.
● There are two scan directions:
  ● Fast scan
    o The movement of the laser across the imaging plate.
    o Also known as the "scan" direction.
  ● Slow scan
    o The movement of the imaging plate through the reader.
    o Also known as the "translation" or "sub-scan" direction.

Optical Features
● Components of the optical subsystem include the laser, beam-shaping optics, light-collecting optics, optical filters, and a photodetector.
● The laser is used as the source of stimulating light; it spreads as it travels to the rotating or oscillating reflector.
  o Laser stands for Light Amplification by Stimulated Emission of Radiation.
  o The wavelength is 633 nm (or 670 to 690 nm for solid-state laser diodes).
● The laser beam is focused onto a reflector by a lens system that keeps the laser diameter at about 100 µm.
  o Smaller laser beams are critical for producing images with high spatial resolution.
● Special beam optics keep the beam constant in size, shape, speed, and intensity.
  o Without special beam optics, the laser spot would become more angled and more elliptical across the plate, resulting in differing spatial resolution and inconsistent output signals.
● The laser beam is deflected across the IP.
  o While the laser scans across the imaging plate repeatedly, the plate itself moves through the reader; this plate movement is called translation.
  o The translation speed of the plate must be coordinated with the scan direction of the laser, or the spacing of the scan lines will be affected.
● The reader scans the plate with red light in a zigzag or raster pattern (see the sketch below).
● The emitted light from the IP is channeled into a funnel-shaped fiber-optic collection assembly and is directed to the photodetector (PMT and photodiode), which sends the signal to the ADC.

Computer Control
● The output of the photodetector/photomultiplier tube is a time-varying analog signal that is transmitted to a computer system with multiple functions.
● The analog signal is processed for amplitude, scale, and compression.
● It shapes the final signal before the image is formed.
● The analog signal is digitized with consideration of proper sampling and quantization.
● The image buffer is usually a hard disk. This is where a completed image can be stored temporarily until it is transferred to a workstation for interpretation or to an archival computer.
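As referenced above, here is a minimal sketch of the fast-scan/slow-scan raster readout. The line length, line count, and the random stand-in for the PMT signal are illustrative assumptions, not reader specifications.

```python
# Minimal sketch of the raster readout described above: the "fast scan"
# is the laser sweeping across one line of the plate while the PMT signal
# is sampled; the "slow scan" (translation) advances the plate one line
# between sweeps. Plate size and signal source are illustrative only.

import random

FAST_SAMPLES = 8    # samples per laser sweep (pixels per scan line)
SLOW_LINES = 5      # number of scan lines (plate translation steps)

def pmt_sample() -> float:
    """Stand-in for one PMT reading of photostimulated light (arbitrary units)."""
    return round(random.uniform(0.0, 1.0), 2)

image = []
for line in range(SLOW_LINES):            # slow scan: plate translation
    row = [pmt_sample() for _ in range(FAST_SAMPLES)]   # fast scan: laser sweep
    image.append(row)

for row in image:
    print(row)   # each printed row corresponds to one laser sweep
```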
Direct Digital Radiography
● Most digital radiography (cassette-less) systems use an x-ray absorber material coupled to a flat panel detector or a charge-coupled device (CCD) to form the image.
● DR uses an array of small solid-state detectors to convert incident x-ray photons directly into the digital image.
● In a DR system, no handling of a cassette is required, as this is a "cassette-less" system.

Direct Digital Radiography
● DR can be divided into two categories: indirect capture and direct capture.
● Indirect capture digital radiography devices absorb x-rays and convert them into light.
● Direct capture converts the incident x-ray energy directly into an electrical signal.

Summary Notes
● Both the conventional and CR systems use a cassette; the difference between them is what is inside the cassette.
  o Conventional: radiographic film.
  o CR: a reusable imaging plate.
● Imaging plates in general are also known as "PSP" (photostimulable storage phosphor plate/screen) or IP (imaging plate).
● "PIP" – phosphor imaging plate; the PIP reader removes the need for a darkroom.
● Cassette sizes: 14 x 17 (most used), 10 x 12, 8 x 10, 11 x 14, 14 x 14.
● Radiographic film is thinner, while the imaging plate is more rigid in structure.
● Protective layer (aka overcoat) – usually made of a fluorinated polymer.
● Phosphor layer (aka active layer) – 100–250 µm thick.
  o Examples: barium fluorohalide and barium fluorobromide phosphors, usually activated by europium because it helps to efficiently trap electrons inside the phosphor layer.
  o Phosphor crystal size: 3–10 µm, randomly positioned and held by a binder, giving a turbid, mud-like structure.
  o It also contains a light-absorbing dye that prevents light spread, making the detail much greater.
● Reflective layer – isotropic emission means the emission of light is in all directions.
● Conductive layer – absorbs weak light.
● Support layer (aka base) – made of polyethylene terephthalate.
● Backing layer – made of aluminum.
● Photostimulable luminescence – "PSL".
● ADC – analog-to-digital converter.
● Phosphor center, or F center – the number of trapped electrons per unit area is proportional to the intensity of x-rays at each location.
● Due to thermal motion, the trapped electrons are slowly liberated; by about 8 hours after exposure at room temperature, the image tends to fade.
● Photodetectors: photomultiplier tube and photodiode.
● The amount of laser light that spreads increases as the thickness of the phosphor layer increases.
● The laser wavelength is 633 nm; newer solid-state laser diodes operate at 670 to 690 nm.
● The movement of the plate (translation) must be coordinated with the scan direction of the laser to avoid artifacts.
● The stimulating laser light has an energy of about 2 electron volts, and the emitted light about 3 electron volts.
● A smaller laser beam diameter gives better spatial resolution.
● Sampling frequency – determines how often the analog signal is sampled into its discrete, digitized form; increasing the frequency increases the pixel density of the digital image and thus improves the spatial resolution (see the sketch below).
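A short sketch relating the sampling of each scan line to pixel pitch and the commonly used Nyquist limit on spatial frequency. The 35 cm plate width and the sample counts are assumptions for illustration, not values from any specific reader.

```python
# Small sketch relating sampling to spatial resolution, as noted above:
# a higher sampling frequency across a fixed plate width gives a smaller
# pixel pitch, and the Nyquist limit (1 / (2 * pitch)) caps the spatial
# frequencies, in lp/mm, that the digital image can represent. Plate
# width and sample counts are illustrative, not from any specific reader.

PLATE_WIDTH_MM = 350.0   # assumed width of a 35 cm (14 in) imaging plate

def nyquist_lp_per_mm(samples_per_line: int, width_mm: float = PLATE_WIDTH_MM) -> float:
    """Maximum spatial frequency supported by the given sampling of one scan line."""
    pixel_pitch_mm = width_mm / samples_per_line
    return 1.0 / (2.0 * pixel_pitch_mm)

for samples in (1024, 2048, 4096):
    pitch = PLATE_WIDTH_MM / samples
    print(f"{samples} samples/line -> pitch {pitch:.3f} mm, "
          f"Nyquist limit {nyquist_lp_per_mm(samples):.1f} lp/mm")
```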