Image Quality, Artifacts and Its Remedies in CT - Avinesh Shrestha
CT is one of the most frequently used diagnostic imaging modalities in radiology. Knowledge about image quality and artifacts is essential when diagnosing a patient with the help of CT images. Moreover, radiologic technologists should be well aware of the ways to identify and eliminate or minimize artifacts in CT for better image quality.
2. • SCANNING PARAMETERS
1) milliampere (mA) level
2) scan time
3) slice thickness
4) field of view
5) reconstruction algorithm
6) kilovolt peak (kVp)
7) pitch (helical scans only)
- The total x-ray beam exposure in CT depends on the combination of mA setting, scan time, and kVp setting. mA and scan time together are referred to as mAs, which defines the quantity of x-ray energy; the kVp setting defines the quality (average energy) of the x-ray beam.
3. • Milliampere-Second Setting (mAs):
- Governed by thermionic emission: the filament current determines the tube current.
- Increasing the mA increases the number of electrons that will produce x-ray photons.
- Use of a small filament size concentrates the focal spot, reducing the penumbra, but cannot tolerate ↑mA.
- Larger filament → ↓resolution.
4. - Scan time is the time the x-ray beam is on for the collection of data for each slice (the time it takes for the gantry to make a complete 360° rotation).
- Typical choices of scan time for a full rotation range from 0.5 to 2 seconds; in cardiac CT, 0.35 to 0.45 seconds.
- Higher mA settings allow shorter scan times to be used. A short scan time is critical in avoiding image degradation as a result of patient motion.
- Use a short scan time to avoid the effects of involuntary movement such as peristalsis and cardiac motion.
5. - The degree to which involuntary motion affects an image is largely dependent on the area scanned.
- ↑mAs → ↑heat produced → need ↑cooling.
- The factors affecting the mAs selected for a CT study are basically the same as in conventional radiography: the thicker and denser the part being examined, the more mAs is required to produce an adequate image.
- Differences in mAs of less than 20% may not result in a visible change on the image.
- For example, 280 mAs can be reached with several combinations: 0.4 seconds and 700 mA (280 mAs), 0.6 seconds and 460 mA (276 mAs), 0.8 seconds and 340 mA (272 mAs), 1.0 second and 280 mA (280 mAs), and 2.0 seconds and 140 mA (280 mAs).
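Since mAs is simply the product of tube current and scan time, the equivalence of these combinations is easy to verify. A minimal sketch using the values from the example above:

```python
# mAs = tube current (mA) x scan time (s)
combos = [(0.4, 700), (0.6, 460), (0.8, 340), (1.0, 280), (2.0, 140)]

for time_s, ma in combos:
    mas = time_s * ma
    print(f"{time_s:.1f} s x {ma} mA = {mas:.0f} mAs")
# 280, 276, 272, 280, 280 mAs: all within 20% of each other,
# so the visible effect on the image is essentially the same.
```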
6. • Tube Voltage or Kilovolt Peak
- In CT, kVp does not change contrast as directly as it does in film-screen radiography.
- ↑kVp → ↑beam intensity → ↑penetrability.
- In routine adult exams choose 120 to 140 kVp; in pediatric exams, 80 kVp.
• Impact of mAs and kVp Settings on Radiation Dose
- To ↓radiation dose to the patient:
1) ↓mAs + constant kVp
2) constant mAs + ↓kVp
So the appropriate selection of mAs and kVp is critical to optimize both radiation dose to the patient and image quality.
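As a rough illustration of this trade-off: dose scales approximately linearly with mAs and, as a common rule of thumb, roughly with the square of kVp (the exact exponent depends on the scanner and protocol). A minimal sketch under that assumption, with made-up reference values:

```python
def relative_dose(mas, kvp, ref_mas=280, ref_kvp=120, kvp_exponent=2.0):
    """Approximate relative dose: linear in mAs, ~kVp^2 (rule of thumb only)."""
    return (mas / ref_mas) * (kvp / ref_kvp) ** kvp_exponent

# Strategy 1: halving mAs at constant kVp roughly halves the dose
print(relative_dose(140, 120))                               # ~0.50
# Strategy 2: dropping 140 kVp to 120 kVp at constant mAs
print(relative_dose(280, 120) / relative_dose(280, 140))     # ~0.73, a ~27% reduction
```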
7. • Two reasons to change mAs rather than kVp:
- First, the choice of mA is more flexible (from 20 to 800 mA).
- Second, its effect on image quality is more straightforward and predictable.
• The Uncoupling Effect
- The relationship between radiation dose and CT image quality is complex, unlike film-screen systems (where ↑kVp + ↑mAs → ↑patient dose and an "over-exposed" film).
- The uncoupling effect does not play a role when the mA or kVp setting is too low, because quantum noise will result and provide evidence of the inadequate exposure settings.
8. • Uncoupling Effect: using digital technology, the image quality is not directly linked to the dose, so even when an mA or kVp setting that is too high is used, a good image results.
• Automatic Tube Current Modulation
- Software that automatically adjusts the tube current (mAs) to fit specific anatomic regions is increasingly used in clinical practice.
- These automatic exposure control techniques report a 15% to 40% reduction in dose.
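The idea behind tube current modulation can be illustrated with a toy model: scale the tube current at each table position to the local patient attenuation, so that the transmitted photon count, and hence the noise level, stays roughly constant. This is only a sketch of the principle, not any vendor's actual algorithm; the path lengths, reference values, and attenuation coefficient below are assumed for illustration.

```python
import math

# Hypothetical water-equivalent path lengths (cm) along z: shoulders, chest, abdomen
path_cm = [36, 24, 30]
mu = 0.19            # approx. linear attenuation coefficient of water, 1/cm
ref_ma, ref_cm = 200, 30  # reference current chosen for a 30 cm section

for z, d in enumerate(path_cm):
    # Keep transmitted photons roughly constant: mA * exp(-mu * d) = const
    ma = ref_ma * math.exp(mu * (d - ref_cm))
    print(f"position {z}: path {d} cm -> {ma:.0f} mA")
# Thicker regions get more current, thinner regions less, which is
# where the reported 15%-40% dose savings come from.
```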
9. • Slice Thickness
- ↓slice thickness → ↑detail → ↑spatial resolution.
• Field of View
- The scan field of view (SFOV) determines the area, within the gantry, from which raw data are acquired; the display field of view (DFOV) determines how much, and what section, of the collected raw data are used to create an image.
• Reconstruction Algorithms
- By choosing a specific algorithm, the operator selects how the data are filtered in the reconstruction process. Filter functions can only be applied to raw data (not image data). Therefore, to reconstruct an image using a different filter function, the raw data must be available for that image. It is important to differentiate reconstruction algorithms from merely setting a window width and level.
10. • Pitch
- Pitch is the relationship between slice thickness and table travel per rotation during a helical scan acquisition, as in the sketch below.
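For single-slice helical scanning this relationship is commonly expressed as pitch = table travel per rotation / slice thickness (for MDCT, the total beam collimation takes the place of the single slice thickness). A minimal sketch of that formula:

```python
def pitch(table_travel_mm, collimation_mm):
    """Helical pitch: table travel per 360-degree rotation / total beam collimation."""
    return table_travel_mm / collimation_mm

print(pitch(10, 10))  # 1.0: contiguous data acquisition
print(pitch(15, 10))  # 1.5: gaps between rotations -> lower dose, reduced resolution
```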
• SCAN GEOMETRY
- Another factor is the tube arc: 180°, 360°, and 400° ("overscan"). [360° (full scan) + 40° (typical field of view) = the 400° overscan used in 4th-generation scanners.]
- Overscan → overlap of data from the first and last tube positions, reducing motion artifacts.
11. • IMAGE QUALITY DEFINED
- In CT, image quality is directly related to its usefulness in providing an accurate diagnosis.
- Image quality relates to how well the image represents the object scanned. However, the true test of the quality of a specific image is whether it serves the purpose for which it was acquired.
- The two main features used to measure image quality are:
Spatial Resolution: the ability to resolve (as separate objects) small, high-contrast objects.
Contrast Resolution: the ability to differentiate between objects with densities very similar to their background.
12. • SPATIAL RESOLUTION
- Spatial resolution can be measured using two methods:
1) It can be measured directly.
2) It can be calculated by analyzing the spread of information within the system. This latter data analysis is known as the modulation transfer function (MTF).
• Direct Measurement of Spatial Resolution
- Using a line-pairs phantom (made of acrylic with closely spaced metal strips).
13. - The phantom is scanned, and the number of strips that are visible is counted.
- Line pair = line + space.
- If 20 line pairs can be seen in a 1-cm section in an image of the phantom, the spatial resolution is reported as 20 line pairs per centimeter (lp/cm).
- Spatial Frequency is the number of line pairs visible per unit length.
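The spatial frequency reported from a line-pairs phantom converts directly to the smallest resolvable object size: one line pair spans a line plus a gap, so each line is half the pair width. A quick check using the 20 lp/cm figure from the example above:

```python
lp_per_cm = 20
pair_width_mm = 10 / lp_per_cm           # 1 cm = 10 mm -> 0.5 mm per line pair
smallest_object_mm = pair_width_mm / 2   # one line is half a pair -> 0.25 mm
print(pair_width_mm, smallest_object_mm) # 0.5 0.25
```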
14. • Evaluating Spatial Resolution Using the MTF
- Used in CT and also in conventional radiography.
- It is often used to graphically represent a system's capability of passing information to the observer.
- The MTF is the ratio of the accuracy of the image compared with the actual object scanned.
- If the image reproduced the object exactly, the MTF of the system would have a value of 1.
- If the image were blank and contained no information about the object, the MTF would be 0.
15. - As expected, the MTF graph shows that as the size of the object increases, the MTF also increases.
- The relationship is not linear; hence an object twice the size of another object may not necessarily possess twice the image fidelity. (The MTF indicates image fidelity.)
16. - The limiting resolution is the highest spatial frequency possible on a given CT system, at an MTF equal to 0.1. In this example, the limiting resolution of scanner A is 4.3 and that of scanner B is 5.0.
- Spatial resolution in conventional radiography is better than in CT.
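In practice, the limiting resolution is read off a measured MTF curve by finding the spatial frequency at which the MTF falls to 0.1. A sketch with hypothetical MTF samples (the curve below is invented, and the units are assumed to be lp/cm; it interpolates to a value in the same range as scanner A above):

```python
import numpy as np

# Hypothetical measured MTF curve (spatial frequency in lp/cm)
freq = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
mtf  = np.array([1.0, 0.85, 0.55, 0.30, 0.14, 0.03])

# np.interp needs ascending x, so reverse the decreasing MTF curve,
# then find the frequency where the MTF drops to 0.1
limiting = np.interp(0.1, mtf[::-1], freq[::-1])
print(f"limiting resolution ~ {limiting:.1f} lp/cm")  # ~4.4 lp/cm
```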
17. • In-Plane Versus Longitudinal Resolution
- In-plane resolution: the resolution in the x-y direction.
- Longitudinal resolution: the resolution in the z direction.
• Factors Affecting Spatial Resolution
- Spatial resolution depends on the quality of the raw data and on the reconstruction method.
• Matrix Size, Display Field of View, Pixel Size
- Pixel size plays an important role in the in-plane spatial resolution of an image.
- The DFOV determines how much raw data will be used to reconstruct the image.
18. - Pixel size = DFOV / matrix size.
- If an object is smaller than a pixel, its density will be averaged with the density of other tissues contained in the pixel, creating a less accurate image.
- When pixels are smaller, it is less likely that they will contain different densities, therefore decreasing the likelihood of volume averaging.
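The pixel-size formula is a one-line calculation. For instance, with a 512 x 512 matrix (a typical value, assumed here), reducing the DFOV directly shrinks the pixel:

```python
def pixel_size_mm(dfov_mm, matrix):
    """In-plane pixel size: display field of view divided by matrix dimension."""
    return dfov_mm / matrix

print(pixel_size_mm(350, 512))  # ~0.68 mm pixels for a 35 cm DFOV
print(pixel_size_mm(180, 512))  # ~0.35 mm pixels for a targeted 18 cm DFOV
```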
20. • Slice Thickness
- Thinner slices produce sharper images because, to create an image, the system must flatten the scan thickness (a volume) into two dimensions (a flat image). The thicker the slice, the more flattening is necessary.
- The matrix divides data into squares with an x and y dimension. The operator's selection of slice thickness accounts for the z axis.
- Slice thickness plays an important role in volume averaging, thereby affecting spatial resolution in the image. New CT scanners allow for very thin slices; often the goal is to produce isotropic voxels.
21. - An isotropic voxel is a cube, measuring the same in the x, y, and z directions.
- When the imaging voxel is equal in size in all dimensions, there is no loss of information when data are reformatted in a different plane.
- An isotropic voxel ensures that there is no data loss with either multiplanar reformation (MPR) or volume rendering (VR).
- Sampling Theorem (Nyquist sampling theorem): because an object may not lie entirely within a pixel, the pixel dimension should be half the size of the object to increase the likelihood of that object being resolved.
22. - This theorem accounts for the element of random chance in the creation of a CT image.
- Random chance plays a role in whether a small object will be seen on the reconstructed image. In (A), (B), and (C) the object to be displayed is the same size as the pixel. The three figures show different scenarios as to how the object could be reconstructed, each resulting in a different level of volume averaging. In (D) and (E), a smaller pixel size is used, and the scenarios regarding the likelihood of volume averaging improve.
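The random-chance scenarios in (A) through (E) can be simulated: place a 1-D object at random offsets against a pixel grid and record how completely the best-covered pixel is filled. With pixels the same size as the object, an unlucky offset splits the object evenly between two pixels; with pixels half the object's size (the Nyquist condition), some pixel is always fully covered. A toy sketch, not from the source:

```python
import random

def best_pixel_fill(obj_size, pixel_size, offset):
    """Fraction of the most-filled pixel covered by a 1-D object at a given offset."""
    start, end = offset, offset + obj_size
    best, p = 0.0, 0.0
    while p < end:
        overlap = max(0.0, min(end, p + pixel_size) - max(start, p))
        best = max(best, overlap / pixel_size)
        p += pixel_size
    return best

random.seed(0)
for pixel in (1.0, 0.5):  # pixel equal to, then half, the object size
    worst = min(best_pixel_fill(1.0, pixel, random.random()) for _ in range(10000))
    print(f"pixel size {pixel}: worst-case fill of brightest pixel = {worst:.2f}")
# ~0.50 for equal-size pixels (object split across two pixels),
# 1.00 for half-size pixels (some pixel is always fully covered).
```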
23. • Reconstruction Algorithm
- The appropriate reconstruction algorithm depends on which parts of the data should be enhanced or suppressed to optimize the image for diagnosis.
- Smooth filters average the data more heavily by reducing the difference between adjacent pixels → ↓artifact but ↓spatial resolution.
- For high-detail anatomy such as the internal auditory canal, in which the tiny bones of the inner ear are displayed, the image can be reconstructed for spatial rather than contrast fidelity (these types of high-contrast reconstruction algorithms are often called bone or detail filters).
- The high-contrast filter produces a noisier image.
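The smooth-versus-detail trade-off can be mimicked with simple 1-D convolution kernels. Note this is only an analogy: real reconstruction filters are applied to the raw projection data, not to the reconstructed image, and the kernels and noise level here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
edge = np.zeros(20); edge[10:] = 100.0   # 1-D edge between two "tissues"
signal = edge + rng.normal(0, 5, 20)     # add quantum-like noise

# Soft-tissue-like (smoothing) vs bone/detail-like (edge-enhancing) kernels
smooth = np.convolve(signal, [1/3, 1/3, 1/3], mode="same")
sharp  = np.convolve(signal, [-0.5, 2.0, -0.5], mode="same")

# Smoothing suppresses noise but blurs the edge; sharpening does the opposite
print(f"noise std in flat region: smooth {smooth[2:8].std():.1f}, sharp {sharp[2:8].std():.1f}")
```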
24. • Focal Spot Size
- Larger focal spots → ↑geometric unsharpness → ↓spatial resolution.
• Pitch
- Increasing the pitch reduces resolution.
- The effect is greater in SDCT than in MDCT systems because of differences in data interpolation.
• Patient Motion
- Motion creates blurring in the image and degrades spatial resolution (use the minimum scan time).
25. • Contrast Resolution (low-contrast sensitivity or low-contrast detectability):
- Conventional radiography requires about a 5% difference in contrast from the background material, whereas CT can show a 0.5% contrast variation.
- Contrast resolution is measured using phantoms that contain objects, typically cylindrical, of varying sizes and with a small difference in density (typically from 4 to 10 HU).
26. • Noise
- Noise is caused by the combination of many factors, the most prevalent being quantum noise, or quantum mottle.
- Quantum mottle occurs when an insufficient number of photons is detected.
- In CT, the number of x-ray photons detected per pixel is often expressed through the signal-to-noise ratio (SNR).
27. • Factors Affecting Contrast Resolution:
1) mAs/Dose
- Doubling the mAs of the study increases the SNR by about 40% → ↓quantum noise, but the dose increases linearly with mAs per scan (see the sketch after this list).
2) Pixel Size
- As pixel size decreases, the number of detected x-ray photons per pixel decreases → ↑noise.
3) Slice Thickness
- Because thicker slices allow more photons to reach the detectors, they have a better SNR and appear less noisy.
28. 4) Reconstruction Algorithm
- Bone algorithms produce lower contrast resolution (but better spatial resolution), whereas soft-tissue algorithms improve contrast resolution at the expense of spatial resolution.
5) Patient Size
- Larger patients attenuate more x-ray photons, leaving fewer to reach the detectors. This reduces SNR, increases noise, and results in lower contrast resolution.
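The 40% figure in item 1 follows from Poisson counting statistics: the detected photon count scales linearly with mAs, while the SNR scales with its square root, so doubling the mAs multiplies the SNR by √2 ≈ 1.41. A quick check:

```python
import math

def snr_ratio(mas_new, mas_old):
    """SNR scales with sqrt(photon count), and the count scales with mAs."""
    return math.sqrt(mas_new / mas_old)

print(snr_ratio(2 * 280, 280))  # ~1.41 -> a ~40% SNR gain for double the dose
```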
• Other Contrast Resolution Considerations
- Small objects are more difficult to see than large objects.
- The relationship between object size and visibility is called the contrast-detail response.
29. • TEMPORAL RESOLUTION
- The temporal resolution of a system refers to how rapidly data are acquired.
- Temporal resolution is controlled by gantry rotation speed, the number of detector channels in the system, and the speed with which the system can record changing signals.
- High temporal resolution is of particular importance when imaging moving structures (e.g., the heart) and for studies dependent on the dynamic flow of iodinated contrast media (e.g., CT angiography, perfusion studies).
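As a closing illustration: for a single-source scanner, a half-scan (180° plus fan angle) reconstruction gives an effective temporal resolution of roughly half the gantry rotation time, which is why the short cardiac rotation times mentioned earlier matter. A minimal sketch of that approximation (treating dual-source systems as halving it again is a further simplifying assumption):

```python
def temporal_resolution_s(rotation_time_s, sources=1):
    """Approximate half-scan temporal resolution of a CT gantry."""
    return rotation_time_s / (2 * sources)

print(temporal_resolution_s(0.35))             # ~0.175 s, single source
print(temporal_resolution_s(0.35, sources=2))  # ~0.088 s, dual source
```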