The media that digital images are stored on have large capacities (equivalent to many rolls of film), are erasable and reusable, and are durable: they do not degrade physically or chemically over time. You can attach descriptions, dates, and times to digital images to help you organize them into folders that work like digital photo albums. Digital images can be enhanced, altered, reproduced, inserted into creative projects, and shared over the Internet.
But the resolution of digital images may not be as good as that of film, especially in enlarged images taken by low-cost digital cameras, although resolution has been improving over time. Viewing images on a digital camera's screen also consumes considerable battery power, although the battery can be recharged.
2. The Digital Sensors

Instead of film, a digital camera has a sensor that converts light into electrical charges. The image sensor employed by most digital cameras is a charge-coupled device (CCD). Some cameras use complementary metal-oxide-semiconductor (CMOS) technology instead. Both CCD and CMOS image sensors convert light into electrons, and both are manufactured in silicon foundries with similar equipment. But different manufacturing processes and device architectures make the two imagers quite different in both capability and performance. For example:
CCD sensors create high-quality, low-noise images. CMOS sensors are
generally more susceptible to noise.
CCD sensors have been mass-produced for a longer period of time, so
they are more mature. They tend to have higher-quality pixels, and more of them.
Because each pixel on a CMOS sensor has several transistors
located next to it, the light sensitivity of a CMOS chip is lower. Many
of the photons hit the transistors instead of the photodiode.
CMOS sensors traditionally consume little power.
CCDs, on the other hand, use a process that consumes lots of power,
as much as 100 times more power than an equivalent CMOS sensor.
More detail on the technologies behind CCD and CMOS imagers
and their future potential is provided below. Then we will
discuss their resolution and look at how the camera adds color
to its images.
2.1 CCD imagers

Developed in the 1970s and 1980s specifically for imaging applications, CCD technology and fabrication processes were optimized for the best possible optical properties and image quality. The technology continues to improve and is still the choice in applications where image quality is the primary requirement or market-share factor.

[Figure: A CCD sensor]

A CCD comprises photosites, typically arranged in an X-Y matrix of rows and columns. Each photosite, in turn, comprises a photodiode and an adjacent charge-holding region, which is shielded from light. The photodiode converts light (photons) into charge (electrons); the number of electrons collected is proportional to the light intensity. Typically, light is collected over the entire imager simultaneously and then transferred to the adjacent charge-transfer cells within the columns.

[Figure: Interline transfer CCD]
Next, the charge is read out: each row of data is moved to a separate horizontal charge-transfer register. Charge packets for each row are read out serially and sensed by a charge-to-voltage conversion and amplifier section (see image below). This architecture produces a low-noise, high-performance imager. That optimization, however, makes integrating other electronics onto the silicon impractical. In addition, operating the CCD requires application of several clock signals, clock levels, and bias voltages, complicating system integration and increasing power consumption, overall system size, and cost.

[Figure: CMOS and CCD sensor architectures]
2.2 CMOS imagers

A CMOS imager, on the other hand, is made with standard silicon processes in high-volume foundries. Peripheral electronics, such as digital logic, clock drivers, or analog-to-digital converters, can be readily integrated with the same fabrication process. CMOS imagers can also benefit from process and material improvements made in mainstream semiconductor technology.

[Figure: A CMOS image sensor]

To achieve these benefits, the CMOS sensor's architecture is arranged more like a memory cell or flat-panel display. Each photosite contains a photodiode that converts light to electrons, a charge-to-voltage conversion section, an amplifier section, and reset and select transistors. Overlaying the entire sensor is a grid of metal interconnects that apply timing and readout signals, along with an array of column output signal interconnects. The column lines connect to a set of decode and readout (multiplexing) electronics that are arranged by column outside of the pixel array. This architecture allows the signals from the entire array, from subsections, or even from a single pixel to be read out by a simple X-Y addressing technique, something a CCD can't do.
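The readout difference described above can be sketched in a few lines of Python. This is only an illustrative model (the 4x4 "sensor" and its charge values are hypothetical, not a real driver interface): a CCD must shift every charge packet out serially, while a CMOS imager can address a single photosite directly, like a memory cell.

```python
# Toy model: contrast CCD serial readout with CMOS X-Y addressing.
# The 4x4 "sensor" is just a grid of hypothetical charge values.
sensor = [[row * 4 + col for col in range(4)] for row in range(4)]

def ccd_readout(frame):
    """CCD-style: every charge packet is shifted out serially, row by row."""
    out = []
    for row in frame:
        out.extend(row)          # whole row moves into the horizontal register
    return out                   # values emerge one at a time, in fixed order

def cmos_read_pixel(frame, x, y):
    """CMOS-style: row/column decoders select a single photosite directly."""
    return frame[y][x]           # random access, like reading a memory cell

full = ccd_readout(sensor)           # must read everything to get any pixel
one = cmos_read_pixel(sensor, 2, 1)  # reads just the photosite at x=2, y=1
```

The point of the sketch is the access pattern, not the electronics: subsection or single-pixel readout falls out of the X-Y addressing for free, while the CCD model has no way to skip ahead.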
The biggest opportunities for CMOS sensors lie in new product
categories for which they are uniquely suited. Keys to their success are:
Lower power usage
Integration of additional circuitry on-chip
Lower system cost
Such features make CMOS sensors ideal for mobile, multifunction
products like Kodak’s mc3 or imaging attachments like the PalmPix.
Still, if CMOS sensors offer all of these benefits, why haven’t they
completely displaced CCDs? There are a number of reasons; some are
technical or performance-related, and others relate more to the
growing maturity of the technology. CCDs have been mass-produced
for over 25 years whereas CMOS technology has only just begun the
mass production phase. Rapid adoption was also hindered because
some early implementations of these devices were disappointing: they
delivered poor imaging performance and poor image quality.
CMOS imaging technology needed further development before it could deliver image quality good enough for commercial products. Scientists and engineers are applying the optical science and image-processing experience derived from more than 25 years of work with CCD sensors and digital cameras to develop and characterize CMOS sensors, and to define modifications in standard CMOS manufacturing lines and equipment that make low-noise, good-quality sensors. Understanding and accounting for numerous process trade-offs has enabled engineers to create CMOS devices that deliver leading imaging performance.

As the next figure shows, the current sensor market divides into two areas: a high-performance, low-volume branch and a low-cost, high-volume branch. The high-performance branch contains applications that will continue to be dominated by CCD technology, though CMOS technology will find market share there too, especially in lower-cost or more portable versions of these products. The second area is where most of the CMOS activity will be. Here, in many applications, CCD sensors will be replaced with CMOS sensors. These could include some security applications, biometrics, and most consumer digital cameras.
Most of the growth, though, will likely come from products that can employ imaging technology—automotive, computer video, optical mice, imaging phones, toys, bar code readers and a host of hybrid products that can now include imaging. These kinds of products will require millions of CMOS sensors.
[Figure: the function to be sampled; the sampled function]
2.3 Digital quantization / sampling

While almost all of the real-world objects we want to take pictures of
are analog, CCD and CMOS sensors quantize a picture into many pixels (spatial quantization). The brightness of each pixel is also quantized into many levels and represented by a string of 0s and 1s (brightness quantization).
Essentially, an image, after being quantized spatially and in brightness, becomes
a long string of 0s and 1s, which is exactly what computers work with.
Quantization is the result of sampling.
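As a minimal sketch of the brightness-quantization step (the function name, bit depth, and sample values below are illustrative, not from any camera's firmware), here is how an analog intensity in [0.0, 1.0] becomes an 8-bit level and then a string of 0s and 1s:

```python
# Brightness quantization: map an analog intensity in [0.0, 1.0] to one of
# 2**bits discrete levels, written out as a string of 0s and 1s.
def quantize(intensity, bits=8):
    levels = 2 ** bits
    level = min(int(intensity * levels), levels - 1)  # clamp 1.0 to the top level
    return format(level, f"0{bits}b")

# A tiny 2x2 "image" of analog brightnesses becomes one long bit string,
# which is the form a computer actually stores and processes:
analog_image = [[0.0, 0.5], [0.25, 1.0]]
bitstream = "".join(quantize(p) for row in analog_image for p in row)
```

Spatial quantization is implicit in the 2x2 grid itself; brightness quantization is the `quantize` call; the joined `bitstream` is the "long string of 0s and 1s" the text describes.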
Any analog (spatial or brightness) function can be decomposed into its Fourier frequency components. Shannon's sampling theorem says that if the function is sampled at least twice per cycle of its highest frequency component, the original function can always be retrieved.

Resolution

The number of pixels in an image is called its resolution; higher resolution captures more detail. The more pixels a camera has, the more its pictures can be enlarged without becoming blurry or grainy. However, the camera's resolution (the number of pixels in a picture) need not exceed what the camera lens can resolve. The resolution limit (smallest resolvable detail) of a camera lens is inversely proportional to the diameter of the lens.
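Shannon's theorem can be illustrated through its converse: sampling *below* twice the signal frequency causes aliasing, so the original function cannot be retrieved. In this sketch (the frequencies and sample count are arbitrary choices for illustration), a 7 Hz cosine sampled at only 10 Hz produces exactly the same samples as a 3 Hz cosine:

```python
import math

# Aliasing demo: the Nyquist limit at a 10 Hz sampling rate is 5 Hz, so a
# 7 Hz cosine is undersampled and "folds down" to 10 - 7 = 3 Hz.
fs = 10                          # sampling rate in Hz (below 2 * 7 = 14 Hz)
samples_7hz = [math.cos(2 * math.pi * 7 * n / fs) for n in range(20)]
samples_3hz = [math.cos(2 * math.pi * 3 * n / fs) for n in range(20)]

# The two sample sequences are numerically identical, so no algorithm could
# tell the 7 Hz signal apart from the 3 Hz one after sampling.
aliased = all(abs(a - b) < 1e-9 for a, b in zip(samples_7hz, samples_3hz))
```

Sampled at 14 Hz or faster, the two signals would produce different sample sequences and the 7 Hz cosine would be recoverable, which is exactly what the theorem promises.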
Some typical resolutions include:

256x256 - Found on very cheap cameras; this resolution is so low that the picture quality is almost always unacceptable. This is about 65,000 total pixels.
640x480 - The low end on most "real" cameras. This resolution is suited to e-mailing pictures or posting pictures on a Web site.
1216x912 - A "megapixel" image size (1,109,000 total pixels), good for printing pictures.
1600x1200 - With almost 2 million total pixels, this is "high resolution." You can print a 4x5-inch print taken at this resolution with the same quality that you would get from a photo lab.
2240x1680 - Found on 4-megapixel cameras, the current standard; this allows even larger printed photos, with good quality for prints up to 13.5x9 inches.
4064x2704 - A top-of-the-line digital camera with 11.1 megapixels takes pictures at this resolution. At this setting, you can create 16x20-inch prints with no loss of picture quality.
You may have noticed that the number of pixels and the maximum resolution don't quite compute. For example, a 2.1-megapixel camera can produce images with a resolution of 1600x1200, or 1,920,000 pixels. But "2.1 megapixel" means there should be at least 2,100,000 pixels. This isn't an error from rounding off or binary mathematical trickery. There is a real discrepancy between these numbers because the CCD has to include circuitry for the ADC to measure the charge. This circuitry is dyed black so that it doesn't absorb light and distort the image. High-end consumer cameras can capture over 12 million pixels. Some professional cameras support over 16 million pixels, or 20 million pixels for large-format cameras. For comparison, Hewlett Packard estimates that the quality of 35mm film is about 20 million pixels .
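The arithmetic behind that discrepancy is easy to check directly (the numbers are the ones quoted in the paragraph above):

```python
# A "2.1 megapixel" camera's largest image size accounts for fewer pixels
# than the sensor nominally has, because some photosites are given over to
# readout circuitry rather than image data.
width, height = 1600, 1200
image_pixels = width * height            # pixels actually in the image
advertised = 2_100_000                   # what "2.1 megapixel" implies
unused = advertised - image_pixels       # photosites not producing image data
```

So 180,000 of the advertised photosites, almost 9 percent, never appear in the output image.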
Unfortunately, each photosite is colorblind: it only keeps track of the total
intensity of the light that strikes its surface. In order to get a full-color image,
most sensors use filtering to look at the light in its three primary colors.
Once the camera records all three colors, it combines them to create the full spectrum of colors.
There are several ways of recording the three colors in a digital camera.
The highest quality cameras use three separate sensors, each with a
different filter. A beam splitter directs light to the different sensors. Think
of the light entering the camera as water flowing through a pipe. Using a
beam splitter would be like dividing an identical amount of water into three
different pipes. Each sensor gets an identical look at the image; but because
of the filters, each sensor only responds to one of the primary colors.
The advantage of this method is that the camera records each of the three colors at each pixel location. Unfortunately, cameras that use this method tend to be bulky and expensive.

Another method is to rotate a series of red, blue and green filters in front of a single sensor. The sensor records three separate images in rapid succession. This method also provides information on all three colors at each pixel location; but since the three images aren't taken at precisely the same moment, both the camera and the target of the photo must remain stationary for all three readings. This isn't practical for candid photography or handheld cameras.

Both of these methods work well for professional studio cameras, but they're not necessarily practical for casual snapshots. Next, we'll look at filtering methods that are more suited to small, efficient cameras.
A more economical and practical way to record the primary colors is to permanently place a filter called a color filter array over each individual photosite. By breaking up the sensor into a variety of red, blue and green pixels, it is possible to get enough information in the general vicinity of each sensor to make very accurate guesses about the true color at that location. This process of looking at the other pixels in the neighborhood of a sensor and making an educated guess is called interpolation.

The most common pattern of filters is the Bayer filter pattern. This pattern alternates a row of red and green filters with a row of blue and green filters. The pixels are not evenly divided: there are as many green pixels as there are blue and red combined. This is because the human eye is not equally sensitive to all three colors. It's necessary to include more information from the green pixels in order to create an image that the eye will perceive as a "true color."
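The Bayer layout just described is easy to generate programmatically. This sketch (the function name is made up for illustration) builds the alternating red/green and green/blue rows and confirms that half of all photosites end up green:

```python
# Generate the Bayer color filter array: rows alternate red/green and
# green/blue, so green photosites equal red and blue combined.
def bayer_pattern(rows, cols):
    """Return the filter color ('R', 'G', or 'B') at each photosite."""
    pattern = []
    for y in range(rows):
        row = []
        for x in range(cols):
            if y % 2 == 0:                      # red/green row
                row.append('R' if x % 2 == 0 else 'G')
            else:                               # green/blue row
                row.append('G' if x % 2 == 0 else 'B')
        pattern.append(row)
    return pattern

cfa = bayer_pattern(4, 4)
greens = sum(row.count('G') for row in cfa)     # 8 of the 16 sites are green
```

In a 4x4 patch there are 8 green, 4 red, and 4 blue filters, matching the eye's greater sensitivity to green.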
The advantages of this method are that only one sensor is required, and all the color information (red, green and blue) is recorded at the same moment. That means the camera can be smaller, cheaper, and useful in a wider variety of situations. The raw output from a sensor with a Bayer filter is a mosaic of red, green and blue pixels of different intensity. Digital cameras use specialized demosaicing algorithms to convert this mosaic into an equally sized mosaic of true colors. The key is that each colored pixel can be used more than once. The true color of a single pixel can be determined by averaging the values from the closest surrounding pixels.

Some single-sensor cameras use alternatives to the Bayer filter pattern. X3 technology, for example, embeds red, green and blue photodetectors in silicon. Some of the more advanced cameras subtract values using the typesetting colors cyan, yellow, green and magenta instead of blending red, green and blue. There is even a method that uses two sensors. However, most consumer cameras on the market today use a single sensor with alternating rows of green/red and green/blue filters.
3. Digital Storage and Display in Digital Cameras

Most digital cameras have an LCD screen, so you can view your picture right away. This is one of the great advantages of a digital camera: you get immediate feedback on what you capture. Of course, viewing the image on your camera would lose its charm if that's all you could do. You want to be able to load the picture into your computer or send it directly to a printer. There are several ways to do this.

Early generations of digital cameras had fixed storage inside the camera. You needed to connect the camera directly to a computer with cables to transfer the images. Although most of today's cameras are capable of connecting through serial, parallel, SCSI, USB or FireWire connections, they usually also use some sort of removable storage device.

[Photos: a CompactFlash card and a Memory Stick, courtesy HSW Shopper]
Digital cameras use a number of storage systems. These are like reusable digital film, and they use a caddy or card reader to transfer the data to a computer. Many involve fixed or removable flash memory. Digital camera manufacturers often develop their own proprietary flash memory devices, including SmartMedia cards, CompactFlash cards, Memory Sticks and SD cards. Some other removable storage devices include floppy disks, hard disks (microdrives), and writeable CDs and DVDs.

No matter what type of storage they use, all digital cameras need lots of room for pictures. They usually store images in one of two formats: TIFF (Tagged Image File Format), which is uncompressed, and JPEG (Joint Photographic Experts Group), which is compressed. Most cameras use the JPEG file format for storing pictures, and they sometimes offer quality settings (such as medium or high). The following chart will give you an idea of the file sizes you might expect with different picture sizes.
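For uncompressed storage the size arithmetic is straightforward: at 3 bytes per pixel (one byte each for red, green, and blue, a common baseline, though TIFF supports other layouts), file size grows linearly with pixel count. This sketch uses the 1600x1200 resolution discussed earlier:

```python
# Rough file-size arithmetic for an uncompressed RGB image,
# assuming 3 bytes per pixel (one byte per color channel).
def uncompressed_bytes(width, height, bytes_per_pixel=3):
    return width * height * bytes_per_pixel

size = uncompressed_bytes(1600, 1200)    # bytes for a 1600x1200 image
megabytes = size / (1024 * 1024)         # roughly 5.5 MB per picture
```

A few dozen such images would fill a small memory card, which is why compressed JPEG is the default format on most cameras.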
To make the most of their storage space, almost all digital cameras use some sort of data compression to make the files smaller. Two features of digital images make compression possible. One is repetition. The other is irrelevancy.

Imagine that throughout a given photo, certain patterns develop in the colors. For example, if a blue sky takes up 30 percent of the photograph, you can be certain that some shades of blue are going to be repeated over and over again. When compression routines take advantage of patterns that repeat, there is no loss of information and the image can be reconstructed exactly as it was recorded. Unfortunately, this doesn't reduce files any more than 50 percent, and sometimes it doesn't even come close to that level.

Irrelevancy is a trickier issue. A digital camera records more information than the human eye can easily detect. Some compression routines take advantage of this fact to throw away some of the more meaningless data.
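Run-length encoding is the simplest example of compressing by repetition. This sketch (the pixel values are hypothetical; real cameras use more elaborate schemes inside JPEG) collapses identical runs into (value, count) pairs and decodes them back exactly, so no information is lost:

```python
# Lossless compression by repetition: run-length encoding of a row of pixels.
def rle_encode(pixels):
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1         # extend the current run
        else:
            runs.append([p, 1])      # start a new run
    return runs

def rle_decode(runs):
    out = []
    for value, count in runs:
        out.extend([value] * count)  # expand each run back to pixels
    return out

# A sky row: long runs of blue around a small white cloud.
sky_row = ['blue'] * 30 + ['white'] * 4 + ['blue'] * 16
encoded = rle_encode(sky_row)        # 3 runs instead of 50 pixel values
restored = rle_decode(encoded)       # byte-for-byte identical to the input
```

Because decoding reproduces the input exactly, this is lossless; discarding "irrelevant" detail, by contrast, is what lossy schemes like JPEG add on top.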